# Tender Annex
## Audit-ready technical evaluation matrix for smart-city meteorological sensor architectures used in digital-twin and CFD networks

**Prepared for:** technical procurement / smart-city program review  
**Purpose:** architecture selection, tender drafting, and pre-award technical challenge  
**Document type:** audit-ready annex with evidence trail

## 1. What procurement teams should understand before comparing devices

**Compact AIO is not a necessity for smart cities.** Once the external logger, hub, solar panel, battery management, heaters, and extra rain/solar modules are counted, many “AIO” offerings are functionally AWS-style systems with compact sensor heads. The procurement mistake is to compare brochure-level integration while ignoring the installed system, the siting constraints, and the service architecture. [R1][R2][R11][R16][R17][R21]

Five procurement principles should be accepted before any ranking is attempted:

1. **Urban wind quality is dominated first by exposure and geometry, then by electronics.** FHWA states that if an anemometer is mounted on a light standard or utility pole, it should be mounted on top of the pole to reduce airflow disturbance. Urban meteorological guidance warns that wind observations in cities are highly sensitive to exposure and may need to be separated from the other measurement systems. [R1][R2]

2. **Reference grass-field validation does not automatically transfer to cities.** WMO guidance is built around carefully defined exposure, including shielding from direct and multiply reflected radiation. In reflective urban settings, and also over snow/high-albedo surfaces, passive or weakly ventilated temperature measurements can deviate materially from aspirated references. [R3][R4][R5]

3. **A photo can reveal risk, but the score must still come from physics and evidence.** Official product imagery and manuals show that many compact AIOs place bright lower plates, white lower shield surfaces, or bright bodies close to the sensing volume. That is a valid risk cue. The tender score in this annex is not assigned from the photo alone; it is assigned from the documented shield/aspiration architecture and the independent reflective-façade and snow-albedo literature. [R4][R5][R11][R13][R14][R16][R17][R19]

4. **Non-catching precipitation must not be treated as equivalent to a catching rain gauge in windy urban truth applications.** Recent CFD-based and instrument-class literature shows that body geometry, wind speed, and wind direction can create large bias in non-catching precipitation sensors. This is enough to downgrade compact electroacoustic, optical, piezoelectric, and radar-rain channels for primary truth use unless strong independent windy-condition evidence is provided. [R6][R7][R12][R17]

5. **Installation and service complexity belong in the score.** If a “simple” sensor head still needs an external logger, powered ventilation, a hub, a pole cabinet, a separate solar panel, heater management, or extra rain/solar add-ons, the installed system behaves much more like an AWS than like a lightweight IoT node. The tender should score the full installed system, not the head alone. [R11][R14][R16][R17][R19][R21]

## 2. Scope and scoring method

This annex evaluates **system architectures**, not just sensor heads, because that is the only fair way to compare modular stacks against compact all-in-one stations. Missing measurements score **zero** in the main table because the procurer would still need to buy and install another instrument to complete the weather node.

### 2.1 Use-case assumed by this annex
The scoring assumes:
- dense distributed deployment on or near urban infrastructure,
- wind data is materially important for CFD / digital-twin forcing or calibration,
- data quality must hold up in reflective city conditions, not only in short-grass reference conditions,
- power, autonomy, and field service matter because network size is large.

### 2.2 Weighted criteria

| Criterion | Weight |
|---|---:|
| Wind data quality and siting suitability | 30 |
| Air temperature quality | 10 |
| Relative humidity quality | 8 |
| Pressure quality | 4 |
| Rain quality | 10 |
| Solar irradiance quality | 8 |
| Power / autonomy | 8 |
| Installation / service / total-system cost complexity | 12 |
| Data transparency / diagnostics / maintainability | 10 |
| **Total** | **100** |
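The weighting scheme above can be made mechanical. The sketch below (illustrative only, not part of any tender tooling; the helper names are hypothetical) shows how the Section 4 matrix totals are formed: each criterion score is capped at its weight, and the row total is a simple sum out of 100.

```python
# Illustrative scoring sketch. Criterion weights and the example row are
# taken from this annex; function and variable names are hypothetical.

WEIGHTS = {
    "wind": 30, "temp": 10, "rh": 8, "pressure": 4, "rain": 10,
    "solar": 8, "power": 8, "install": 12, "data": 10,
}  # sums to 100

def total_score(scores: dict) -> int:
    """Sum per-criterion scores after checking none exceeds its weight."""
    for k, v in scores.items():
        if not 0 <= v <= WEIGHTS[k]:
            raise ValueError(f"{k}: score {v} outside 0..{WEIGHTS[k]}")
    return sum(scores.values())

# BARANI modular stack row from the Section 4 matrix:
barani = {"wind": 27, "temp": 8, "rh": 7, "pressure": 4, "rain": 9,
          "solar": 5, "power": 8, "install": 9, "data": 6}
print(total_score(barani))  # 83
```

Because missing measurements score zero (Section 4 scoring rule), a base unit without rain or solar simply carries `"rain": 0` or `"solar": 0` in its row.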

### 2.3 How to read the scores
- A difference of **1–3 points** between neighboring solutions should be treated as close until a field trial breaks the tie.
- The scores are **procurement judgments**, not vendor claims.
- The annex intentionally scores **city-CFD suitability**, not only reference-station suitability.

## 3. Device-by-device audit notes

### 3.1 BARANI modular stack
**Configuration reviewed:** MeteoWind IoT Pro + MeteoHelix IoT Pro + MeteoRain 200

BARANI ranks first because it separates the variable that is hardest to keep honest in a compact station: **wind**. MeteoWind IoT Pro is a dedicated autonomous wind node; the current datasheet states **4 Hz wind-speed/gust sensing** and **1 Hz direction**, with solar charging and multi-month battery autonomy. MeteoHelix IoT Pro places temperature, humidity, pressure, and solar in a separate body, and MeteoRain 200 is a dedicated **200 cm² catching gauge** with vendor-published field-performance claims. That architecture matches the exposure logic in FHWA, urban meteorological guidance, and WMO better than a compact AIO in which wind lives under a rain funnel or beside a thermally loaded body. [R1][R2][R8][R9][R10]

The score is not perfect because the strongest evidence for BARANI’s reflective-city thermal performance is still mostly vendor evidence rather than independent third-party urban façade studies. The architecture is still superior for this tender because it allows wind to be mounted where wind should be measured, while T/RH/P/radiation and precipitation can be sited independently. The procurement caution is the **data path**: internal cadence is not automatically equal to delivered cadence, so the tender must explicitly require raw, sub-minute, or aggregated outputs as needed. [R8][R9][R10]

### 3.2 Gill modular AWS-like architecture
**Configuration reviewed:** MetConnect THP + remote wind + external pyranometer + catching rain gauge

This is the best **measurement architecture** in the review if power and installation complexity are ignored. Gill’s MetConnect documentation explicitly supports separate placement of wind, T/H/P, and additional sensors, including user-selected rain and solar sensors. That is exactly what urban siting guidance wants: wind can be mounted high and clean; thermal measurements can be placed where screen-level representativeness is better; and rain/solar can be treated as their own measurement problems. [R1][R2][R11]

It does not win the tender because the installed system is AWS-like. Once remote wind, external pyranometer, catching rain gauge, power/comms cabling, and field hardware are added, this is not a lightweight smart-city node. It is a traditional modular station with good physics but higher installation and service burden. The score is therefore very high for wind and solar, but very low for power/autonomy and installation/service complexity. [R11]

### 3.3 METER ATMOS 41W
ATMOS 41W is the strongest **autonomous compact AIO** in the review, but it drops sharply once the city-specific thermal penalty is applied. The METER manual states that wind is measured in the space underneath the rain gauge using ultrasonic reflections from a lower convex surface. The same manual states that the temperature sensor is **not** protected by a traditional louvered radiation shield and that the final air temperature is an **energy-balance correction** using solar loading and convective cooling. METER’s own documentation reports good comparison against an aspirated reference in its validation context, but this annex did not find equivalent evidence that the correction fully transfers to reflective urban canyons, bright pavements, snow/ice fields, or other harsh radiative environments. [R3][R4][R5][R13][R22]

ATMOS 41W still scores well on autonomy and diagnostics because it is genuinely wireless and solar-powered and because the documentation is unusually transparent about the measurement chain, splash/icing limits, and data intervals. But this annex treats its T/RH performance in cities as **unproven**, not as automatically reference-like. [R13][R22]

### 3.4 Gill MaxiMet GMX551 + supplied Kalyx bucket
Gill MaxiMet moves upward once the review distinguishes it from lower-grade compact stations. The current manual states that GMX501/551 use a **Hukseflux LP02 thermopile pyranometer** rather than a compact silicon-cell solar sensor, and that GMX550/551 are supplied with a **traditional Kalyx tipping bucket** for rainfall rather than relying only on non-catching precipitation. That materially improves both the solar and rain sections of the score. [R12]

The remaining penalties are twofold. First, the thermal and wind bodies are still compact and co-located, so the station does not fully escape the AIO architectural compromise. Second, it is not a low-power autonomous city node: current draw is much higher than the ultra-low-power class, and Gill’s power-saving modes disable or limit wind-averaging behavior that is important for meteorological and CFD use. That places GMX551 between classic AWS architecture and compact AIO convenience. [R12]

### 3.5 OTT/Lufft WS700/WS800 family
OTT/Lufft’s WS family scores better than most compact AIOs on temperature and humidity because the family offers **aspirated / ventilated radiation protection**, which is exactly what reflective-city physics would suggest. It also offers stronger solar options than the lowest-end silicon AIO class, and the wind channel is internally measured at high rate. [R3][R17][R18]

The problem is that these benefits are **power conditional**. The WS manuals state that in power-saving modes the ventilation of the temperature/humidity unit is switched off, heaters are switched off, and radar precipitation sensing is no longer continuous. In practice that means the attractive thermal architecture is only fully realized when the station is fed like an AWS, not when it is pushed into a low-power node regime. Rain also remains mixed because radar precipitation in the family carries looser accuracy expectations than a conventional catching gauge. [R17][R18]

### 3.6 Campbell ClimaVue 50 G2 + host/logger
Campbell is unusually transparent about ClimaVue’s compromises, which helps it in an audit. The manual states that wind is measured underneath the rain gauge using acoustic reflection from a plate, that the thermistor sits in the center of the anemometer **without** solar radiation shielding, that RH depends on the corrected temperature path, and that the solar channel is a **silicon-cell pyranometer** integrated into the rain-funnel lip. Campbell’s temperature-correction paper also reports errors of up to about **3 °C** versus an aspirated reference before correction is applied. [R14][R15]

Those disclosures are exactly why the score lands in the middle rather than at the bottom. Campbell documents the system well, including vector-based wind treatment and practical polling constraints, but the architecture remains a compact, model-corrected AIO that still needs a host/logger. It is therefore a defensible instrument for many operational purposes, but not the preferred truth layer for urban CFD. [R14][R15]

### 3.7 Vaisala WXT536 base
The WXT536 stays in the review because it is a mature compact transmitter and some buyers will still shortlist it. Vaisala documents **1, 2, or 4 Hz** wind sampling, a short response time, pressure/T/RH/wind/rain coverage, and the possibility of external analog inputs. The same manual also documents two important limits for this tender: wind averages are **scalar averages**, and the naturally aspirated radiation shield can affect readings in calm wind. Vaisala also warns that the transmitter’s own shield can reflect enough light to disturb nearby pyranometers or temperature sensors, recommending separation when a pyranometer is added. [R16]

That mix keeps WXT536 competitive as a compact transmitter but not as a city-truth winner. In its base form it has **no solar measurement**, and once external solar or other modules are added it becomes another AWS-like expanded system. The compact electroacoustic rain channel also remains less trustworthy in windy urban use than a dedicated catching gauge. [R6][R7][R16]

### 3.8 R.M. Young ResponseONE-Pro base
ResponseONE-Pro is better than many compact stations on the **wind data path**. The current manual states that the sonic subsystem internally samples wind at more than **200 samples per second** and can output either polar or Cartesian wind formats. It also provides terminals for an optional tipping-bucket rain gauge. That makes it a serious compact instrument, not a lightweight consumer-style AIO. [R19]

It still scores low in the full-system matrix because the base unit lacks integrated rain and solar, and because once those are added it becomes another externally instrumented AWS-like station. The R.M. Young blog on all-in-one stations is worth noting because it openly lists the same trade-offs this annex scores: compromises in performance, power, environmental limitations, and replacement burden. [R19][R20]

### 3.9 Gill MetConnect One base
MetConnect One is Gill’s compact AIO family member. It offers good output flexibility and stronger data-path transparency than many peers, and it can integrate additional sensors. But the base unit lacks rain and solar, so in a full-system procurement table those channels score zero unless the station is expanded. [R11]

Once expanded, MetConnect One inherits the same AWS-like complexity that affects the larger Gill modular architecture: more sensors, more wiring, more brackets, and more field service burden. Its value is therefore as a flexible, well-documented platform rather than as the simplest smart-city node. [R11]

### 3.10 Milesight WTS506
Milesight remains in the annex because it illustrates one of the most persistent smart-city misconceptions: that a compact-looking weather station is automatically simpler and cheaper to install. The current documentation states that WTS506 consists of **three main parts** - the sensor, the hub, and the solar panel - and the user guide says the system is **not intended to be used as a reference sensor**. [R21]

Those two facts are enough to keep it out of the primary truth layer. WTS506 can still be useful for operational awareness, alarms, or lower-stakes monitoring, but this annex does not support its use as a primary CFD-calibration weather node. [R21]

## 4. Full-system tender matrix

**Scoring rule:** missing measurements score zero in this table because the procurer would still need to add another instrument. Short names are used here; the full configurations are defined in Section 3.

| System architecture | Wind /30 | T /10 | RH /8 | P /4 | Rain /10 | Sol /8 | Pow /8 | Inst /12 | Data /10 | **Total /100** |
|---|---:|---:|---:|---:|---:|---:|---:|---:|---:|---:|
| BARANI modular stack | 27 | 8 | 7 | 4 | 9 | 5 | 8 | 9 | 6 | **83** |
| Gill modular AWS-like | 28 | 6 | 5 | 4 | 8 | 8 | 1 | 1 | 9 | **70** |
| METER ATMOS 41W | 17 | 3 | 3 | 3 | 5 | 5 | 7 | 8 | 8 | **59** |
| Gill GMX551 + Kalyx bucket | 18 | 4 | 4 | 4 | 7 | 8 | 3 | 3 | 7 | **58** |
| OTT/Lufft WS700/800 family | 17 | 6 | 5 | 4 | 4 | 4 | 2 | 2 | 8 | **52** |
| Campbell ClimaVue 50 G2 + logger | 16 | 3 | 3 | 3 | 6 | 4 | 4 | 3 | 8 | **50** |
| Vaisala WXT536 base | 18 | 4 | 4 | 4 | 4 | 0 | 2 | 3 | 7 | **46** |
| R.M. Young ResponseONE-Pro base | 19 | 4 | 4 | 4 | 0 | 0 | 3 | 3 | 8 | **45** |
| Gill MetConnect One base | 18 | 4 | 4 | 4 | 0 | 0 | 3 | 3 | 9 | **45** |
| Milesight WTS506 | 10 | 4 | 3 | 3 | 3 | 0 | 5 | 3 | 3 | **34** |

### Procurement reading of the table
- **BARANI modular stack** ranks first because it is the best fit to a dense, low-power city network where wind must remain defensible for CFD use. [R1][R2][R8][R9][R10]
- **Gill modular AWS-like** ranks second because it is arguably the strongest pure measurement architecture, but its installed complexity is much closer to an AWS than to an autonomous IoT node. [R11]
- **METER ATMOS 41W** is the best compact autonomous AIO, but the city-transfer risk of its temperature/RH correction keeps it below modular architectures. [R4][R5][R13][R22]
- **Gill GMX551 + Kalyx bucket** is the strongest compact AIO-like solution in this review for solar + rain, but it is not a true low-power autonomous node. [R12]
- **Vaisala WXT536, ResponseONE-Pro base, MetConnect One base, and Milesight WTS506** remain relevant comparators, but each is limited either by missing channels, compact-architecture trade-offs, or full-system complexity. [R16][R19][R21]

## 5. Why the scores look this way

### 5.1 Wind
Wind carries the highest weight because that is the variable most likely to materially affect digital-twin / CFD decisions. The scoring prioritizes **siting freedom** and **geometry cleanliness** over brochure precision. A clean dedicated wind node or a modular architecture that allows wind to be sited independently scores higher than a compact AIO with wind under a rain gauge or close to other bodies. Independent urban guidance and FHWA both support this logic. The NOAA PMEL note on WXT520 adds a cautionary example showing that compact all-in-one geometry can create sector-dependent wind behavior that is not captured by one headline accuracy number. [R1][R2][R7]
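The vector-versus-scalar distinction raised here (and again in red line 2 of Section 7) is easy to demonstrate. The sketch below is a minimal illustration, not any vendor's algorithm: for a gusty flow oscillating around north, a scalar speed average keeps full magnitude while the vector mean shows the reduced net transport a CFD forcing actually sees, and a naive arithmetic mean of directions near the 0°/360° wrap would be wildly wrong.

```python
import math

def scalar_mean_speed(speeds):
    """Scalar average: magnitude only, ignores direction cancellation."""
    return sum(speeds) / len(speeds)

def vector_mean(speeds, dirs_deg):
    """Vector average using the meteorological convention
    (direction = degrees the wind comes FROM)."""
    u = [-s * math.sin(math.radians(d)) for s, d in zip(speeds, dirs_deg)]
    v = [-s * math.cos(math.radians(d)) for s, d in zip(speeds, dirs_deg)]
    ub, vb = sum(u) / len(u), sum(v) / len(v)
    spd = math.hypot(ub, vb)
    ddeg = math.degrees(math.atan2(-ub, -vb)) % 360
    return spd, ddeg

# Flow oscillating between 350° and 10° at a steady 4 m/s:
speeds = [4, 4, 4, 4]
dirs = [350, 10, 350, 10]
print(scalar_mean_speed(speeds))   # 4.0
print(vector_mean(speeds, dirs))   # vector speed < 4.0, direction ~0° (north)
```

A naive `sum(dirs)/len(dirs)` here gives 180°, i.e. due south: exactly the kind of opaque on-device averaging the tender red lines require vendors to disclose.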

### 5.2 Air temperature and relative humidity
The temperature/RH scores in this annex are intentionally harsh on reflective-city risk. WMO notes that temperature shielding must reject direct and reflected radiation and that aspirated shields are preferred for high-quality work. The independent façade and snow-albedo studies show that radiative environment can change errors by degrees Celsius even when the sensing element itself is unchanged. This is why compact passive or model-corrected AIOs are downgraded, especially where the vendor validation is tied to benign reference-like exposures rather than reflective urban canyons. RH is coupled to the same issue because many compact stations compute humidity using the corrected air-temperature path. [R3][R4][R5][R13][R14][R15][R16]
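To make the model-correction risk concrete, here is a deliberately simplified sketch of the *kind* of radiation-error correction Section 5.2 describes. The functional form and both coefficients are hypothetical, chosen only so that the calm-sun error lands near the roughly 3 °C figure cited from Campbell's paper; this is NOT METER's or Campbell's published model.

```python
# Hypothetical illustration of an energy-balance style correction:
# subtract a solar-loading term that shrinks as convective cooling (wind)
# increases. k and u0 are made-up illustrative constants.

def corrected_air_temp(t_sensor_c, solar_wm2, wind_ms, k=0.0015, u0=0.5):
    """Illustrative corrected air temperature (deg C).
    k: error per W/m^2 at 1 m/s; u0: wind-speed floor to avoid blow-up."""
    radiation_error = k * solar_wm2 / max(wind_ms, u0)
    return t_sensor_c - radiation_error

# Bright, calm conditions: the modeled correction is largest exactly where
# reflective-city validation evidence is thinnest.
print(round(corrected_air_temp(28.0, solar_wm2=900, wind_ms=0.5), 2))  # 25.3
print(round(corrected_air_temp(28.0, solar_wm2=900, wind_ms=5.0), 2))  # 27.73
```

The point of the sketch is the dependency structure, not the numbers: any such correction leans on wind and solar inputs, so reflected radiation that the model never saw in validation (bright façades, snow) shifts the output by whole degrees.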

### 5.3 Pressure
Pressure does not drive the award. Most of the compared instruments provide acceptable barometric performance for this use case, so pressure receives a low weight. [R8][R9][R13][R14][R16][R17][R19][R21]

### 5.4 Rain
Rain is scored very differently from how many vendor brochures imply. Compact **non-catching** rain is convenient, but recent literature on non-catching precipitation classes shows that wind and body geometry can produce large bias. This annex therefore favors dedicated catching gauges, supplied tipping buckets, or clearly separable rain architectures over compact piezoelectric, optical, electroacoustic, or radar-only rain channels when the application is truth data rather than convenience. [R6][R7][R10][R12][R17]
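Wind-induced bias is commonly expressed as a catch ratio, the fraction of true precipitation a gauge records at a given wind speed. The sketch below uses a generic transfer-function shape with a made-up coefficient, purely to show why the tender demands windy-condition evidence per sensor class rather than a single accuracy number; it is not taken from any cited study or vendor.

```python
# Illustrative catch-ratio model: coefficient b is hypothetical, NOT from
# the cited literature. Real transfer functions are class- and
# geometry-specific, which is exactly the procurement point.

def catch_ratio(wind_ms, b=0.05):
    """Fraction of true precipitation caught at a given wind speed."""
    return 1.0 / (1.0 + b * wind_ms)

def adjusted_precip(measured_mm, wind_ms, b=0.05):
    """Back out an estimate of true precipitation from a measured total."""
    return measured_mm / catch_ratio(wind_ms, b)

# At 8 m/s this illustrative gauge catches ~71% of true rainfall:
print(round(catch_ratio(8.0), 3))           # 0.714
print(round(adjusted_precip(5.0, 8.0), 2))  # 7.0
```

For non-catching sensors the bias additionally depends on wind *direction* relative to the body, so a single scalar `b` understates the problem; that directional dependence is what the CFD-based literature cited above quantifies.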

### 5.5 Solar irradiance
Solar irradiance is included explicitly because the radiation channel affects both data products and temperature correction pathways. The review distinguishes **thermopile** pyranometers from compact **silicon-cell** implementations. Gill’s GMX501/551 receives a strong solar score because the current manual states it uses a **Hukseflux LP02 second-class thermopile pyranometer**. Campbell and METER document silicon-cell pyranometers, which are useful but not equivalent to a thermopile-class solar channel in an audit-sensitive station. [R12][R13][R14]

### 5.6 Power and autonomy
This category rewards genuinely autonomous node behavior, not just low head-unit current. BARANI MeteoWind/MeteoHelix and METER ATMOS 41W perform well here because they integrate solar charging, battery operation, and telemetry in a fieldable node. Systems that need an external logger, hub, pole cabinet, or heavier power subsystem lose points even when the head unit itself is marketed as compact. [R8][R9][R21][R22]

### 5.7 Installation, service, and total-system cost complexity
This score is architecture-based rather than price-list-based. The annex intentionally does **not** use promotional field-service anecdotes as primary evidence. Instead it scores whether the system needs extra solar panels, hubs, data loggers, powered ventilation, extra rain/solar modules, heater wiring, or more elaborate mounting. That is the most robust predictor of real urban installation mandays, service truck rolls, and hidden integration cost. [R11][R14][R16][R17][R19][R21]

### 5.8 Data transparency, diagnostics, and maintainability
Well-documented data paths, explicit output modes, vector/scalar disclosure, serviceable modules, and status outputs are rewarded. Vendor opacity about on-device filtering, gust calculation, or missing-data behavior is penalized. Gill, Campbell, METER, OTT/Lufft, and R.M. Young generally score well here because their manuals are technically explicit even where the architecture is not ideal. [R11][R12][R13][R14][R17][R19]

## 6. Partial-coverage supplementary devices (not scored in the full-system matrix)

These devices are worth keeping in a smart-city reference guide even though they are not full replacements for a complete weather node.

| Device | Coverage | Why it remains relevant | Evidence |
|---|---|---|---|
| BARANI MeteoWind IoT Pro | Wind only | Autonomous low-power wind node; datasheet states 4 Hz speed/gust, 1 Hz direction, solar charging and multi-month battery autonomy. | [R8] |
| METER ATMOS 22 | Wind only | Very low-power sonic wind sensor; product page states less than 100 microamp running current. Strong supplementary node candidate where only wind is required. | [R25] |
| Calypso ULP STD / ULP Pro | Wind only | Official product information states under 0.25 mA at 1 Hz for RS485 and under 0.15 mA at 1 Hz for UART; output-rate options extend higher, but the ultra-low-power claim is documented at 1 Hz. | [R23] |
| LCJ SONIC-ANEMO-DLG-P / DZP | Wind only | LCJ catalog states approximately 200 microamp at 1 Hz for the integrated-datalogger ULP variant and a self-powered photovoltaic/battery variant for direct pulse/potentiometer integration. | [R24] |
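The quoted current draws translate directly into field autonomy. The back-of-envelope sketch below uses the draw figures from the table above; the 3000 mAh battery capacity and the flat-draw assumption (no telemetry bursts, no temperature derating, no self-discharge) are illustrative simplifications, so real autonomy will be shorter.

```python
# Illustrative autonomy estimate. Current draws are the figures quoted in
# this annex; battery capacity and constant-draw assumption are hypothetical.

def autonomy_days(battery_mah: float, draw_ma: float) -> float:
    """Days of operation for a constant current draw from a given capacity."""
    return battery_mah / draw_ma / 24.0

for name, draw_ma in [("METER ATMOS 22 (<100 uA)", 0.100),
                      ("Calypso ULP STD (RS485, 1 Hz)", 0.250),
                      ("LCJ ULP datalogger variant (~200 uA)", 0.200)]:
    print(f"{name}: ~{autonomy_days(3000, draw_ma):.0f} days on 3000 mAh")
```

Even with generous real-world derating, sub-milliamp sensing leaves the radio, not the sensor, as the dominant power consumer, which is why Section 5.6 rewards integrated solar charging and telemetry rather than head-unit current alone.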


## 7. Tender red lines and mandatory evidence

The tender should require vendors to disclose and, where possible, field-prove the following:

1. **Internal measurement cadence, delivered report cadence, and retained statistics.** Internal high-rate sensing is not enough if the delivered payload is only coarse aggregates. [R8][R9][R13][R21]
2. **Vector versus scalar wind handling, gust algorithm, and spike rejection.** Scalar wind averaging or opaque filtering is not acceptable for a CFD-facing truth layer. [R13][R14][R16][R19]
3. **Temperature/RH measurement chain.** Vendors must state whether the value is directly measured in a shielded volume or corrected by an energy-balance model using wind and solar inputs. [R13][R14][R15]
4. **Rain measurement principle and windy-condition evidence.** Catching gauges, tipping buckets, optical rain, piezoelectric rain, radar rain, and electroacoustic rain are not equivalent and must not be treated as such. [R6][R7][R12][R17]
5. **Solar sensor class and optical design.** Silicon-cell and thermopile pyranometers should not be pooled into one generic “solar” category. [R12][R13][R14][R17]
6. **Full installed bill of materials.** Vendors must declare every external hub, logger, charger, solar panel, cabinet, heater feed, cable, bracket, and add-on sensor required to reach the quoted functionality. [R11][R14][R16][R17][R19][R21]
7. **Site-specific field trial.** Award should be conditional on a side-by-side field trial on the target urban mounting geometry, including at least one better-exposed wind reference and one aspirated T/RH reference. [R1][R2][R3][R4][R5]
8. **Sector response and obstruction metadata.** For wind, the vendor must disclose installation geometry limits and the procurer must retain metadata on height, offsets, boom lengths, obstruction sectors, and maintenance state. [R1][R2][R7]
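Red line 1 is auditable mechanically during a field trial: regardless of the internal sensing rate a vendor claims, the timestamps in the delivered payload reveal the real cadence. A minimal acceptance check might look like the sketch below; the 60 s requirement, the tolerance, and the timestamps are illustrative.

```python
# Sketch of a delivered-cadence acceptance check (red line 1). The required
# interval and tolerance are illustrative tender parameters.

from datetime import datetime, timedelta

def delivered_cadence_ok(timestamps, required_s=60, tolerance_s=5):
    """True if every gap between consecutive delivered reports is within
    the required interval plus tolerance."""
    gaps = [(b - a).total_seconds() for a, b in zip(timestamps, timestamps[1:])]
    return all(g <= required_s + tolerance_s for g in gaps)

t0 = datetime(2025, 1, 1)
ten_min_reports = [t0 + timedelta(minutes=10 * i) for i in range(6)]
one_min_reports = [t0 + timedelta(seconds=60 * i) for i in range(60)]
print(delivered_cadence_ok(ten_min_reports))  # False: 4 Hz sensing, 10 min delivery
print(delivered_cadence_ok(one_min_reports))  # True
```

The same trial log should also be checked for gust and vector-average fields in each report, since those statistics cannot be reconstructed afterward from coarse scalar aggregates.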

## 8. Recommended procurement position

For a smart-city network whose wind data materially affects digital-twin or CFD decisions, the most defensible procurement position is:

- use a **modular wind-first architecture** for the primary truth layer,
- allow compact AIOs only as secondary / screened options unless they pass a strong field trial,
- treat full installed complexity, not head-unit appearance, as the real cost driver,
- and separate convenience monitoring from primary truth generation.

Under that logic, this annex recommends:
1. **Primary choice for dense autonomous city deployment:** BARANI modular stack. [R8][R9][R10]
2. **Best pure measurement architecture if AWS-like complexity is acceptable:** Gill modular AWS-like. [R11]
3. **Best compact autonomous AIO:** METER ATMOS 41W, with a clear reflective-city T/RH caveat. [R13][R22]
4. **Best compact solar/rain package among the reviewed AIO-like devices:** Gill GMX551 configured with the supplied Kalyx bucket, but not as an ultra-low-power node. [R12]
5. **Devices that should remain in the comparison but not lead the award:** OTT/Lufft WS family, Campbell ClimaVue 50 G2, Vaisala WXT536, R.M. Young ResponseONE-Pro base, Gill MetConnect One base, and Milesight WTS506. [R14][R16][R17][R19][R21]

## 9. References

- **[R1]** FHWA, Environmental Sensor Station Siting Guide (2005) - siting on light standards/poles and differing exposure needs for atmospheric sensors. URL: https://rosap.ntl.bts.gov/view/dot/64384/dot_64384_DS1.pdf
- **[R2]** Oke, T.R. (WMO/urban guidance), Initial Guidance to Obtain Representative Meteorological Observations at Urban Sites (2006/2007). URL: https://urban-climate.org/wp-content/uploads/2023/10/Oke_2006_IOM-81-UrbanMetObs.pdf
- **[R3]** WMO No. 8, Guide to Instruments and Methods of Observation, 2023 edition - radiation shielding, aspiration, and multiply reflected radiation. URL: https://www.seedmech.com/documents_folder/wmo_no_8.pdf
- **[R4]** Teichmann et al., Sensors (2024), façade-mounted urban temperature measurements with ventilated and non-ventilated shields. URL: https://repositum.tuwien.at/bitstream/20.500.12708/195300/1/Teichmann-2024-Sensors-vor.pdf
- **[R5]** Nitu et al., Atmospheric Measurement Techniques (2021), snow/albedo effects on air-temperature measurement errors. URL: https://amt.copernicus.org/articles/14/749/2021/amt-14-749-2021.pdf
- **[R6]** Chinchella, Cauteruccio & Lanza, Sensors (2025), wind impact on OTT Parsivel 2 non-catching precipitation measurements. URL: https://pmc.ncbi.nlm.nih.gov/articles/PMC12568173/
- **[R7]** NOAA PMEL Ocean Climate Stations Technical Note 5 (2012), Wind Speed Variability of Vaisala WXT520. URL: https://www.pmel.noaa.gov/ocs/pubs-ocs/technotes/OCS_TN5_Wind_Speed_Variability_of_VaisalaWXT520.pdf
- **[R8]** BARANI MeteoWind IoT Pro datasheet (2024). URL: https://static1.squarespace.com/static/597dc443914e6bed5fd30dcc/t/656d99b3f75edf6d56b700d6/1701681727111/MeteoWind%2BIoT%2BPro%2BDataSheet.pdf
- **[R9]** BARANI MeteoHelix IoT Pro datasheet (2024). URL: https://static1.squarespace.com/static/597dc443914e6bed5fd30dcc/t/656d99b8f279294f7ab2db3a/1701681664838/MeteoHelix%2BIoT%2BPro%2BDataSheet.pdf
- **[R10]** BARANI MeteoRain 200 Compact datasheet / product information. URL: https://barani.squarespace.com/s/MeteoRain-200-Compact-DataSheet.pdf
- **[R11]** Gill MetConnect Weather Stations manual. URL: https://hansbuch.dk/wp-content/uploads/2024/09/meteorology-gill-wheaterstations-metconnect-manual_hansbuch.pdf
- **[R12]** Gill MaxiMet manual (Issue 11). URL: https://gillinstruments.com/wp-content/uploads/2025/11/MaxiMet-Manual-Issue11.pdf
- **[R13]** METER ATMOS 41 Gen 2 / ATMOS 41W manual. URL: https://www.ekotechnika.cz/sites/default/files/pdf/20937_atmos41_gen2_manual_web.pdf
- **[R14]** Campbell Scientific ClimaVue 50 G2 manual. URL: https://s.campbellsci.com/documents/us/manuals/climavue50g2.pdf
- **[R15]** Campbell Scientific technical paper, ClimaVue 50 temperature correction. URL: https://s.campbellsci.com/documents/us/technical-papers/climavue50_temperature_correction.pdf
- **[R16]** Vaisala WXT530 Series User Guide / WXT536 information. URL: https://www.iag.co.at/fileadmin/user_upload/user_guide_weather_transmitter_wxt530.pdf
- **[R17]** OTT/Lufft WS Series compact weather sensor manual / leaflet. URL: https://www.fondriest.com/pdf/lufft_ws_manual.pdf
- **[R18]** Lufft WS800 product page - aspirated radiation shield and low-power/heater positioning. URL: https://www.lufft.com/products/compact-weather-sensors-293/ws800-umb-smart-weather-sensor-1790/
- **[R19]** R.M. Young ResponseONE-Pro (93000) instruction manual. URL: https://www.youngusa.com/wp-content/uploads/2025/07/93000-90A.pdf
- **[R20]** R.M. Young, '11 Advantages & Disadvantages of All-In-One Weather Stations' (manufacturer discussion of architectural trade-offs). URL: https://www.youngusa.com/2024/12/11-advantages-disadvantages-of-all-in-one-weather-stations/
- **[R21]** Milesight WTS506 user guide and datasheet. URL: https://resource.milesight.com/milesight/iot/document/wts506-user-guide-en.pdf
- **[R22]** METER ATMOS 41W product page - wireless solar-powered remote weather station. URL: https://metergroup.com/products/atmos-41-wireless-remote-weather-station/
- **[R23]** Calypso ULP Standard official product page / manual. URL: https://www.calypsoinstruments.com/shop/ultra-low-power-ultrasonic-wind-meter-ulp-standard-6
- **[R24]** LCJ Capteurs terrestrial catalogue (2022). URL: https://lcjcapteurs.com/wp-content/uploads/2022/06/Terrestrial-catalog_LCJ_Capteurs_ANG_062022.pdf
- **[R25]** METER ATMOS 22 product page. URL: https://metergroup.com/products/atmos-22/
