The Monitor My Watershed development team is excited to announce the release of Monitor My Watershed V0.18.
Performance Enhancements
This new version enhances the performance of Monitor My Watershed in several ways.
- CSV data uploads are much faster.
- CSV data downloads are more than four times faster, and users can specify date and time ranges.
- Time Series Visualization in web browsers and data services is significantly faster.
- Multiple time points can be included in a single POST request, allowing data gaps caused by cellular outages at data logger devices to be backfilled efficiently. (This feature is now supported in the Modular Sensors library; a sketch of the idea follows this list.)
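For illustration, backfilling after an outage amounts to buffering timestamped readings on the logger and sending them in one request once the connection returns. Below is a minimal sketch of that idea; the payload shape and the postBatch() helper are hypothetical, since the announcement doesn't specify the actual request schema.

```cpp
// Illustrative sketch of batching several time points into one POST so
// gaps from a cellular outage can be backfilled. The payload shape and
// the postBatch() helper are hypothetical, not the portal's actual API.
#include <iostream>
#include <string>
#include <vector>

struct TimedReading {
    std::string timestamp;  // ISO 8601 time of the buffered reading
    double value;
};

// Hypothetical transport stub; a real logger would POST this body
// over its cellular modem and check the server's ACK.
int postBatch(const std::string& jsonBody) {
    std::cout << jsonBody << '\n';
    return 201;
}

int main() {
    // Readings buffered on the logger while the cellular link was down.
    std::vector<TimedReading> buffered = {
        {"2024-08-06T10:00:00Z", 12.1},
        {"2024-08-06T10:15:00Z", 12.3},
        {"2024-08-06T10:30:00Z", 12.2},
    };

    // Pack all buffered time points into a single request body.
    std::string body = "[";
    for (size_t i = 0; i < buffered.size(); ++i) {
        if (i) body += ",";
        body += "{\"timestamp\":\"" + buffered[i].timestamp +
                "\",\"value\":" + std::to_string(buffered[i].value) + "}";
    }
    body += "]";

    return postBatch(body) == 201 ? 0 : 1;
}
```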
Bugfixes and Upgrades
Some important bugs have been addressed in this release, including mislabeled timestamps, broken site-following functions, and a Firefox login failure.
Other less obvious advancements include software stack upgrades that improved scalability, reliability, and performance.
More details about Release V0.18 are available on Monitor My Watershed’s GitHub repository.

Subscribe to Advance Monitor My Watershed
These advancements were made possible by subscribers. If you have more than one site on Monitor My Watershed and haven’t subscribed, please do so here.
If you are already a subscriber, thank you!
Need Help With Monitor My Watershed?
- Visit the Monitor My Watershed Help page for links to a quick reference guide and in-depth manual, video tutorials, the GitHub issue tracker, and more.
- If you’ve reviewed the help resources and didn’t find an answer, please post your question on the dedicated Monitor My Watershed forum.
neilh20
Thanks for the update and the details. So much work goes into the infrastructure for a remote automated monitoring device, and the software releases are part of it. Thanks so much for detailing it.
One of the fixed areas is downloading a captured record, a pretty critical part of a logger, and that had been failing for me, so I tried it on one of my nodes (LCC45). Fantastic! It now has the new download time window, which defaults to only the last month of readings. I tried two downloads: one with the last month, and the other with the full data set. I defined the full set as starting from 2000/1/1, though it actually starts from 2021/10/23, almost four years of data at 15-minute intervals.
For the 1-month download, it was a snappy 4 seconds, probably even faster in practice. Excellent.
For the full download, it took 31 seconds, with no indication that it was in progress, but still good enough.
The more architectural question is: “Is the logger process reliable? Is the collected data reliably available for download?” 100.00% reliability is likely impractical in engineering terms, and of course very little in the computer world is 100% reliable; practically, some value between 99.xxx% and 99.999% is good enough. I embed a sequence number in each unique reading, starting at 0 after each reset, so I can detect any lost data. My fork of ModularSensors has a feature I added to guarantee a delivery quality of service of “deliver at least once” to the server: ModularSensors retries until it receives a specific ACK of 201. It records every POST attempt and which ACK was processed; if a 201 is received, it marks that record as delivered. This is all standard as part of any OSI 7-layer stack processing, and I’ve tested it, so I’m pretty happy it’s good enough (something like five nines, or 99.999%).
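For readers unfamiliar with the pattern, here is a minimal sketch of “deliver at least once” retry logic. The Reading struct and postReading() stub are hypothetical stand-ins, not the actual ModularSensors API or the code in neilh20’s fork.

```cpp
// Minimal sketch of "deliver at least once" with sequence numbers.
// Reading and postReading() are hypothetical stand-ins, not the
// actual ModularSensors API.
#include <stdint.h>

struct Reading {
    uint32_t seqNumber;   // unique per reading, restarts at 0 after reset
    float    value;
    bool     delivered;   // set only after the server ACKs with 201
};

// Hypothetical transport stub standing in for one HTTP POST; a real
// implementation would talk to the data portal and return the HTTP
// status code. Here it always "succeeds" so the sketch compiles.
int postReading(const Reading& r) {
    (void)r;
    return 201;  // pretend the server ACKed with 201 Created
}

// Retry the POST until a 201 ACK is received or attempts run out.
// Undelivered readings stay queued for the next transmission window.
bool deliverAtLeastOnce(Reading& r, uint8_t maxAttempts) {
    for (uint8_t attempt = 0; attempt < maxAttempts; ++attempt) {
        int ack = postReading(r);   // record every attempt and its ACK
        if (ack == 201) {
            r.delivered = true;     // server confirmed the insert
            return true;
        }
        // Any other ACK (or a timeout) means retry; the same reading
        // may be POSTed twice, hence "at least once".
    }
    return false;  // still queued; retried after the next reconnect
}
```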
Once delivered to monitormywatershed, it’s up to the integrity of the server to ensure it’s inserted into the database.
So, for the first two years of data collection, from October 2021 through September 2023, I see lots of lost records, though there were some improvements in the monitoring resources over that period. In at least one case, losses were verified across a number of nodes and identified as a database upgrade issue.
From October 2023 onward, I see two major losses of records: one on February 29, 2024, with 21 records lost (maybe leap-year accounting issues, just kidding), and the other on August 6, also 21 records.
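As an illustration of how such losses can be tallied from a downloaded CSV using the embedded sequence numbers, here is a small standalone sketch; the file name and the column position of the sequence number are assumptions, not the portal’s actual export layout.

```cpp
// Count missing records in a downloaded CSV by looking for jumps in
// the embedded sequence number. The file name and sequence-number
// column index are hypothetical assumptions about the export layout.
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>

int main() {
    std::ifstream csv("LCC45_download.csv");  // hypothetical file name
    std::string line;
    std::getline(csv, line);                  // skip the header row

    const int seqColumn = 2;                  // assumed column position
    long prevSeq = -1, lost = 0;

    while (std::getline(csv, line)) {
        std::stringstream row(line);
        std::string field;
        for (int col = 0; std::getline(row, field, ','); ++col) {
            if (col == seqColumn) {
                long seq = std::stol(field);
                // A jump of more than 1 is a gap; a drop back toward 0
                // is a device reset, not a loss, so it isn't counted.
                if (prevSeq >= 0 && seq > prevSeq + 1) {
                    lost += seq - prevSeq - 1;
                }
                prevSeq = seq;
                break;
            }
        }
    }
    std::cout << "records lost: " << lost << '\n';
    return 0;
}
```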
As any software professional knows, verification and reliability of software is a specific challenge and skill. Carnegie Mellon University developed the Capability Maturity Model Integration (CMMI) to describe software processes. Of course, organizations used to the critical role of repeatable software build repeatability into the process early.
The design reliability of the system is unquantified, and IMHO it doesn’t replace a BOOTNET (walking up to the field location and downloading the data). Replacing it would require a discussion of how to achieve something like 99.999% system reliability: one record lost in two years of data collection at 15-minute intervals, or 70,080 records, which is a distinctive challenge. The open-source nature of the ModularSensors device is fantastic for a project that expects the software developer to be closely associated with the field loggers, but practically there is no easy way to scale reliably to more than a few nodes. I describe a method I implemented in my fork that I use for loggers I’ve supplied: envirodiy.org/geographical-scaling-modularsensors
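As a back-of-the-envelope check of those figures:

```cpp
// Back-of-the-envelope check of the record count and the reliability
// implied by losing a single record over two years.
#include <iostream>

int main() {
    const long perDay  = 24 * 60 / 15;      // 96 readings/day at 15-min intervals
    const long records = perDay * 365 * 2;  // two years -> 70,080 records
    const double reliability = 100.0 * (1.0 - 1.0 / records);
    std::cout << records << " records, "
              << reliability << "% with one loss\n";  // ~99.9986%
    return 0;
}
```

Losing one record in 70,080 works out to about 99.9986% delivery, slightly under a strict five nines (one loss in 100,000), which is in line with the “something like 99.999%” framing above.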
My conclusion: with these V0.18 improvements, using the mainstream ModularSensors/Mayfly, there is excellent visibility into the status of data collection when the wireless network is good enough. The downloads verify basic data integrity and equipment status, though they are not a replacement for the BOOTNET.