As @heather mentioned, we released MonitorMW v0.12 yesterday, which includes major under-the-hood improvements that substantially improve reliability, including a move to hosting on AWS (zone us-east-2, Ohio). For details, read our "v0.12.0: Update to Python 3.8 & Django 2.2; Migrate to AWS" release notes on GitHub.
Unfortunately, migrating to AWS required changing the Domain Name System (DNS) records for our URLs, which in turn needed to propagate to all internet name servers. That propagation took a surprisingly long time, especially for two name servers in your regions: Verizon in Brooklyn, NY (which could cover Michigan), and Corporate West in San Jose, CA (which covers the entire US West). We reissued the DNS change today a little after 1 ET, which did seem to resolve those persistent holdout name servers. However, on top of that, devices and networks have their own DNS caches, which can persist for quite a while.
Fortunately, all of your missing data is still arriving in our old database on LimnoTech servers, so we will be able to sync it to the new AWS servers in the coming days.
We’re hoping that the DNS caches flush on their own in the coming day or two. Meanwhile, we are looking into a couple of options in case that doesn’t happen:
- Temporarily shut down our old production server, to try to force devices to clear their caches.
  - We'll likely do this late Thursday or early Friday of this week, for about 20-30 minutes, during which time data will unfortunately be lost.
- Ask you all to power cycle your devices early next week, if data is still flowing to our old servers.
- Set up forwarding from our old servers to the new AWS servers, so data starts logging to the correct database immediately.
  - This is not a long-term solution, because it retains a potential failure point that we've been trying to move away from for a while!
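If you'd like to check whether your own network's resolver has picked up the new records yet, a quick sketch from any machine with Python is below. The hostname in the commented example is only illustrative; substitute the endpoint your logger actually posts to, and compare the returned addresses against the new AWS ones.

```python
import socket

def resolve(hostname: str) -> list[str]:
    """Return the unique IPv4 addresses the local resolver reports for hostname."""
    infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET)
    # Each entry is (family, type, proto, canonname, sockaddr); sockaddr[0] is the IP.
    return sorted({info[4][0] for info in infos})

# Illustrative only -- use the hostname your device posts data to:
# print(resolve("data.envirodiy.org"))
```

If this still returns the old server's address a day or two from now, your local cache is the likely culprit and a power cycle (or router restart) may be the fastest fix.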
@mbarney, the behavior you saw with your new station makes perfect sense given the issues we’re seeing.
@neilh20, thanks for your patience regarding your suggestions for reliable data delivery. Now that we've migrated to AWS and upgraded the software stack, we're finally poised to start considering them carefully.