
Mayfly sketch compiles, seems to upload, but doesn’t


    • #18002
      Sedhead
      Participant

        I am using VSCode and Platformio to upload sketches to my Mayfly boards. The sketches compiled and uploaded fine two weeks ago.  Today, when I went to deploy the sensors, the code compiled fine and it even looked like it uploaded fine, except that the upload was way too fast. Usually there is a bit of a lag as the code is being written to the board and that did not happen. The message after the upload said it was ‘successful’ but no sketch got uploaded to the Mayfly. Here is a list of things that I tried in no particular order:

        • I double-checked my USB port and it was good to go.
        • I made sure I was working with the correct platformio.ini and that it was in the right place in the folder set up.
        • I uploaded simple Arduino code (‘blink’) and it uploaded and ran fine.
        • I tested it on both a v. 0.5b and v. 1.1 board and it wouldn’t upload on either.
        • I changed my USB cables to make sure I was using our top quality ones.
        • I ran “clean” on platformio and even made sure that I had no duplicate libraries in my .pio libdeps folder.
        • I switched the Mayflys on and off.

        I’m happy to share my code with y’all, but this doesn’t seem to be related to code. Wondering if VSCode or Platformio went through an update that has tweaked things. Any help would be greatly appreciated.


        @srgdamiano or @shicks, y’all seem to be my go-to folks for help, so I’m @-ing you here.

      • #18007
        neilh20
        Participant

          @tahicksonstthomas-edu – just wondering if you tried rebooting, or tried it again.

          VSC and also PlatformIO regularly do updates – and sometimes just exiting and restarting VSC seems to work for me.

          More recently, for some reason, two COM ports have appeared with Bluetooth access.

          You can also check which COM port the USB enumerates as, and then set it in the platformio.ini:
          [env:mayfly]
          upload_port = <COM#>
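          For context, here is a fuller sketch of how that part of platformio.ini might look. The port number, baud rate and the exact platform/board lines below are assumptions for illustration, not values from this thread, so adjust them to your own setup (the PlatformIO CLI command "pio device list" can show which COM port the Mayfly enumerates as):

          [env:mayfly]
          ; AVR platform and board definition typically used for the EnviroDIY Mayfly
          platform = atmelavr
          board = mayfly
          framework = arduino
          ; assumed port - check Device Manager or "pio device list"
          upload_port = COM5
          monitor_port = COM5
          ; assumed serial-monitor baud rate
          monitor_speed = 115200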
           

          • #18008
            Sedhead
            Participant

              Thanks for the reply, Neil. I tried using “upload_port” in platformio.ini and there was no change. Also restarted VSC a couple of times. Tried connecting with the Mayfly off and then on. The weird thing is that I can get simple code to load and work. But this Mayfly code, which is significantly more complex and lengthy, will compile fine, worked in the past, but now does not. As I said in my original post, the output looks like it has uploaded, but it goes way too fast and the sketch clearly isn’t loaded because I get no output on the serial monitor.

              It seems like there are multiple steps in the upload process and I’m wondering if the order makes a difference?

              • Mayfly off or on when you connect it to the USB port (i.e. when do you turn the Mayfly on?)
              • VSC open or closed when you connect Mayfly to the port (i.e. when do you start up VSC?)

              I’m flummoxed.

          • #18009
            neilh20
            Participant

              I wonder what the size of the code is?

              From the bottom of the PIO output when I build, I have:

              RAM:   [=====     ]  52.2% (used 8551 bytes from 16384 bytes)

              Flash: [========= ]  87.4% (used 113628 bytes from 130048 bytes)

            • #18010
              Sedhead
              Participant

                Well, that’s interesting. After running a compile check, I get:

                RAM: [ ] 0.1% (used 9 bytes from 16384 bytes)
                Flash: [ ] 0.4% (used 510 bytes from 130048 bytes)

                And when I do the same for “Blink” I get:

                RAM: [ ] 1.6% (used 266 bytes from 16384 bytes)
                Flash: [ ] 1.7% (used 2252 bytes from 130048 bytes)

                So, for some reason, even though it looks like it has compiled correctly and uploaded correctly, nothing is actually getting uploaded. Would the code help here? Or the platformio.ini file? Happy to share both.

              • #18012
                neilh20
                Participant

                  @tahicksonstthomas-edu – like you say, that’s interesting; all investigations start with that.

                  Probably post the code – though it looks like it is structured to not compile. Eyeball the code flow from the first call to setup() and subsequent calls to main(), and check whether your section of the code is isolated.

                  VSC typically shows colour-coded syntax – though .ino files, as a code anomaly, require special enabling. IMHO the easy way is to change the .ino to .cpp.
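                  If you do rename the .ino to .cpp, note that PlatformIO’s .ino preprocessing (which normally adds the Arduino header and forward declarations for you) no longer applies. A minimal sketch of the boilerplate a bare .cpp then needs – everything beyond the include line is illustrative:

                  #include <Arduino.h>   // must be added explicitly once the file is .cpp rather than .ino

                  // Forward declarations are also needed for any function called before it is defined.
                  void setup() {
                      Serial.begin(115200);                // assumed baud rate
                      Serial.println(F("Sketch started"));
                  }

                  void loop() {
                      // main work goes here
                  }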

                • #18013
                  Sedhead
                  Participant

                    Before I go digging through the code, I’m wondering why code would compile and run fine one week, then be “structured to not compile” the next. Do you have any idea why this might happen?

                    Here’s the full sketch, based off of Menu_a_la_carte:

                     

                  • #18014
                    neilh20
                    Participant

                      “Before I go digging through the code, I’m wondering why code would compile and run fine one week, then be “structured to not compile” the next. Do you have any idea why this might happen?”

                      So a perspective – it’s your code, and it’s your environment – so I have no idea; just throwing out a suggestion based on the data of compilation size. The whole computer is built on “sand” and layers of software, and many items can go wrong. So welcome to the wonderful world of the computer.

                      So I put my code under git/source control (or plain zip my source tree), just for when these problems occur. Basic working practice. Believe me, I’ve had the weirdest problems come up – and then I can see the change that’s been made and why things have changed. From the snapshot I can recreate the known working program in another directory – and hope it works there.

                      Eyeballing your code, I’m afraid nothing jumps out at me. It could be in the platformio.ini. It would seem the clue is that your program size reflects the nature of the program that is being downloaded.

                      Beats me. Perhaps duplicate your blinkme, then copy and paste the above program into the file and see the size of program it produces.

                    • #18015
                      Sedhead
                      Participant

                        I really want to thank you for giving me help and feedback on this. Although I can’t, for the life of me, understand what’s going on, at least someone is throwing out ideas. My code is under git/source control. I cannot see any changes that I’ve made, looking over my GitHub history, that could explain this. I did branch off of main to work on some other bells and whistles, but never got around to them. It just stopped compiling and loading properly. I tried your cut-and-paste suggestion, but got the same results. I guess I’ll just keep bashing my head against the wall…

                      • #18016
                        neilh20
                        Participant

                          Sometimes “stuff happens” and that’s why virtual machines were invented.

                          If you post your git link – and the project is set up right – I should be able to clone it, open it in VSC/PIO and see if it builds.

                        • #18018
                          Sedhead
                          Participant

                            Here’s a link to a new public repo: https://github.com/tahickson/UST_Earth_Envt_Society

                            The platformio.ini file should have all you need to know in it. Should be ready to go. Thanks for giving this a try!

                          • #18019
                            neilh20
                            Participant

                              Great – makes it easy to try it –

                              I cloned locally, opened the folder in VSC.

                              It pulled in the libs, and then I pressed the pio build button.

                              Advanced Memory Usage is available via “PlatformIO Home > Project Inspect”
                              RAM: [=== ] 34.9% (used 5724 bytes from 16384 bytes)
                              Flash: [======= ] 65.5% (used 85196 bytes from 130048 bytes)
                              Building .pio\build\mayfly\firmware.hex

                              I’d just finished some other testing so my Mayfly was open, and I downloaded it:

                              avrdude: 85196 bytes of flash written

                              Very occasionally I’ve had problems with PIO getting out of sync or something. Perhaps you want to uninstall VSC, then reinstall from the PlatformIO page:

                              https://platformio.org/install/ide?install=vscode

                               

                            • #18020
                              Sedhead
                              Participant

                                That definitely sounds more like what should be happening! Thanks so much for taking the time to do this, Neil. I will try to fix things on Monday when I’m back in my office and next to my sensors.

                                Just a funny addendum. I just created a ChatGPT account so that I could stay abreast of it in my teaching. Here is the text of my first chat:

                                Why won’t my sketch upload to my mayfly datalogger?
                                ChatGPT

                                Hey Tom! It sounds like you’re having trouble uploading your sketch to your Mayfly datalogger. There could be a few reasons for this. First, make sure you have the correct USB drivers installed for the Mayfly and that your computer recognizes it. Also, double-check your connections and ensure the datalogger is powered on.

                                Sometimes, issues arise due to conflicts in the sketch or with other software running on your computer. Try restarting both your computer and the datalogger, and then attempt the upload again. If the problem persists, you might want to check the Arduino IDE for any error messages that could provide more insight into what’s going wrong.

                                If you’re using any additional libraries in your sketch, ensure they are properly installed and compatible with your Mayfly version. Keep an eye out for any specific instructions or troubleshooting steps provided by the Mayfly community or manufacturer.

                                Remember, technology can be a bit finicky sometimes, but with a bit of troubleshooting, you should be able to get your sketch uploaded successfully. If you’re still having issues, don’t hesitate to reach out to fellow Mayfly users or the manufacturer’s support for more personalized assistance. Good luck, and I hope you get your datalogger up and running soon!

                                As for your question about my data sources, I’ve been trained on a diverse range of texts from the internet, books, articles, and other sources to provide information and generate responses. My responses are based on patterns in the data I’ve been exposed to, but I don’t have real-time access to the internet or specific external databases.

                              • #18021
                                neilh20
                                Participant

                                  Good luck with tracking it down :), some cyber screw has come loose.

                                   

                                  Yes, interesting to understand the basis of the technology. ChatGPT sounds coherent!! Or, as a “stochastic parrot”, have enough people written about the problem for it to be trained to sound coherent? :)

                                  A friend said they found it helps debug xls macros!!

                                  Like handling a “knife”, we might need to figure out how to use the “cutting edge”. Here is an article on the roots of the technology:


                                  How does ChatGPT work and do AI-powered chatbots “think” like us?
                                  The large language models behind the new chatbots are trained to predict which words are most likely to appear together – but “emergent abilities” suggest they might be doing more than that

                                  25 July 2023 (New Scientist)

                                  By Edd Gent

                                  [Image: ChatGPT, chatbots and AI – Illustronaut/Alamy]

                                  The current whirlwind of interest in artificial intelligence is largely down to the sudden arrival of a new generation of AI-powered chatbots capable of startlingly human-like text-based conversations. The big change came last year, when OpenAI released ChatGPT. Overnight, millions gained access to an AI producing responses that are so uncannily fluent that it has been hard not to wonder if this heralds a turning point of some sort.

                                  There has been no shortage of hype. Microsoft researchers given early access to GPT4, the latest version of the system behind ChatGPT, argued that it has already demonstrated “sparks” of the long-sought machine version of human intellectual ability known as artificial general intelligence (AGI). One Google engineer even went so far as to claim that one of the company’s AIs, known as LaMDA, was sentient. The naysayers, meanwhile, insist that these AIs are nowhere near as impressive as they seem.

                                  All of which can make it hard to know quite what you should make of the new AI chatbots. Thankfully, things quickly become clearer when you get to grips with how they work and, with that in mind, the extent to which they “think” like us.

                                  At the heart of all these chatbots is a large language model (LLM) – a statistical model, or a mathematical representation of data, that is designed to make predictions about which words are likely to appear together.

                                  LLMs are created by feeding huge amounts of text to a class of algorithms called deep neural networks, which are loosely inspired by the brain. The models learn complex linguistic patterns by playing a simple game: the algorithm takes a passage of text, randomly masks out some words and then tries to fill in the gaps. They are, in short, trained to predict the next word. And by repeating the process over and over, they can build up sophisticated models of how language works, says Mirella Lapata at the University of Edinburgh, UK.

                                  Recent breakthroughs are largely down to a new type of neural network invented in 2017 called a “transformer”, which can process data far more efficiently than previous approaches. This made it possible to train much larger models on vast tracts of text scraped from the internet. Transformer-based systems are also much better at understanding context, says Lapata. Whereas older versions could only consider a few words either side of the missing one, transformers can process much longer strings of text, meaning they can tease out more complex and subtle linguistic relationships.

                                   

                                  What turns otherwise unwieldy statistical models into smooth-talking chatbots, meanwhile, is humans rating the output of AIs on criteria like helpfulness and fluency. This data is then used to train a separate “preference model” that filters an LLM’s output. Put this together and you get what we have today, namely a text-based, computerised conversational partner often indistinguishable from a human. The fact that this was achieved using a premise as simple as next-word prediction caught a lot of people by surprise, says Tal Linzen at New York University.

                                  But it is important to remember that the way these AIs operate almost certainly isn’t the way human cognitive processes work. “They learn in such a fundamentally different way from people that it makes it very improbable [that] they ‘think’ the same way people do,” says Linzen.

                                  Here, the mistakes chatbots make are instructive. They are prone to confidently trumpeting falsehoods as facts, something often referred to as “hallucination”, because their output is entirely statistical. “It doesn’t do fact-checking,” says Lapata. “It just generates output that is likely or plausible, but not necessarily true.”

                                  This has led some commentators to disparage chatbots as “stochastic parrots” and their output as nothing more than “a blurry JPEG of the web”. The gist of these jibes is that the new LLMs aren’t as impressive as they first appear – that what they do is merely the imperfect memorisation of training data cleverly stitched back together to give the false impression of understanding.

                                  Emergent abilities
                                  And yet there are some indications that LLMs might be doing more than just regurgitating training data, says Raphaël Millière at Columbia University in New York. Recent research suggests that, after training for long enough, models can develop more general rules that give them new skills. “You get this transition from memorisation to the formation of circuits inside the [neural] network that will be implementing certain algorithms or certain rules to solve the tasks,” he says.

                                  This may help to explain why, as LLMs increase in size, they often experience sudden jumps in performance on certain problems, says Millière. This phenomenon has been referred to as “emergence” and has led to speculation about what other unexpected capabilities AI could develop.

                                  It is important not to get carried away. “This term is very seductive,” says Millière. “It evokes things like the models suddenly becoming self-aware, or things like that. That’s not what we’re talking about.”

                                  Even so, Millière thinks there is a “rich middle ground” between the naysayers and hype merchants. While these chatbots are far from replicating human cognition, in some narrow areas, they may not be so different from us. Digging into these similarities could not only advance AI, he says, but also sharpen our understanding of our own cognitive capabilities.


                                   

                                • #18022
                                  neilh20
                                  Participant

                                    So good to play around with it. I asked bard.google.com a question that doesn’t have a lot written about it:

                                    “What is the quiescent power consumption of the Bristlemouth hardware”

                                    AND it sounds coherent – however it’s “hallucinating”, as it is not a Google project. The 10 milliwatts it’s quoting is the active transmission power per MHz.

                                    I’ve got an open question to the designers .. (https://www.bristlemouth.org/  https://bristlemouth.discourse.group/t/what-is-the-expected-power-profile-and-quiescent-current/65)

                                     

                                    Bard Answer:
                                    The quiescent power consumption of the Bristlemouth hardware is 10 milliwatts (mW). This means that the hardware consumes very little power when it is not actively processing data. This makes it ideal for applications where power consumption is a critical factor, such as battery-powered devices or embedded systems.

                                    The Bristlemouth hardware is a neural network accelerator developed by Google AI. It is designed to be highly energy efficient, while still providing high performance. The quiescent power consumption of 10 mW is achieved through a number of design techniques, including:

                                    • Using a low-power processor
                                    • Using low-power memory
                                    • Using a power-gating mechanism to turn off components when they are not in use

                                    The Bristlemouth hardware is a significant improvement over previous generations of neural network accelerators in terms of power efficiency. This makes it a valuable tool for developing energy-efficient AI applications.

                                  • #18023
                                    Sedhead
                                    Participant

                                      Partially solved!
                                      I may have found part of the problem, thanks to you. I had to create a new repo in order to share my code with you. When I open that new repo in VSC from GitHub Desktop it compiles as you’ve described. I was starting to have a hunch that the problem was that I had branched from main to add some additional functionality and I thought that I could just upload from that branch. However, the branched code doesn’t compile or upload as described above. When I revert to main on GitHub Desktop and open the code, it compiles and uploads just fine. You learn something new every day (or week or month, in my case).

                                      So, this worked for one of the sketches. But when I tried this on a second sketch that uses a different Mayfly, I get the old behavior. I’ll keep trying!

                                    • #18024
                                      neilh20
                                      Participant

                                        Hey, glad to have helped!! Whew!

                                        It’s a challenge sometimes to figure out what the foundations are and build off them. I’ve had to do that before: start with a known, should-be-working base, and then slowly add to it to find out what came adrift. I often use Meld Merge to compare the base directories separately from PIO. There is a GitKraken plugin for PIO to visualize the branches – however I get lost quickly.

                                        So your process of rebuilding from a known source is a teaching moment on the challenges of source management. The next challenge is traceability – and it’s mostly the school of hard knocks. Sara is doing an amazing job of fixing lib versions for the main release.

                                        You have the library version built nicely into your platformio.ini – EnviroDIY_ModularSensors@=0.34.0
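                                        For anyone following along, a hedged illustration of how that pin can sit in platformio.ini – only the EnviroDIY_ModularSensors@=0.34.0 pin comes from this thread; the surrounding lines and the "envirodiy/" registry prefix are assumptions:

                                        [env:mayfly]
                                        platform = atmelavr
                                        board = mayfly
                                        framework = arduino
                                        ; "@=" pins an exact release, which keeps builds traceable
                                        lib_deps =
                                            envirodiy/EnviroDIY_ModularSensors@=0.34.0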

                                        Can you imagine having your code all working and distributed, being used, and then coming back to make a modification – only to find it’s lost and you can’t trace what your release was. Several professional-level stories of that happening – and hence why Microsoft bought GitHub for a gazillion $.

                                        https://meldmerge.org/ – though I’m currently on an earlier version, 3.18.3 (https://download.gnome.org/binaries/win32/meld/), as some functionality was lost in later releases.

                                      • #18025
                                        Sedhead
                                        Participant

                                          Alright, I have no idea if this will help or not, but I think it’s important that I get this down for completeness.

                                          1. If I open the public repo that I shared above in VSC from GitHub Desktop, then copy the sketch that I need to load into it, it seems to compile and upload. So, as of right now, I have my sketches loaded onto my boards and they are cranking away.
                                          2. However, I don’t want to share my passwords and such in a public repo. So, I duplicated the public repo above following the guidelines here. When I open that repo and try to compile or upload the very same sketches in the duplicate, private repo, I get the same problem of them not compiling and uploading, as above.

                                          I tried going back in time on PlatformIO to see if the new version of it was causing this problem, but that’s not it. There seems to be something going on between VSC and GitHub. Changing the private repo to public doesn’t change the bad behavior.

                                        • #18028
                                          neilh20
                                          Participant

                                            Well, you are now into working deployment practices, beyond programming.

                                            So how about simplifying? (I think your process is getting complicated. IMHO the guidelines haven’t identified what happens when you actually have a successful program to deploy.)

                                            Seems the question is: for a successful working Mayfly logger program ….. how do you scale it to the field?

                                            …. you have a very good point

                                            ~~ grin ~~

                                             

                                            Seems to me a really SIMPLE way of solving it is to put the UUIDs in an include file – and then manage the include file separately so they don’t go to GitHub. Say “ms_cfg_uuids.h”. This works if it’s you (and any similar programmers/colleagues) building and deploying to the field, and you can agree on labeling and where to keep the ms_cfg_uuids.h.
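                                            A minimal sketch of what such a header might look like – the macro names and placeholder strings below are illustrative assumptions, not values from any real Monitor My Watershed registration:

                                            // ms_cfg_uuids.h - keep out of git (e.g. via .gitignore) so real tokens never reach GitHub
                                            #ifndef MS_CFG_UUIDS_H
                                            #define MS_CFG_UUIDS_H

                                            // Hypothetical placeholders - replace with the UUIDs issued for your site
                                            #define REGISTRATION_TOKEN "12345678-abcd-1234-ef00-1234567890ab"
                                            #define SAMPLING_FEATURE   "12345678-abcd-1234-ef00-1234567890ab"

                                            static const char* variableUUIDs[] = {
                                                "12345678-abcd-1234-ef00-1234567890ab",  // e.g. sonar distance
                                                "12345678-abcd-1234-ef00-1234567890ab"   // e.g. battery voltage
                                            };

                                            #endif  // MS_CFG_UUIDS_H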

                                            IMHO – this is a beta development scheme. For each Mayfly it still has to be compiled with each separate ms_cfg_uuids.h and then downloaded to each Mayfly, and tracked on a Mayfly basis. IMHO2 this shows a weakness in EnviroDIY working practices: how to test a program to prove it’s successful.

                                            If you want to deploy a number (10 -> ??) or so of loggers, all the same … what then? Deploying a number of Mayfly loggers becomes a bit more of a production line.

                                            What I do in my fork (and it’s a big step of functionality) is put the UUIDs in a file ms_cfg.txt and then put that on the uSD card. This is ideal for situations where colleagues are not programmers, but can program a system with a built .hex and then uniquely configure the system with ms_cfg.txt.

                                            The program can be tested, and with the captured .HEX file easily deployed in multiple numbers.

                                            It’s also easy to upgrade, as it’s the uSD card that is the key to connecting it to MMW – not the program.
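                                            To make the idea concrete, here is a minimal illustrative outline of reading key=value pairs from an ms_cfg.txt on the uSD card. It uses the standard Arduino SD library, an assumed chip-select pin, and a made-up file format; it is not the actual parser in the neilh10 ModularSensors fork:

                                            #include <Arduino.h>
                                            #include <SD.h>                      // standard Arduino SD library; the fork itself may differ

                                            const int8_t SD_CS_PIN = 12;         // assumed chip-select pin for the Mayfly's uSD slot

                                            void setup() {
                                                Serial.begin(115200);
                                                if (!SD.begin(SD_CS_PIN)) {
                                                    Serial.println(F("uSD card not found"));
                                                    return;
                                                }
                                                File cfg = SD.open("ms_cfg.txt");
                                                if (!cfg) {
                                                    Serial.println(F("ms_cfg.txt missing - fall back to compiled-in defaults"));
                                                    return;
                                                }
                                                // Hypothetical format: one KEY=VALUE per line, e.g. SAMPLING_FEATURE=1234-...
                                                while (cfg.available()) {
                                                    String line = cfg.readStringUntil('\n');
                                                    line.trim();
                                                    int eq = line.indexOf('=');
                                                    if (eq > 0) {
                                                        String key   = line.substring(0, eq);
                                                        String value = line.substring(eq + 1);
                                                        Serial.print(key);
                                                        Serial.print(F(" -> "));
                                                        Serial.println(value);
                                                        // ...store value into the matching UUID/config variable here
                                                    }
                                                }
                                                cfg.close();
                                            }

                                            void loop() {}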

                                            I wrote about it here https://www.envirodiy.org/geographical-scaling-modularsensors/

                                            I also write about other features that I think are missing from the mainline https://github.com/neilh10/ModularSensors

                                            hope that is helpful 🙂

                                          • #18029
                                            Sedhead
                                            Participant

                                              I think I agree with you about the complexity of deploying multiple sensor packages, each with different UUIDs. However, I don’t think that is what’s screwing me up here. As of now, I simply can’t get my sketches to reliably upload to the boards. I honestly don’t think that the compile and upload process is even getting to a place in my code where UUIDs are even called. Something is compiling and uploading, but it’s only a fraction of the size of my code and it does nothing. When I first coded it, no problem. Now, only problems. So my fear is that I will deploy these sensors, then in a couple of weeks I will need to update the code (I need to acquire a couple of weeks’ worth of data to find the max sonar distance) and I won’t be able to do it in place because the code won’t upload. The really frustrating part of this is the fact that no single “fix” has worked. Something works once, then not again.

                                            • #18031
                                              neilh20
                                              Participant

                                                It seems to me it’s a working-practices issue, repeatability and traceability – not a programming issue. From a distance, it seems like you should be able to see the code in the directory that you are building – however you do have a unique directory structure, and if you’re opening HicksonDeployments.code-workspace, maybe it’s mispointed. I open “folders”.

                                                For the working directory that compiles and downloads, my suggestion was: by putting the UUIDs in a separate file ms_cfg_uuids.h, you can have example, non-real UUIDs that get pushed to git.

                                                On building your managed target system, you drop the unique ms_cfg_uuids.h into the directory and then compile it.

                                                That way there isn’t any messing around with the actual source code, or with what’s on a branch.

                                                I would suggest adding a “GEOGRAPHICAL_ID” in your ms_cfg_uuids.h so the UUIDs compiled in are visible, traceable and debuggable.
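                                                A hedged one-line example of how that might look inside ms_cfg_uuids.h – the label string is made up for illustration:

                                                // example site label only - use your own naming convention
                                                #define GEOGRAPHICAL_ID "230815 UST site01 sonar logger"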

                                                On startup, my systems identify parts so I can manage the code – the code source, the software name, the compile, and then the target.

                                                So, from my debug logs – I’m currently trying to integrate the SIM7080, and was previously doing it with WiFi on an ESP32-WROOM – I can follow what works and what doesn’t:

                                                [2023-08-12 09:52:20.083] —Boot(0) Sw Build: a\src\tu_xx02.cpp Aug 12 2023 09:50:19 b’neilh20′
                                                [2023-08-12 09:52:20.146] Sw Name: LT500/Modbus TO MMW LTE SIM7080
                                                [2023-08-12 09:52:20.146] ModularSensors version 0.34.1-abd
                                                [2023-08-12 09:52:20.146] TinyGSM Library version 0.11.6-aab

                                                [2023-08-12 09:52:20.225] Board: Assume Mayfly 1.1A

                                                [2023-08-12 09:52:22.360] GEOGRAPHICAL_ID:”230723 TU-RCtest07 simRiverX Monitor”

                                                 

                                                Earlier I was working on:

                                                [2023-08-04 18:19:16.580] —Boot(0) Sw Build: a\src\tu_xx02.cpp Aug 4 2023 18:14:40 b’neilh20′
                                                [2023-08-04 18:19:16.632] Sw Name: LT500/Modbus TO MMW ESP32-wroom
                                                [2023-08-04 18:19:16.632] ModularSensors version 0.34.1-abd
                                                [2023-08-04 18:19:16.632] TinyGSM Library version 0.11.6-aab

                                                [2023-08-04 18:19:16.719] Board: Assume Mayfly 1.1A

                                                [2023-08-04 18:19:18.847] GEOGRAPHICAL_ID:”230723 TU-RCtest07 simRiverX Monitor”
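                                                For what it’s worth, a minimal sketch of how a startup banner along those lines might be printed – the strings and names are assumptions, and this is not the actual code behind the logs above; only the __FILE__/__DATE__/__TIME__ macros are standard:

                                                #include <Arduino.h>

                                                // Hypothetical identification strings; the file/date/time come from compiler macros
                                                const char* swName         = "LT500/Modbus example";
                                                const char* geographicalId = "site label goes here";

                                                void setup() {
                                                    Serial.begin(115200);
                                                    Serial.print(F("---Boot Sw Build: "));
                                                    Serial.print(__FILE__);     // source file compiled
                                                    Serial.print(' ');
                                                    Serial.print(__DATE__);     // compile date
                                                    Serial.print(' ');
                                                    Serial.println(__TIME__);   // compile time
                                                    Serial.print(F("Sw Name: "));
                                                    Serial.println(swName);
                                                    Serial.print(F("GEOGRAPHICAL_ID: "));
                                                    Serial.println(geographicalId);
                                                }

                                                void loop() {}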

                                              • #18034
                                                Sedhead
                                                Participant

                                                  I think that the directory structure could be the culprit. The way that I have been able to reliably get this to work is the following:

                                                  1. Create a new Platformio project.
                                                  2. Create an identically-named folder in that project in which to put my code.
                                                  3. Create an identically-named .ino file in that folder for the sketch I need to upload.
                                                  4. Use the Compare files functionality in VSC to move all of my old code into those new files.
                                                  5. Compile and upload

                                                  So I’m basically re-making the project in platformio and that seems to work.
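                                                  As a possible shortcut for next time (an untested assumption on my part, not something verified in this thread): PlatformIO can be pointed at a source folder whose name doesn’t match the default src/ layout via the src_dir option in the [platformio] section, which may avoid re-making the whole project just to get the folder names to line up:

                                                  [platformio]
                                                  ; hypothetical folder name - point it at the folder that actually holds the sketch
                                                  src_dir = MayflySketchFolder

                                                  [env:mayfly]
                                                  platform = atmelavr
                                                  board = mayfly
                                                  framework = arduino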
