This is an example application that runs a computer vision model on the Portenta H7 and streams the results over LoRaWAN. The application uses the camera on the Portenta Vision Shield, in combination with a machine learning model trained in Edge Impulse, to determine when an interesting event happens, then sends the result back to the network over the LoRa radio on the Portenta Vision Shield. This demo was built for The Things Conference 2021.
Elephant vs. not elephant
Note: This example was built using a pre-release version of the Portenta H7 libraries and a preview version of Edge Impulse for the Portenta H7. There are known issues with camera exposure, and you may run into other bugs.
You'll need the following hardware:

- Arduino Portenta H7
- Arduino Portenta Vision Shield (LoRa version)
Set your application EUI and application key in `src/ei_main.cpp`. If you want to use a different channel plan (default: EU868), set it in `src/ei_main.cpp` as well.
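For reference, the relevant part of `src/ei_main.cpp` looks roughly like the fragment below. The identifier names (`APP_EUI`, `APP_KEY`) are assumptions based on common LoRaWAN examples, not the file's exact contents, so check the actual source for the real names:

```cpp
// Hypothetical excerpt from src/ei_main.cpp -- the real identifiers may differ.
// Replace the placeholder bytes with the values from your LoRaWAN console
// (e.g. The Things Network).
static uint8_t APP_EUI[8]  = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 };
static uint8_t APP_KEY[16] = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
                               0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 };

// Channel plan: EU868 by default; switch to the plan for your region
// (e.g. US915). The exact constant depends on the LoRa library in use.
```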
Install the Arduino CLI.
Build this application via:
```
$ sh arduino-build.sh --build
```
Flash this application via:
```
$ sh arduino-build.sh --flash
```
The elephant model used in the demo is here: Elephant tracker.
Load the Edge Impulse firmware for the Portenta H7: instructions.
Build a new model from scratch in Edge Impulse with the following settings:
To avoid sending messages when the classification changes for just a single frame, the output of the algorithm is smoothed. The smoothing parameters can be found in `ei_run_impulse.cpp` (search for the `ei_classifier_smooth_init` function).
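The idea behind the smoothing is to look at the last N per-frame classifications and only report a label once it has been seen a minimum number of times in that window. The following is a minimal, self-contained sketch of that idea, not the actual SDK implementation; the class name and parameters are illustrative:

```cpp
#include <cassert>
#include <deque>
#include <map>
#include <string>

// Conceptual sketch: a label is only reported once it has appeared at least
// `min_same` times within the last `window` classifications. This mirrors the
// idea behind ei_classifier_smooth_init, not its exact code.
class FrameSmoother {
public:
    FrameSmoother(size_t window, size_t min_same)
        : window_(window), min_same_(min_same) {}

    // Feed one raw per-frame classification; returns the smoothed label,
    // or "uncertain" if no label is stable enough yet.
    std::string update(const std::string &label) {
        history_.push_back(label);
        if (history_.size() > window_) history_.pop_front();

        // Count occurrences of each label in the current window.
        std::map<std::string, size_t> counts;
        for (const auto &l : history_) counts[l]++;

        for (const auto &kv : counts) {
            if (kv.second >= min_same_) return kv.first;
        }
        return "uncertain";
    }

private:
    size_t window_;
    size_t min_same_;
    std::deque<std::string> history_;
};
```

With `window = 4` and `min_same = 3`, a single spurious "elephant" frame in a stream of "not elephant" frames never becomes the smoothed output, so no LoRa message is sent for it.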
Then, remove the `src/edge-impulse-sdk`, `src/model-parameters` and `src/tflite-model` folders.
In your Edge Impulse project go to the Deployment page and export as C++ Library.
Add the files from the export to the `src` directory and recompile the application.
Issues with the camera? Install the Edge Impulse CLI and run `edge-impulse-daemon`. This connects the development board to Edge Impulse, where you can see a live feed of the camera.
Alternatively, connect a serial monitor to the development board, press `b` to stop inferencing, then run `AT+RUNIMPULSEDEBUG`. This prints the framebuffer after the image has been captured and resized. Write the framebuffer to `framebuffer.txt`, and then run:
```
$ edge-impulse-framebuffer2jpg -f framebuffer.txt -w 64 -h 64 -o framebuffer.jpg
```