This is a sample showing how to deploy a Custom Vision model to a Raspberry Pi 3 device running Azure IoT Edge. Custom Vision is an image classifier that is trained in the cloud with your own images. IoT Edge lets you run this model next to your cameras, where the video data is generated. You can thus add meaning to your video streams, for example to detect road traffic conditions, estimate wait lines, or find parking spots, while keeping your video footage private, lowering your bandwidth costs, and even running offline.
This sample can also be deployed on an x64 machine (such as your PC). It has been ported to the newer Azure IoT Edge GA bits.
Check out this video to see this demo in action and understand how it was built:
You can run this solution on either of the following hardware:
Raspberry Pi 3: Set up Azure IoT Edge on a Raspberry Pi 3 (instructions to set up the hardware, using Raspbian 9 (stretch) or above, plus instructions to install Azure IoT Edge) with a SenseHat, and use the arm32v7 tags.
Simulated Azure IoT Edge device (such as a PC): Set up Azure IoT Edge (instructions on Windows, instructions on Linux) and use the amd64 tags. A test x64 deployment manifest is already available. To use it, rename the `deployment.template.test-amd64` file to `deployment.template.json`, then build the IoT Edge solution from this manifest and deploy it to an x64 device.
Check out the animation below to see how an IoT Edge deployment works, or get more details from this tutorial. You must have the following services set up to use this sample:
You need the following dev tools for IoT Edge development in general, and to run and edit this sample:
To learn more about this development environment, check out this tutorial and this video:
This solution is made up of three modules:
This is how the three modules communicate with each other and with the cloud:
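As an illustration, module-to-cloud message flow in IoT Edge is wired up through routes in the `$edgeHub` section of the deployment manifest. The fragment below is a hedged sketch, not this sample's exact manifest; the module name `camera-capture` and the route name are assumptions, so check `deployment.template.json` for the real values:

```json
{
  "$edgeHub": {
    "properties.desired": {
      "schemaVersion": "1.0",
      "routes": {
        "CameraCaptureToIoTHub": "FROM /messages/modules/camera-capture/outputs/* INTO $upstream"
      }
    }
  }
}
```

A route like this forwards every message the camera module outputs up to IoT Hub, which is what makes the D2C monitoring step below show data.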
From your mac or PC:

- Update the `.env` file with the values for your container registry, and make sure that your Docker engine has access to it.
- Build the entire solution by right-clicking the `deployment.template.json` file and selecting `Build and push IoT Edge Solution` (this can take a while, especially building open-cv, numpy and pillow).
- Deploy the solution by right-clicking the `config/deployment.json` file, selecting `Create Deployment for Single device`, and choosing your targeted device.
- Monitor the messages being sent to the cloud by right-clicking your device and selecting `Start Monitoring D2C Message`.

Note: To stop Device to Cloud (D2C) monitoring, use the `Azure IoT Hub: Stop monitoring D2C messages` command from the Command Palette (Ctrl+Shift+P).
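The D2C messages you see in the monitor carry the classification results produced on the device. The sketch below shows one plausible way such a payload could be assembled; the field names (`tagName`, `probability`) mirror Custom Vision's prediction schema but are an assumption here, not this sample's exact message format:

```python
import json


def build_d2c_message(predictions):
    """Build an illustrative device-to-cloud JSON payload from
    (label, probability) pairs returned by an image classifier.

    Note: field names are assumptions for illustration, not the
    sample's exact schema.
    """
    body = [
        {"tagName": label, "probability": round(prob, 4)}
        for label, prob in predictions
    ]
    return json.dumps(body)


# Example: two labels scored on one video frame
msg = build_d2c_message([("cat", 0.97231), ("dog", 0.02769)])
print(msg)  # [{"tagName": "cat", "probability": 0.9723}, {"tagName": "dog", "probability": 0.0277}]
```

Keeping the payload small like this matters on a Raspberry Pi, since every message goes over the device's uplink to IoT Hub.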
From your mac or PC:

- Update the `.env` file with the values for your container registry, and make sure that your Docker engine has access to it.
- Build the entire solution by right-clicking the `deployment.test-amd64.template.json` manifest file (it includes a test video file to simulate a camera) and selecting `Build and push IoT Edge Solution` (this can take a while, especially building numpy and pillow).
- Deploy the solution by right-clicking the `config/deployment.json` file, selecting `Create Deployment for Single device`, and choosing your targeted device.
- Monitor the messages being sent to the cloud by right-clicking your device and selecting `Start Monitoring D2C Message`.

Note: To stop Device to Cloud (D2C) monitoring, use the `Azure IoT Hub: Stop monitoring D2C messages` command from the Command Palette (Ctrl+Shift+P).
To use your own Custom Vision model, download it from the Custom Vision service and replace the `ImageClassifierService/app/model.pb` and `ImageClassifierService/app/labels.txt` files with the ones provided by the export feature of Custom Vision.
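The exported `labels.txt` is a plain text file with one label per line, in the order matching the model's output. A minimal sketch of loading it (the parsing code is an illustration, not the sample's exact implementation):

```python
def load_labels(text):
    """Parse the contents of a Custom Vision labels.txt export:
    one label per line; blank lines are ignored."""
    return [line.strip() for line in text.splitlines() if line.strip()]


# Example: a labels.txt exported for a two-class model
labels = load_labels("cat\ndog\n\n")
print(labels)  # ['cat', 'dog']
```

Keeping the labels in file order matters, because the classifier's output probabilities are indexed by position.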
Explore the various configuration options of the camera module to score your AI model against a camera feed versus a video clip, resize your images, see logs, etc.
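Options like these are typically passed to an IoT Edge module as environment variables set in the deployment manifest. The sketch below reads a few such settings; the variable names (`VIDEO_PATH`, `RESIZE_WIDTH`, `RESIZE_HEIGHT`, `VERBOSE`) are assumptions based on the options this section describes, so check the camera module's source for the exact names it uses:

```python
import os


def read_camera_config(env=os.environ):
    """Read illustrative camera-module settings from environment
    variables, falling back to defaults when a variable is unset.

    Note: variable names are assumptions for illustration.
    """
    return {
        # Camera feed (e.g. /dev/video0) vs a video clip file
        "video_path": env.get("VIDEO_PATH", "/dev/video0"),
        # Optional resize applied to frames before scoring (0 = keep size)
        "resize_width": int(env.get("RESIZE_WIDTH", "0")),
        "resize_height": int(env.get("RESIZE_HEIGHT", "0")),
        # Verbose logging toggle
        "verbose": env.get("VERBOSE", "False").lower() == "true",
    }


# Example: simulate the manifest setting a test clip and verbose logs
cfg = read_camera_config({"VIDEO_PATH": "./test.mp4", "VERBOSE": "True"})
print(cfg["video_path"], cfg["verbose"])  # ./test.mp4 True
```

Driving these settings from the manifest rather than the image means you can switch between a camera feed and a test clip with a redeploy, without rebuilding the module.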