In this workshop, you'll discover how to build a solution that can process several real-time video streams with an AI model on a $100 device, how to build your own AI model to detect custom anomalies, and finally how to operate it remotely.
We'll put ourselves in the shoes of a soda can manufacturer who wants to improve the efficiency of his plant. One improvement he'd like to make is to detect soda cans that have fallen over on his production lines, monitor his production lines from home, and be alerted when this happens. He has 3 production lines, all moving at a fairly quick speed.
To satisfy the real-time, multi-camera, custom-AI-model requirements, we'll build this solution using NVIDIA DeepStream on an NVIDIA Jetson Nano device. We'll build our own AI model with Azure Custom Vision, and we'll deploy and connect everything to the cloud with Azure IoT Edge and Azure IoT Central. Azure IoT Central will be used for the monitoring and alerting.
Check out this video recording to help you go through all the steps and concepts used in this workshop:
- A NVIDIA Jetson Nano in `Max` power source mode (e.g. 10W). You can double-check your L4T version with `head -n 1 /etc/nv_tegra_release`.
- A developer's machine: you need a developer's machine (Windows, Linux or Mac) to connect to your Jetson Nano device and see its results with a browser and VLC.
- A Micro-B to Type-A USB cable to connect your Jetson Nano to your developer's machine with the USB Device Mode: we'll use the USB Device Mode provided in NVIDIA's course base image. With this mode, you do not need to hook up a monitor directly to your Jetson Nano. Instead, boot your device, wait for 30 seconds, then open your favorite browser, go to http://192.168.55.1:8888 and enter the password `dlinano` to get access to a command line terminal on your Jetson Nano. You can use this terminal to run instructions on your Jetson Nano (Ctrl+V is a handy shortcut to paste instructions), or use your favorite SSH client if you prefer (`ssh dlinano@your-nano-ip-address`, where the password is `dlinano` and your Nano's IP address can be found with the command `/sbin/ifconfig eth0 | grep "inet" | head -n 1`).
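As a side note, the `ifconfig | grep | head` pipeline above simply pulls out the first line containing `inet`, which carries the IPv4 address. Here is a self-contained sketch of the same extraction; the sample output below is illustrative, not from a real device:

```shell
# Sample ifconfig-style output (illustrative), piped through the same
# filter used above to isolate the line carrying the IPv4 address.
sample='eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.55.69  netmask 255.255.255.0
        inet6 fe80::1  prefixlen 64'
echo "$sample" | grep "inet" | head -n 1
```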
- Connect your Jetson Nano to the internet: either use an ethernet connection, in which case you can skip this section, or, if your device supports WiFi (which is not out-of-the-box for standard dev kits), connect it to WiFi with the following commands from the USB Device Mode terminal:

  ```bash
  # Re-scan available WiFi networks
  nmcli device wifi rescan
  # List available WiFi networks, and find the <ssid_name> of your network
  nmcli device wifi list
  # Connect to a selected WiFi network
  nmcli device wifi connect <ssid_name> password <password>
  ```
- VLC to view RTSP video streams: to visualize the output of the Jetson Nano without an HDMI screen (there is only one per table), we'll use VLC from your laptop to view an RTSP video stream of the processed videos. Install VLC if you don't have it yet.
- An Azure subscription: you need an Azure subscription to create an Azure IoT Central application.
- A phone with the IP Camera Lite app: to view and process a live video stream, you can use your phone with the IP Camera Lite app (iOS, Android) as an IP camera.
The next sections walk you step-by-step through deploying DeepStream on an IoT Edge device, updating its configuration via a pre-built IoT Central application, and building a custom AI model with Custom Vision. They explain concepts along the way.
The soda can manufacturer already asked a partner to build a first prototype solution that can analyze video streams with a given AI model and connect it to the cloud. The solution built by this partner is composed of two main blocks:
NVIDIA DeepStream, which does all the video processing
DeepStream is a highly optimized video processing pipeline, capable of running one or more deep neural networks (i.e. AI models). It provides outstanding performance thanks to several techniques that we'll discover below. It is a must-have tool whenever you have complex video analytics requirements like real-time object detection or when employing cascading AI models.

DeepStream runs as a container, which can be deployed and managed by IoT Edge. It is also integrated with IoT Edge to send all its outputs to the IoT Edge runtime.
The DeepStream application we are using was easy to build since we use the out-of-the-box one provided by NVIDIA in the Azure Marketplace here. We're using this module as-is and are only configuring it from the IoT Central bridge module, which formats all telemetry, properties, and commands using IoT Plug and Play (PnP), the declarative language used by IoT Central to understand how to communicate with a device.
DeepStream is an SDK based on GStreamer, an open-source, battle-tested framework for creating video pipelines. It is very modular thanks to its concept of plugins, each of which exposes `sink` and `source` pads. NVIDIA provides several plugins as part of DeepStream that are optimized to leverage NVIDIA GPUs and other NVIDIA hardware like dedicated encoding/decoding chips. How these plugins are connected with each other is defined in the application's configuration file.
Here is an example of what an end-to-end DeepStream pipeline looks like:
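Such a pipeline typically chains hardware-accelerated plugins end to end. As an illustrative sketch (plugin names taken from DeepStream's standard plugin set, not necessarily the exact demo pipeline):

```text
file/RTSP source → nvv4l2decoder (HW decode) → nvstreammux (batch N streams)
  → nvinfer (AI model) → nvtracker (object tracking) → nvdsosd (bounding boxes)
  → nvv4l2h264enc (HW encode) → RTSP sink
```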
You can learn more about its architecture in NVIDIA's official documentation.
To better understand how NVIDIA DeepStream works, let's have a look at its default configuration file, copied here in this repo (called `Demo Mode` in the IoT Central UI later on).
Observe in particular:
- `source` sections: they define where the source videos are coming from. We're using local videos to begin with and will switch to live RTSP streams later on.
- `sink` sections: they define where to output the processed videos and the output messages. We use RTSP to stream a video feed out, and all output messages are sent to the Azure IoT Edge runtime.
- `primary-gie` section: it defines which AI model is used to detect objects and how this AI model is applied. As an example, note the `interval` property set to `4`: this means that inferencing is actually executed only once every 5 frames. Bounding boxes are still displayed continuously because a tracking algorithm, which is computationally less expensive than inferencing, takes over in between. The tracking algorithm used is set in the `tracker` section. This is the kind of out-of-the-box optimization provided by DeepStream that enables us to process 240 frames per second on a $100 device. Other notable optimizations are using dedicated encoding/decoding hardware, loading frames in memory only once (zero in-memory copy), pushing the vast majority of the processing to GPUs, and batching frames from multiple streams.

The IoT Edge Central bridge module connects to IoT Central with the regular Module SDK (you can look at the source code here). The telemetry, properties and commands that it receives and sends follow the IoT Plug and Play (PnP) format, which is enforced in the cloud by IoT Central against a Device Capability Model (DCM), a file that defines what this IoT Edge device is capable of doing. You can see it under `Devices` in the left nav of the IoT Central application as the `NVIDIA Jetson Nano DCM` device template. In the case of IoT Edge, an IoT Edge deployment manifest is also attached to a DCM version to create a device template. If you want to see what the device template that we use looks like, you can look at this Device Capability Model and at this IoT Edge deployment manifest.

Enough documentation! Let's now see the solution built by our partner in action.
Let's start by creating a new IoT Central app to remotely control the Jetson Nano.
[!WARNING] Steps to create an IoT Central application have changed compared to the recording because the IoT Central team deprecated the copying of an IoT Central application that contains IoT Edge devices. The following steps have thus been revised to create an IoT Central application from scratch and manually add and configure the Jetson Nano device as an IoT Edge device in IoT Central.
We'll start from a new IoT Central application, add a Device Capability Model and an IoT Edge deployment manifest that describe the video analytics solution running on the NVIDIA Jetson Nano, and optionally customize our IoT Central application.
- Create a new IoT Central application: choose `Custom apps` and click `Create`.
- Go to `Device templates`, create a new `Azure IoT Edge` template, then click `Next: Customize`, `Skip + Review` and `Create`.
- Name the device template `NVIDIA Jetson Nano DCM` and hit Enter.
- Click `Import capability model` and select the `NVIDIAJetsonNanoDcm.json` file from this repo.
- Go to `Views`, select `Visualizing the device` and name this view `Dashboard`.
- In the `Telemetry` section, select `Primary Detection Count` and click on `Add tile`.
- Click on the `Settings` button of the `Primary Detection Count` tile, select `Count` instead of `Average` and click on `Update`.
- In the `Telemetry` section, select `Secondary Detection Count` and click on `Add tile`.
- Click on the `Settings` button of the `Secondary Detection Count` tile, select `Count` instead of `Average` and click on `Update`.
- In the `Telemetry` section, select `Free Memory` and `System Heartbeat` and click on `Add tile`.
- In the `Telemetry` section, select `Change Video Model`, `Device Restart`, `Processing Started` and `Processing Stopped` and click on `Add tile`.
- In the `Telemetry` section, select `Pipeline State` and click on `Add tile`.
- Click `Save`.
- Go to `Views`, select `Visualizing the device` and name this second view `Device`.
- In the `Properties` section, select `Device model`, `Manufacturer`, `Operating system name`, `Processor architecture`, `Processor manufacturer`, `Software version`, `Total memory`, `Total storage` and `RTSP Video Url`, and click on `Add tile`.
- Click `Save`.
- Optionally, add an `About` view to give a description of your device, and customize your application via `Administration > Your Application` and `Administration > Customize your application`.
- Click `Replace manifest`, then `Upload`, select the `deployment.json` file in the `config` folder of this repo, and click `Replace`.
- Click `Publish` and confirm.

We'll create a new IoT Edge device in your IoT Central application with the device template created above, which will enable the NVIDIA Jetson Nano to connect to IoT Central.
- Go to the `Devices` tab and select the `NVIDIA Jetson Nano DCM` device template.
- Click `New`, fill in the `Device ID` and the `Device name` fields (let's use the same name for both of these fields in this workshop), and click `Create`.
- Click on the `Connect` button in the top right corner, then copy the `ID Scope` value, `Device ID` value and `Primary key` value and save them for later.

We'll start from a blank Jetson installation (JetPack v4.3), copy a few files locally that are needed for the application, such as video files to simulate RTSP cameras and DeepStream configuration files, install IoT Edge, and configure it to connect to your IoT Central instance.
On your Jetson Nano, create a folder named `data` at the root:

```bash
sudo mkdir /data
```
Download and extract the setup files in the `data` directory:

```bash
cd /data
sudo wget -O setup.tar.bz2 --no-check-certificate "https://onedrive.live.com/download?cid=0C0A4A69A0CDCB4C&resid=0C0A4A69A0CDCB4C%21588625&authkey=ACUlRaKkskctLOA"
sudo tar -xjvf setup.tar.bz2
```
Make the folder accessible from a normal user account:

```bash
sudo chmod -R 777 /data
```
Install IoT Edge (instructions copied from here for convenience):

```bash
curl https://packages.microsoft.com/config/ubuntu/18.04/multiarch/prod.list > ./microsoft-prod.list
sudo cp ./microsoft-prod.list /etc/apt/sources.list.d/
curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg
sudo cp ./microsoft.gpg /etc/apt/trusted.gpg.d/
sudo apt-get update
sudo apt-get install iotedge
```
Connect your device to your IoT Central application by editing the IoT Edge configuration file:

```bash
sudo nano /etc/iotedge/config.yaml
```
Comment out the default manual provisioning section so that it looks like this:

```yaml
# Manual provisioning configuration
#provisioning:
#  source: "manual"
#  device_connection_string: ""
```
:warning: Beware of spaces since YAML is space sensitive. In YAML, exactly 2 spaces = 1 indentation, and make sure not to leave any trailing spaces.
Then fill in the DPS symmetric key provisioning section with the values you saved earlier:

```yaml
# DPS symmetric key provisioning configuration
provisioning:
  source: "dps"
  global_endpoint: "https://global.azure-devices-provisioning.net"
  scope_id: "<ID Scope>"
  attestation:
    method: "symmetric_key"
    registration_id: "<Device ID>"
    symmetric_key: "<Primary Key>"
```
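Since indentation mistakes in `config.yaml` are a common cause of provisioning failures, a quick sanity check for tabs and trailing whitespace can help. This is a minimal sketch: it writes a sample file so it is self-contained; on a real device, point `FILE` at `/etc/iotedge/config.yaml` (note that `grep -P` requires GNU grep):

```shell
# Check a YAML file for tabs and trailing whitespace, which break parsing.
FILE=${FILE:-/tmp/config.yaml}
printf 'provisioning:\n  source: "dps"\n' > "$FILE"   # sample content
if grep -nP '\t|[ ]+$' "$FILE"; then
  echo "WARNING: found tabs or trailing spaces"
else
  echo "indentation looks clean"
fi
```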
Save and exit your editor (Ctrl+O, Ctrl+X)
Now restart the Azure IoT Edge runtime and check its status with the following commands:

```bash
sudo systemctl restart iotedge
sudo systemctl status iotedge
```
As you can guess from this last step, behind the scenes IoT Central is actually using Azure Device Provisioning Service to provision devices at scale.
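On the device side, the `iotedge` CLI is also handy to confirm that the runtime picked up the deployment. A sketch, guarded so it degrades gracefully on a machine where IoT Edge is not installed:

```shell
# List running IoT Edge modules and tail the DeepStream module's logs.
if command -v iotedge >/dev/null 2>&1; then
  sudo iotedge list                      # shows edgeAgent, edgeHub, deepstream, ...
  sudo iotedge logs deepstream --tail 50
else
  echo "iotedge CLI not found"
fi
```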
With the IoT Edge device connected to the cloud, it can now report back its IP address to IoT Central. Let's verify that it is the case:
- Go to the `Devices` tab from the left navigation and select your device.
- In the `Device` tab, verify that the `RTSP Video URL` starts with the IP address of your device.

After a minute or so, IoT Edge should have had enough time to download all the containers from the cloud per IoT Central's instructions, and DeepStream should have had enough time to start the default video pipeline, called `Demo mode` in the IoT Central UI. Let's see what it looks like:
- Copy the `RTSP Video URL` from the `Device` tab.
- In VLC, go to `Media` > `Open Network Stream`, paste the `RTSP Video URL` copied above as the network URL, and click `Play`.
- Go to the `Dashboard` tab of your device (e.g. from the left nav: `Devices` > your-device > `Dashboard`). The primary detection class (`car` by default) should map to the objects detected by the 4 cameras.

At this point, you should see 4 real-time video streams being processed to detect cars and people with a ResNet 10 AI model.
To demonstrate how to remotely manage this solution, we'll send a command to the device to change its input cameras, using your phone as a new RTSP input camera.
Let's first verify that your phone works properly as an RTSP camera:

- In the IP Camera Lite app, turn on the `IP Camera Server`.
- With VLC, verify that the camera is functional: go to `Media` > `Open Network Stream`, enter the `RTSP Video URL` `rtsp://your-phone-ip-address:8554/live`, click `Play` and verify that your phone's camera is properly displayed.

Let's now update your Jetson Nano to use your phone's camera. In IoT Central:
- Go to the `Manage` tab.
- Uncheck `Demo Mode`, which uses several hardcoded video files of car traffic as input.
- Update the `Video Stream 1` property:
  - In `cameraId`, name your camera, for instance `My Phone`.
  - In `videoStreamUrl`, enter the RTSP stream of this camera: `rtsp://your-phone-ip-address:8554/live`.
- Keep `DeepStream ResNet 10` as the `AI model type`.
- Set the `Secondary Detection Class` to `person`.
- Hit `Save`.
This sends a command to the device to update its DeepStream configuration file with these new properties and restart DeepStream. If you were still streaming the output of the DeepStream application, the stream will be interrupted while DeepStream restarts.
Let's have a closer look at the DeepStream configuration to see what has changed compared to the initial `Demo Mode` configuration, which is copied here. From a terminal connected to your Jetson Nano:
Open up the default configuration file of DeepStream to understand its structure:

```bash
nano /data/misc/storage/DSConfig.txt
```
Look for the first `source` section and observe how the parameters provided in the IoT Central UI got copied here.
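For instance, a quick way to pull out the configured stream URIs is to grep for the `uri=` lines. This sketch builds a sample DeepStream-style config fragment (with illustrative values) so it is self-contained; on the device you would grep `/data/misc/storage/DSConfig.txt` instead:

```shell
# Create a sample config fragment, then extract its uri= lines
# the same way you might inspect DSConfig.txt on the Jetson Nano.
cat > /tmp/DSConfig-sample.txt <<'EOF'
[source0]
enable=1
uri=rtsp://your-phone-ip-address:8554/live
[source1]
enable=1
uri=file:///data/misc/storage/sampleStreams/cam-cans-00.mp4
EOF
grep '^uri=' /tmp/DSConfig-sample.txt | cut -d= -f2-   # prints the two stream URIs
```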
Within a minute, DeepStream should restart. You can observe its status in IoT Central via the `Modules` tab. Once the `deepstream` module is back to `Running`, copy the `RTSP Video Url` field from the `Device` tab again and give it to VLC (`Media` > `Open Network Stream` > paste the `RTSP Video URL` > `Play`).
You should now detect people from your phone's camera. The count of `Person` in the `Dashboard` tab of your device in IoT Central should go up. We've just remotely updated the configuration of this intelligent video analytics solution!
We'll use simulated cameras to monitor each of the soda can production lines, collect images, and build a custom AI model to detect cans that are up or down. We'll then deploy this custom AI model to DeepStream via IoT Central. To do a quick proof of concept, we'll use the Custom Vision service, a no-code computer vision AI model builder.
As a pre-requisite, let's create a new Custom Vision project in your subscription:
- Create a new project named `Soda Cans Down`.
- For the resource, pick `create new` and select the `F0` SKU (or `S0`).
- Set `Project Type` = `Object Detection`.
- Set `Domains` = `General (Compact)`.
We then need to collect images to build a custom AI model. In the interest of time, here is a set of images that have already been captured for you that you can upload to Custom Vision. Download it, unzip it and upload all the images into your Custom Vision project.
We then need to label our images: tag the soda cans that are up as `Up` and the ones that are down as `Down`.
Once you're done labeling, let's train and export your model:
- Train your model by clicking on `Train`.
- Export it from the `Performance` tab by clicking on `Export` and choosing `ONNX`.
- Right-click on the `Download` button and select `Copy link address` to copy the anonymous location of a zip file of your custom model.

In the interest of time, you can also use this link to a pre-built Custom Vision model.
Finally, we'll deploy this custom vision model to the Jetson Nano using IoT Central. In IoT Central:
- Go to the `Manage` tab (beware of the sorting of the fields).
- Verify that `Demo Mode` is unchecked.
- Set the `Video Stream Input` properties to the following values:
  - `Video Stream Input 1` > `CameraId` = `Cam01`
  - `Video Stream Input 1` > `videoStreamUrl` = `file:///data/misc/storage/sampleStreams/cam-cans-00.mp4`
  - `Video Stream Input 2` > `CameraId` = `Cam02`
  - `Video Stream Input 2` > `videoStreamUrl` = `file:///data/misc/storage/sampleStreams/cam-cans-01.mp4`
  - `Video Stream Input 3` > `CameraId` = `Cam03`
  - `Video Stream Input 3` > `videoStreamUrl` = `file:///data/misc/storage/sampleStreams/cam-cans-02.mp4`
- Select `Custom Vision` as the `AI model Type`.
- Paste the location of your model in `Custom Vision Model Url`, for instance `https://onedrive.live.com/download?0C0A4A69A0CDCB4C&resid=0C0A4A69A0CDCB4C%21587636&authkey=AOCf3YsqcZM_3WM` for the pre-built one.
- Set `Primary Detection Class` = `Up` and `Secondary Detection Class` = `Down`.
- Hit `Save`.
After a few moments, the `deepstream` module should restart. Once it is in the `Running` state again, look at the output RTSP stream via VLC (`Media` > `Open Network Stream` > paste the `RTSP Video URL` that you got from IoT Central's `Device` tab > `Play`).
We are now visualizing the processing of 3 real-time (i.e. 30fps 1080p) video feeds with a Custom Vision AI model that we built in minutes to detect visual anomalies!
To be alerted as soon as a soda can is down, we'll set up an alert to send an email whenever a new soda can is detected as being down.

With IoT Central, you can easily define rules and alerts based on the telemetry received by IoT Central. Let's create a rule that fires whenever a soda can is down.
- Go to the `Rules` tab in the left nav and click `New`.
- Name the rule `Soda can down!`.
- Select the `NVIDIA Jetson Nano DCM` device template.
- As the condition, pick the `Secondary Detection Count` telemetry with the `Is greater than` operator, enter `1` as the value and hit Enter.
- Create an `email` Action with the following attributes: `Soda can down` as the display name, then click `Done`.
- Click `Save`.
In a few seconds, you should start receiving some emails :)
This is the end of the workshop. Because there will be another session that uses the same device and Azure account after you, please clean up the resources you've installed so that others can start fresh:
Clean up on the Jetson Nano, via a terminal connected to your Jetson Nano:

```bash
sudo rm -r /data
sudo apt-get remove --purge -y iotedge
```
Delete your IoT Central application, from your browser:

- Go to the `Administration` tab from the left nav.
- `Delete` the application and confirm.

Delete your Custom Vision project, from your browser:

- `Delete` your Custom Vision project and confirm.

Thank you for going through this workshop! We hope that you enjoyed it and found it valuable.
There is other content that you can try with your Jetson Nano at http://aka.ms/jetson-on-azure!