
Quickstart to deploy an Intelligent Video Analytics application running at the edge over multiple cameras and with custom AI models.

Azure IoT Edge Workshop: Visual Anomaly Detection over multiple cameras with NVIDIA Jetson Nano devices

In this workshop, you'll discover how to build a solution that can process several real-time video streams with an AI model on a $100 device, how to build your own AI model to detect custom anomalies and finally how to operate it remotely.

We'll put ourselves in the shoes of a soda can manufacturer who wants to improve the efficiency of his plant. An improvement that he'd like to make is to be able to detect soda cans that fall down on his production lines, to monitor his production lines from home, and to be alerted when this happens. He has 3 production lines, all moving at a fairly quick speed.

To satisfy the real-time, multi-camera, custom AI model requirements, we'll build this solution using NVIDIA DeepStream on an NVIDIA Jetson Nano device. We'll build our own AI model with Azure Custom Vision. We'll deploy and connect it to the Cloud with Azure IoT Edge and Azure IoT Central. Azure IoT Central will be used to do the monitoring and alerting.

Check out this video recording to help you go through all the steps and concepts used in this workshop: Workshop Recording

Prerequisites

  • An NVIDIA Jetson Nano device flashed with JetPack v4.3. You can verify the installed L4T release with the command below:

    head -n 1 /etc/nv_tegra_release

  • Jupyter Notebook
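The head command above prints the L4T release line of /etc/nv_tegra_release. On a JetPack v4.3 image, it should look something like this (GCID and date details will vary):

    # R32 (release), REVISION: 3.1, GCID: ..., BOARD: t210ref, EABI: aarch64, DATE: ...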

The next sections walk you step by step through deploying DeepStream on an IoT Edge device, updating its configuration via a pre-built IoT Central application, and building a custom AI model with Custom Vision. Concepts are explained along the way.

Understanding the solution running at the Edge

The soda can manufacturer already asked a partner to build a first prototype solution that can analyze video streams with a given AI model and connect it to the cloud. The solution built by this partner is composed of two main blocks:

  1. NVIDIA DeepStream, which does all the video processing

    DeepStream is a highly optimized video processing pipeline, capable of running one or more deep neural networks (i.e. AI models). It provides outstanding performance thanks to several techniques that we'll discover below. It is a must-have tool whenever you have complex video analytics requirements like real-time object detection or when employing cascading AI models.

DeepStream runs as a container, which can be deployed and managed by IoT Edge. It is also integrated with IoT Edge so that it sends all its outputs to the IoT Edge runtime.

The DeepStream application we are using was easy to build since we use the out-of-the-box one provided by NVIDIA in the Azure Marketplace here. We're using this module as-is and are only configuring it from the IoT Central bridge module.

DeepStream in the Azure Marketplace

  2. A bridge to IoT Central, which transforms telemetry sent by DeepStream into a format understood by IoT Central and configures DeepStream remotely.

It formats all telemetry, properties, and commands using IoT Plug and Play (aka PnP), which is the declarative language used by IoT Central to understand how to communicate with a device.

Understanding NVIDIA DeepStream

DeepStream is an SDK based on GStreamer, an open-source, battle-tested platform to create video pipelines. It is very modular thanks to its concept of plugins: each plugin has sinks and sources. NVIDIA provides several plugins as part of DeepStream which are optimized to leverage NVIDIA's GPUs or other NVIDIA hardware like dedicated encoding/decoding chips. How these plugins are connected with each other is defined in the application's configuration file.
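To make the plugin model concrete, here is a minimal GStreamer pipeline you can try on any machine with GStreamer installed. It uses plain GStreamer plugins; DeepStream plugins such as nvinfer slot into a pipeline the same way:

    # A test video source, converted and rendered by an auto-selected video sink.
    # Each element is a plugin; the '!' operator links one plugin's source pad
    # to the next plugin's sink pad.
    gst-launch-1.0 videotestsrc ! videoconvert ! autovideosink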

Here is an example of what an end-to-end DeepStream pipeline looks like:

NVIDIA DeepStream Application Architecture.

You can learn more about its architecture in NVIDIA's official documentation.

To better understand how NVIDIA DeepStream works, let's have a look at its default configuration file, copied here in this repo (called Demo Mode in the IoT Central UI later on).

Observe in particular how the video sources, the output sinks, and the primary inference engine (the AI model) are defined, as in the illustrative sketch below:
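For illustration, here is a hedged sketch of the kinds of groups you'll find in a deepstream-app configuration file. The paths and values below are placeholders, not the repo's actual ones:

    [source0]
    enable=1
    # type=3 (MultiURI) loops one or more video files to simulate live cameras
    type=3
    uri=file:///path/to/sample_video.mp4
    num-sources=4

    [sink0]
    enable=1
    # type=4 encodes the annotated output and serves it over RTSP
    type=4
    rtsp-port=8554

    [primary-gie]
    enable=1
    # The primary inference engine: points to the AI model config run on every frame
    config-file=config_infer_primary.txt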

Understanding the connection to IoT Central

IoT Edge connects to IoT Central with the regular Module SDK (you can look at the source code here). The telemetry, properties, and commands that the IoT Central bridge module sends and receives follow the IoT Plug and Play (aka PnP) format, which is enforced in the Cloud by IoT Central. IoT Central validates them against a Device Capability Model (DCM), which is a file that defines what this IoT Edge device is capable of doing.

Enough documentation! Let's now see the solution built by our partner in action.

Operating the solution with IoT Central app

Let's start by creating a new IoT Central app to remotely control the Jetson Nano.

Create a new IoT Central app

[!WARNING] Steps to create an IoT Central application have changed compared to the recording because the IoT Central team deprecated copying an IoT Central application that contains IoT Edge devices. The following steps have thus been revised to create an IoT Central application from scratch and manually add and configure the Jetson Nano device as an IoT Edge device in IoT Central.

We'll start from a new IoT Central application, add a Device Capability Model and an IoT Edge deployment manifest that describe the video analytics solution running on the NVIDIA Jetson Nano, and optionally customize our IoT Central application.

Create an IoT Edge device from your IoT Central app

We'll create a new IoT Edge device in your IoT Central application with the device template created above, which will enable the NVIDIA Jetson Nano to connect to IoT Central.

Setting up your device to be used with your IoT Central application

We'll start from a blank Jetson installation (JetPack v4.3), copy a few files locally that are needed by the application (such as video files to simulate RTSP cameras and DeepStream configuration files), install IoT Edge, and configure it to connect to your IoT Central instance.

  1. On your Jetson Nano, create a folder named data at the root:

    sudo mkdir /data
  2. Download and extract the setup files into the data directory:

    cd /data
    sudo wget -O setup.tar.bz2 --no-check-certificate "https://onedrive.live.com/download?cid=0C0A4A69A0CDCB4C&resid=0C0A4A69A0CDCB4C%21588625&authkey=ACUlRaKkskctLOA"
    sudo tar -xjvf setup.tar.bz2
  3. Make the folder accessible from a normal user account:

    sudo chmod -R 777 /data
  4. Install IoT Edge (instructions copied from here for convenience):

    curl https://packages.microsoft.com/config/ubuntu/18.04/multiarch/prod.list > ./microsoft-prod.list
    sudo cp ./microsoft-prod.list /etc/apt/sources.list.d/
    curl https://packages.microsoft.com/keys/microsoft.asc | gpg --dearmor > microsoft.gpg
    sudo cp ./microsoft.gpg /etc/apt/trusted.gpg.d/
    sudo apt-get update
    sudo apt-get install iotedge
  5. Connect your device to your IoT Central application by editing the IoT Edge configuration file:

    • Use your favorite text editor to edit the IoT Edge configuration file:
    sudo nano /etc/iotedge/config.yaml
    • Comment out the "Manual provisioning configuration" section so it looks like this:
    # Manual provisioning configuration
    #provisioning:
    #  source: "manual"
    #  device_connection_string: ""
    • Uncomment the "DPS symmetric key provisioning configuration" section (not the TPM one but the symmetric key one) and add your IoT Central app's Scope ID, the registration_id (which is your Device ID), and its primary symmetric key:

    :warning: Beware of spaces since YAML is space sensitive. In YAML, exactly 2 spaces = 1 indentation level, and make sure not to leave any trailing spaces.

    # DPS symmetric key provisioning configuration
    provisioning:
      source: "dps"
      global_endpoint: "https://global.azure-devices-provisioning.net"
      scope_id: "<ID Scope>"
      attestation:
        method: "symmetric_key"
        registration_id: "<Device ID>"
        symmetric_key: "<Primary Key>"
    • Save and exit your editor (Ctrl+O, Ctrl+X)

    • Now restart the Azure IoT Edge runtime with the following command:

    sudo systemctl restart iotedge
    • And let's verify that the connection to the cloud has been correctly established. If it isn't the case, please check your IoT Edge config file.
    sudo systemctl status iotedge

As you can guess from this last step, behind the scenes IoT Central is actually using the Azure Device Provisioning Service (DPS) to provision devices at scale.
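You can also inspect the modules directly on the device with the iotedge CLI. A quick sketch (the deepstream module name comes from the deployment manifest; yours may differ):

    # List all IoT Edge modules and their current status
    sudo iotedge list
    # Stream the logs of the DeepStream module to troubleshoot startup
    sudo iotedge logs deepstream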

With the IoT Edge device connected to the cloud, it can now report back its IP address to IoT Central. Let's verify that it is the case:

  1. Go to your IoT Central application
  2. Go to Devices tab from the left navigation
  3. Click on your device
  4. Click on its Device tab
  5. Verify that the RTSP Video URL starts with the IP address of your device (see the example format below)
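With deepstream-app's default RTSP sink settings (port 8554 and the ds-test mount point), the URL typically has this shape; treat the exact path as an assumption and trust the value reported in IoT Central:

    rtsp://<your-jetson-nano-ip>:8554/ds-test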

After a minute or so, IoT Edge should have had enough time to download all the containers from the Cloud per IoT Central's instructions, and DeepStream should have had enough time to start the default video pipeline, called Demo mode in the IoT Central UI. Let's see what it looks like:

  1. In IoT Central, copy the RTSP Video URL from the Device tab
  2. Open VLC and go to Media > Open Network Stream and paste the RTSP Video URL copied above as the network URL and click Play
  3. In IoT Central, go to to the Dashboard tab of your device (e.g. from the left nav: Devices > your-device > Dashboard)
  4. Verify that active telemetry is being sent by the device to IoT Central. In particular, the number of primary detections, which are set to car by default, should map to the objects detected by the 4 cameras.

At this point, you should see 4 real-time video streams being processed to detect cars and people with a Resnet 10 AI model.

4 video streams processed real-time

Operating the solution

To demonstrate how to remotely manage this solution, we'll send a command to the device to change its input cameras. We'll use your phone as a new RTSP input camera.

IoT Central

Changing input cameras

Let's first verify that your phone works properly as an RTSP camera. With VLC:
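For example, if your phone's camera app reports an address like the one below, you can open it directly with VLC from a terminal. The address, port, and path depend on the app you use; this one is hypothetical:

    # Open the phone's RTSP stream in VLC from the command line
    vlc rtsp://192.168.1.42:8554/live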

Let's now update your Jetson Nano to use your phone's camera. In IoT Central:

This sends a command to the device to update its DeepStream configuration file with these new properties and to restart DeepStream. If you were still streaming the output of the DeepStream application, this stream will be taken down while DeepStream restarts.

Let's have a closer look at DeepStream configuration to see what has changed compared to the initial Demo Mode configuration which is copied here. From a terminal connected to your Jetson Nano:

  1. Open up the default configuration file of DeepStream to understand its structure:

    nano /data/misc/storage/DSConfig.txt
  2. Look for the first source and observe how the parameters provided in the IoT Central UI got copied here (you can jump there with grep, as shown below).
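For instance, this prints each [sourceN] group and the lines that follow it:

    # Show every source group of the DeepStream config with 5 lines of context
    grep -A 5 "\[source" /data/misc/storage/DSConfig.txt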

Within a minute, DeepStream should restart. You can observe its status in IoT Central via the Modules tab. Once the deepstream module is back to Running, copy the RTSP Video URL field again from the Device tab and give it to VLC (Media > Open Network Stream > paste the RTSP Video URL > Play).

You should now detect people from your phone's camera. The count of Person in the Dashboard tab of your device in IoT Central should go up. We've just remotely updated the configuration of this intelligent video analytics solution!

Use an AI model to detect custom visual anomalies

We'll use simulated cameras to monitor each of the soda can production lines, collect images, and build a custom AI model to detect cans that are up or down. We'll then deploy this custom AI model to DeepStream via IoT Central. To do a quick proof of concept, we'll use the Custom Vision service, a no-code computer vision AI model builder.

As a pre-requisite, let's create a new Custom Vision project in your subscription:

We then need to collect images to build a custom AI model. In the interest of time, here is a set of images that has already been captured for you and that you can upload to Custom Vision. Download it, unzip it, and upload all the images into your Custom Vision project.

Next, we need to label our images:

Labelling in Custom Vision

Once you're done labeling, let's train and export your model:

In the interest of time, you can also use this link to a pre-built Custom Vision model.

Finally, we'll deploy this custom vision model to the Jetson Nano using IoT Central. In IoT Central:

After a few moments, the deepstream module should restart. Once it is in Running state again, look at the output RTSP stream via VLC (Media > Open Network Stream > paste the RTSP Video URL that you got from the IoT Central's Device tab > Play).

We are now visualizing the processing of 3 real-time (e.g. 30fps 1080p) video feeds with a Custom Vision AI model that we built in minutes to detect visual anomalies!

Custom Vision

Creating an alert

To be alerted as soon as a soda can is down, we'll set up a rule to send an email whenever a new soda can is detected as being down.

With IoT Central, you can easily define rules and alerts based on the telemetry received by IoT Central. Let's create one that fires whenever a soda can is down.

  1. Go to the Rules tab in the left nav
  2. Click on New
  3. Give it a name like Soda can down!
  4. Select your device template NVIDIA Jetson Nano DCM
  5. Create a Condition with the following attributes:
    • Telemetry = Secondary Detection Count
    • Operator = Is greater than
    • Value = 1 and hit Enter
  6. Create an email Action with the following attributes:
    • Display name = Soda can down
    • To = your email address used to login to your IoT Central application
    • hit Done
  7. Save

In a few seconds, you should be receiving some emails :)

Clean-up

This is the end of the workshop. Because there will be another session that uses the same device and Azure account after you, please clean up the resources you've installed to let others start fresh:
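On the Jetson Nano itself, the device-side cleanup might look like the sketch below; adjust it to what you actually installed, and also remember to delete the IoT Central application from its Administration page:

    # Stop and remove the IoT Edge runtime (adjust if you installed more packages)
    sudo systemctl stop iotedge
    sudo apt-get remove --purge iotedge
    # Remove the workshop files downloaded earlier
    sudo rm -rf /data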

Going further

Thank you for going through this workshop! We hope that you enjoyed it and found it valuable.

There is more content that you can try with your Jetson Nano at http://aka.ms/jetson-on-azure!