fxgsell / GG-Edge-Inference

Using AWS Greengrass with the Nvidia Jetson TX2 to run ML models prepared with Amazon SageMaker.
MIT License

GG-Edge-Inference

This workshop lets you use AWS Greengrass on the NVIDIA Jetson TX2 to run ML models prepared with Amazon SageMaker.

Workshop Prerequisites

This workshop requires the following:

Device Provisioning

For this workshop, your device requires some preparation, which can be done in one of two ways:

Device: Automated Provisioning

If you are using the NVIDIA Jetson TX2 board, you can use our automated configuration tool. The script installs the libraries needed to run all of the labs in this workshop.

Device: Manual Installation

If you are not using the NVIDIA Jetson TX2 for this workshop, the labs should also work on most other devices (tested on a MacBook and a Raspberry Pi 3), provided the following dependencies are met:

- Python 2.7
- OpenCV
- numpy (via pip)
- face_recognition (via pip) (for *Labs 2 & 3*)
- mxnet (for *Lab 4*)
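Before starting the labs, you can sanity-check these dependencies with a short script. The snippet below is a minimal sketch (written for Python 3 for illustration, while the workshop itself targets Python 2.7); the module list simply mirrors the dependencies above.

```python
# Minimal dependency check for the workshop (illustrative sketch).
# Uses importlib.util.find_spec, so nothing is actually imported.
import importlib.util

WORKSHOP_DEPS = ("cv2", "numpy", "face_recognition", "mxnet")

def missing_dependencies(modules=WORKSHOP_DEPS):
    """Return the names of modules that cannot be found on this device."""
    return [name for name in modules
            if importlib.util.find_spec(name) is None]

if __name__ == "__main__":
    missing = missing_dependencies()
    if missing:
        print("Missing dependencies: {}".format(", ".join(missing)))
    else:
        print("All workshop dependencies found.")
```

Running it on the device before Lab 1 makes it obvious which of the libraries above still need to be installed.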

Environment configuration

Walk-through

This workshop is composed of the following Labs:

  1. Lab 1: Get started with the configuration of AWS Greengrass and your device.
  2. Lab 2: Run a first model.
  3. Lab 3: Extend your edge model with capabilities from the cloud.
  4. Lab 4: Build your own object classification model in SageMaker.
  5. (Optional) Explore advanced capabilities of the Jetson with DeepStream.
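To give a feel for what the labs build toward: a Greengrass Lambda on the device runs the model and publishes inference results over MQTT. The sketch below shows only the message-building step; the topic name and payload fields are assumptions for illustration, not the workshop's actual code.

```python
# Hedged sketch: formatting an inference result for MQTT publication
# from a Greengrass Lambda. Topic and payload shape are assumptions.
import json

def build_message(label, confidence, topic_prefix="gg-edge-inference"):
    """Return an (MQTT topic, JSON payload) pair for one prediction."""
    topic = "{}/predictions".format(topic_prefix)
    payload = json.dumps({"label": label, "confidence": round(confidence, 4)})
    return topic, payload

# On the device, the Greengrass Core SDK would publish the message, e.g.:
#   import greengrasssdk
#   client = greengrasssdk.client("iot-data")
#   topic, payload = build_message("cat", 0.93)
#   client.publish(topic=topic, payload=payload)
```

Keeping the formatting separate from the `greengrasssdk` call makes the logic easy to test off-device, which is convenient when iterating on the labs.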

Tips