This workshop lets you use AWS Greengrass with the NVIDIA Jetson TX2 to run ML models prepared with Amazon SageMaker.
This workshop requires the following:
For this workshop, your device will require some preparation, which can be done in one of two ways:
If you are using the NVIDIA Jetson TX2 board, you can use our automated configuration script, which provisions all the libraries needed to run the labs in this workshop.
If you are not using the NVIDIA Jetson TX2 for this workshop, the labs should also work on most other devices (tested on a MacBook and a Raspberry Pi 3), provided the following dependencies are installed:
- Python 2.7
- OpenCV
- numpy (via pip)
- face_recognition (via pip) (for *Labs 2 & 3*)
- mxnet (for *Lab 4*)
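
To quickly confirm a non-Jetson device is ready, a short check like the one below can report anything missing (a minimal sketch; `cv2` is the import name for OpenCV, and the other names match the packages above):

```python
# Try to import each dependency listed above and report what is missing.
for name in ["cv2", "numpy", "face_recognition", "mxnet"]:
    try:
        __import__(name)
        print("OK      %s" % name)
    except ImportError as err:
        print("MISSING %s (%s)" % (name, err))
```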
This workshop is composed of the following Labs:
Tips:

- Set `DEVICE=PI` for the Lambda function to enable low-performance mode and the PiCamera (it should also work for other low-power devices with USB cameras); see the sketch after these tips.
- Use `mplayer` from a remote computer to view the video stream coming from the device, for example:

  ```
  ssh DEVICE cat /tmp/results.mjpeg | mplayer - -demuxer lavf -lavfdopts format=mjpeg:probesize=32
  ```

- Use `mplayer` on the local device's framebuffer to view the Lambda's output:

  ```
  ssh DEVICE DISPLAY=0:0 mplayer /tmp/results.mjpeg -vo fbdev -demuxer lavf -lavfdopts format=mjpeg:probesize=32 -fs -zoom -xy 1280
  ```
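
The workshop's Lambda code is not reproduced in this section, but the tips above suggest a simple pattern: read the `DEVICE` environment variable to choose a camera path, then append JPEG frames to `/tmp/results.mjpeg` for `mplayer` to consume. The sketch below illustrates that pattern only; apart from `DEVICE`, the output path, and the packages listed earlier, every name and detail here is an assumption, not the workshop's actual code.

```python
import os

import cv2


def stream_frames(path="/tmp/results.mjpeg"):
    # Hypothetical sketch: append camera frames to an MJPEG file that the
    # mplayer commands above can play. Not the workshop's real Lambda code.
    if os.environ.get("DEVICE") == "PI":
        # Low-performance path from the tip above: record MJPEG directly
        # with the PiCamera (assumes the picamera package is installed).
        import picamera
        with picamera.PiCamera(resolution=(640, 480), framerate=10) as cam:
            cam.start_recording(path, format="mjpeg")
            cam.wait_recording(60)  # stream for 60 seconds
            cam.stop_recording()
    else:
        # Generic path: read a USB camera with OpenCV and append each
        # JPEG-encoded frame; MJPEG is just concatenated JPEG frames.
        cap = cv2.VideoCapture(0)
        with open(path, "wb") as out:
            while cap.isOpened():
                ok, frame = cap.read()
                if not ok:
                    break
                ok, jpeg = cv2.imencode(".jpg", frame)
                if ok:
                    out.write(jpeg.tobytes())
        cap.release()


if __name__ == "__main__":
    stream_frames()
```

Because MJPEG is simply a concatenation of JPEG frames, `mplayer` can demux the growing file with `format=mjpeg`, which is why the `ssh` commands above can play the stream while it is still being written.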