AICoE / integration-demo-summit-2022

Summit 2022 OCTO Keynote
GNU General Public License v3.0

ML @ THE EDGE!

In this demo we will walk through what it takes to manage a machine learning application's lifecycle on an edge device using a few projects developed by Red Hat's Emerging Technologies group. Specifically, we will use Open Data Hub, Microshift and Meteor, along with a couple of other open source projects running on the Operate First community cloud.

The goal of this project is to demonstrate the development, training and deployment of an intelligent application onto an edge device, an autonomous RC car, via the Operate First community cloud.

The Approach

For this demo we are going to target the autonomous RC car project DonkeyCar.

donkey car

https://www.donkeycar.com/

The DonkeyCar project describes itself as "An opensource DIY self driving platform for small scale cars. RC CAR + Raspberry Pi + Python (tornado, keras, tensorflow, opencv, ....)". This project has an active community, as well as open source code, data, and ML models, which make it simple to develop this demo in an open source and evergreen fashion. Furthermore, the DonkeyCar project has also built a simulator, so anyone can try out the vast majority of this demo without needing a physical car!
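
If you want to poke at the simulator before anything else, the gym-donkeycar package wraps it in a standard Gym environment. Below is a minimal sketch of driving a simulated car with random actions; the simulator executable path, port, and track name are placeholders you would adjust for your own setup, and the exact Gym API may differ slightly between gym-donkeycar versions.

import gym
import gym_donkeycar  # importing registers the donkey-* simulator environments

# Placeholders: point these at your downloaded simulator binary and its port.
conf = {
    "exe_path": "/path/to/donkey_sim",
    "port": 9091,
}

env = gym.make("donkey-generated-track-v0", conf=conf)

obs = env.reset()
for _ in range(100):
    # Random [steering, throttle] actions, just to confirm the sim is wired up.
    action = env.action_space.sample()
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()

env.close()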

Prerequisites

The infrastructure required for this demo has been (mostly) set up on the Operate First Community Cloud:

The Development Environment

The first thing we do is spin up a data science development environment. This can easily be done by going to https://shower.meteor.zone/ and using the URL for this repo to build a Jupyter Lab image with all of the development requirements from our Pipfile.lock installed.

meteor

Once the build is complete, we can go ahead and spawn our custom notebook image in the Operate First Community Cloud JupyterHub instance. The image should have the same name as the Meteor build.

jupyter_spawner

Train A Model

Now that we've spawned a Jupyter Lab environment, we're ready to start training a model for a self driving car! From this point, we can start to download some datasets, train models, and experiment with different machine learning model architectures!
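
To give a flavor of that experimentation, here is a rough sketch of a behavioral-cloning model in Keras, loosely inspired by DonkeyCar's default linear architecture: a small convolutional network that maps a camera frame to steering and throttle values. The random arrays are placeholders standing in for a real recorded dataset, and the layer sizes are illustrative rather than exactly what our notebooks use.

import numpy as np
from tensorflow.keras import layers, Model

def build_model(input_shape=(120, 160, 3)):
    # Small convolutional network mapping one camera frame to steering and throttle.
    img_in = layers.Input(shape=input_shape, name="img_in")
    x = img_in
    for filters, kernel, stride in [(24, 5, 2), (32, 5, 2), (64, 5, 2), (64, 3, 1)]:
        x = layers.Conv2D(filters, kernel, strides=stride, activation="relu")(x)
    x = layers.Flatten()(x)
    x = layers.Dense(100, activation="relu")(x)
    x = layers.Dense(50, activation="relu")(x)
    steering = layers.Dense(1, name="steering")(x)
    throttle = layers.Dense(1, name="throttle")(x)
    return Model(inputs=img_in, outputs=[steering, throttle])

model = build_model()
model.compile(optimizer="adam", loss="mse")

# Placeholder arrays standing in for a real recorded driving dataset
# (camera frames plus the steering/throttle values the driver used).
images = np.random.rand(256, 120, 160, 3).astype("float32")
steering_labels = np.random.uniform(-1, 1, size=(256, 1)).astype("float32")
throttle_labels = np.random.uniform(0, 1, size=(256, 1)).astype("float32")

model.fit(images, [steering_labels, throttle_labels], epochs=5, batch_size=32)
model.save("my_pilot.h5")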

Below you can find two example notebooks we've used to download some publicly available data and train a simple model:

Deploy the Model

Now that we have a trained self driving car model ready to go, let's deploy it to our car (or digital twin)!

This is done by simply making a Pull Request to our GitHub repository, getting our new model merged into the upstream repository, and then making a new tag release on the repo. The tag release kicks off the automated build pipeline through Tekton, based on our Digital Twin Dockerfile and RC Car Dockerfile, creating two new images, each containing our new model, and pushing them to quay.io along with signed signatures.

$ git tag <your release tag>
$ git push origin <your release tag>

tekton

Once the images are built and pushed to quay.io, ArgoCD takes over and deploys the new images to their respective devices. The cosigned validator verifies the image signatures, ensuring the images are the valid ones, and approves the deployment to Microshift running on a Raspberry Pi controlling the physical RC car and to Microshift running on a NUC controlling the digital twin.
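
Inside each deployed image, the application's job boils down to loading the newly baked-in model and turning camera frames into steering and throttle commands. The sketch below assumes a Keras model like the one trained above and a placeholder path for wherever the Dockerfile copies the model file; the real car wires this up through the DonkeyCar parts framework rather than a bare loop like this.

import numpy as np
import tensorflow as tf

# Load the model baked into the image by the Tekton pipeline.
# The path is a placeholder for wherever the Dockerfile copies the model file.
model = tf.keras.models.load_model("/app/my_pilot.h5")

def drive(frame):
    # frame: a single camera image shaped like the training input (120x160x3).
    batch = np.expand_dims(frame.astype("float32"), axis=0)
    steering, throttle = model.predict(batch, verbose=0)
    return float(steering[0][0]), float(throttle[0][0])

# One random frame standing in for the car's (or digital twin's) camera feed.
angle, speed = drive(np.random.rand(120, 160, 3))
print(f"steering={angle:.3f} throttle={speed:.3f}")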

Sim-Car

And with that we have successfully developed and deployed a self-driving RC car model to the edge with the help of a couple of projects developed by Red Hat's Emerging Technologies group!