In most cases, enabling an IoT device with AI capabilities involves sending data from the device to a server, where the machine learning calculations happen. The results are then sent back to the device for appropriate action.
When data security or network connectivity is a concern, this is not an ideal or feasible approach.
With this code pattern, you will learn how to build and deploy machine learning apps that can run offline and directly on the device (in this case a Raspberry Pi). Using Node-RED with TensorFlow.js, you can incorporate machine learning into your device in an easy, low-code way.
Node-RED is an open source visual programming tool that offers a browser-based flow editor for wiring together devices, APIs, and services. Built on Node.js, you can extend its features by creating your own nodes or taking advantage of the JavaScript and NPM ecosystem.
TensorFlow.js is an open source JavaScript library to build, train, and run machine learning models in JavaScript environments such as the browser and Node.js.
By combining Node-RED with TensorFlow.js, developers and IoT enthusiasts can more easily add machine learning functionality to their devices.
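For a sense of what TensorFlow.js code looks like outside of Node-RED, here is a minimal sketch of object detection in Node.js. It assumes the @tensorflow/tfjs-node and @tensorflow-models/coco-ssd packages; these are illustrative and not necessarily what the tfjs object detection node in this pattern uses internally:

const tf = require('@tensorflow/tfjs-node');
const cocoSsd = require('@tensorflow-models/coco-ssd');
const fs = require('fs');

async function detect(imagePath) {
  // Load the pre-trained COCO-SSD model (downloaded on first use)
  const model = await cocoSsd.load();
  // Decode the image file into a tensor the model can consume
  const image = tf.node.decodeImage(fs.readFileSync(imagePath));
  // Returns an array of predictions: [{ class, score, bbox }, ...]
  const predictions = await model.detect(image);
  image.dispose(); // free the tensor's memory
  console.log(predictions);
}

detect('photo.jpg');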
When you have completed this code pattern, you will understand how to:
Follow these steps to set up and run this code pattern. The steps are described in detail below.
First, let's get the code. From the terminal of the system you plan on running Node-RED from, do the following:
Clone the node-red-tensorflowjs repo:
$ git clone https://github.com/IBM/node-red-tensorflowjs
Move into the directory of the cloned repo:
$ cd node-red-tensorflowjs
Note: For Raspberry Pi users, details on accessing the command line can be found in the remote access documentation if not connecting with a screen and keyboard.
You can install the necessary dependencies by running:
$ npm install
This will install Node-RED along with any necessary custom node packages for running the browser flow in the local node_modules folder, and you can move on to starting Node-RED.
Alternatively, if you already have Node-RED installed on your system, you can just install the dependencies from your Node-RED user directory (~/.node-red). Run the following block of code, being sure to change the <full path> placeholder to the path of the cloned repo:
cd ~/.node-red
npm install <full path>/node-red-contrib-tfjs-object-detection
npm install node-red-contrib-browser-utils node-red-contrib-play-audio node-red-contrib-image-output
Be sure to restart Node-RED if it was already running when installing this way.
Note: If you are using a Raspberry Pi, instructions for installing Node-RED can be found here. However, if you are using the Raspbian operating system for the Raspberry Pi, Node-RED comes pre-installed, so you can just install the dependencies from the ~/.node-red directory.
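For reference, the Node-RED project provides an install and upgrade script for Raspberry Pi OS and other Debian-based systems. At the time of writing it can be run as shown below, though you should check the Node-RED documentation for the current command:

$ bash <(curl -sL https://raw.githubusercontent.com/node-red/linux-installers/master/deb/update-nodejs-and-nodered)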
Node-RED can be started from a terminal by running this command from within the directory of the cloned repository:
$ npm start
Alternatively, if you have Node-RED installed globally with dependencies installed under ~/.node-red, you can start Node-RED from any directory:
$ node-red
You can stop Node-RED by closing the terminal window or using Ctrl-C in the terminal.

The Node-RED editor can be accessed from http://localhost:1880. However, if Node-RED is running on the Raspberry Pi, you can connect to it via http://<Raspberry Pi IP>:1880.
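If you don't know your Raspberry Pi's IP address, one simple way to find it is to run the following on the Pi itself:

$ hostname -I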
Once installed, the node can be added and used in the flow of your Node-RED application. To import the flows available in this repo:

- browser-flow.json
- raspberrypi-flows.json

The Node-RED flow can be deployed in multiple ways. Follow the option that best fits your use case:
The Raspberry Pi flows use hardware peripherals and Raspberry Pi specific nodes. This assumes you imported the raspberrypi-flows.json file.
The following hardware components are needed to fully run this flow:
Additionally, a few custom nodes are needed and can be added through the Palette Manager:

- node-red-contrib-image-output
- node-red-contrib-camerapi if using a Raspberry Pi Camera Module
- node-red-contrib-usbcamera if using a USB camera. Install fswebcam on the Raspberry Pi by running sudo apt install fswebcam.
The imported flows file contains two flows. In each flow, the captured image is sent to the tfjs object detection node, where objects will be detected. A function node will use simple JavaScript to check if any of the detected classes is a class of interest (in this case, a person). If so, a .wav audio file located on the Pi is played through the connected speaker.

Make sure all your hardware is connected, then:
1. Open the Play Audio File exec node and change the path in the append section to the path of a .wav file of your choosing. Click Done when finished.
2. Trigger the flow by clicking the Take Photo inject node.

Note: Feel free to change the detected object by editing the code in the isObjectDetected node; a minimal sketch of that logic appears below.
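As a rough illustration, the check in a function node like isObjectDetected could look something like this. This is a hypothetical sketch, not the repo's actual code, and it assumes msg.payload carries the detection output array:

// Hypothetical sketch of the isObjectDetected function node logic.
// Assumes msg.payload is an array of predictions such as
// [{ class: 'person', score: 0.97, bbox: [x, y, width, height] }, ...]
const classOfInterest = 'person';

const found = msg.payload.some(p => p.class === classOfInterest);

if (found) {
    // Pass the message along to trigger the audio playback branch
    return msg;
}

// Returning null stops the flow here
return null;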
From the Node-RED editor, do the following:

- Click the file inject node and browse to an image, or
- Click the camera node and allow the browser to access the webcam.

The image will be processed by the tfjs object detection node, and the output will be displayed in the Debug panel. If the browser supports the Web Audio API, the objects detected will be spoken.
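For context, one common way to speak text in supporting browsers is the SpeechSynthesis interface. A hypothetical sketch of speaking the detected classes, not necessarily how this flow implements it, and assuming predictions is the detection output array:

// Hypothetical sketch: speak the detected classes in the browser
const classes = predictions.map(p => p.class).join(', ');
const utterance = new SpeechSynthesisUtterance('I see ' + classes);
window.speechSynthesis.speak(utterance);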
This code pattern is licensed under the Apache License, Version 2. Separate third-party code objects invoked within this code pattern are licensed by their respective providers pursuant to their own separate licenses. Contributions are subject to the Developer Certificate of Origin, Version 1.1 and the Apache License, Version 2.