amazon-archives / aws-robomaker-sample-application-persondetection


AWS RoboMaker Sample Application - Navigation and Person Detection

This application demonstrates the use of Amazon Rekognition to recognize people's faces and Amazon Polly to synthesize speech.

RoboMaker sample applications include third-party software licensed under open-source licenses and are provided for demonstration purposes only. Incorporation or use of RoboMaker sample applications in connection with your production workloads or commercial products or devices may affect your legal rights or obligations under the applicable open-source licenses. Source code information can be found here.

Usage

Requirements

AWS Account Setup

Resources

This sample application uses Amazon Kinesis and Amazon Rekognition to recognize faces. See here for details. The easiest way to set up these resources is to reuse those created by the AWS RoboMaker sample application. To do that, follow these steps:

  1. Sign in to the AWS RoboMaker console at https://us-west-2.console.aws.amazon.com/robomaker/home/.
  2. In the AWS RoboMaker console, expand Resources on the left and then select Sample applications.
  3. In the Try RoboMaker sample applications page, select Navigation and person recognition and then select Launch.
  4. There will be a banner at the top of the page listing the number of resources that have been created. When all of the resources have been created, the console opens the simulation job detail page. At this point, cancel the simulation job; cancelling it does not delete any of the resources.
  5. Open the AWS CloudFormation console at https://console.aws.amazon.com/cloudformation/ and find the stack with AWSRoboMakerPersonDetection in the name.
  6. Expand the Parameters tab and find the LaunchId Key.
  7. In the runtime environment for the robot application, set the LAUNCH_ID environment variable to the LaunchId value from CloudFormation. In AWS RoboMaker, the environment variable can be set as an option when running a simulation job. In a Linux environment this can be accomplished with `export LAUNCH_ID=<value>`.
  8. When the robot application runs, the launch files use LAUNCH_ID to connect to the correct Kinesis data stream and Kinesis video stream.
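The environment-variable step above can be sketched as follows. The LaunchId value here is a made-up placeholder; substitute the LaunchId parameter value shown in your CloudFormation console:

```shell
# Placeholder value only: copy the real LaunchId from the Parameters tab of
# the CloudFormation stack whose name contains AWSRoboMakerPersonDetection.
export LAUNCH_ID=a1b2c3d4-5678-90ab-cdef-example11111

# The robot application's launch files read this variable to select the
# matching Kinesis data stream and Kinesis video stream.
echo "Using launch id: $LAUNCH_ID"
```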

When setting up resources using the above method, new faces can be added to Rekognition with the following steps:

  1. Open the AWS CloudFormation console at https://console.aws.amazon.com/cloudformation/ and find the stack with AWSRoboMakerPersonDetection in the name.
  2. Expand the Resources tab and find the Physical ID of the Amazon S3 bucket with the Logical ID BundlesBucket.
  3. Copy the image into the photos folder of the bucket with the following command:
    aws s3 cp path/to/image s3://<physical_id>/photos/persons_name.png

If you'd like more detailed control over the resources follow the instructions here: https://docs.aws.amazon.com/rekognition/latest/dg/recognize-faces-in-a-video-stream.html. Make sure that the names of the Kinesis video stream and data stream match those in src/person_detection_robot/launch/kinesis.launch.
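When checking those names, the kind of parameters to look for resembles the following sketch. This is illustrative only, not the actual contents of `kinesis.launch`; the node and parameter names here are assumptions, so consult the real file in `src/person_detection_robot/launch/` for the exact values:

```xml
<!-- Hypothetical sketch: verify the real stream-name parameters in
     src/person_detection_robot/launch/kinesis.launch before relying on this. -->
<launch>
  <node pkg="kinesis_video_streamer" type="kinesis_video_streamer" name="kinesis_video_streamer">
    <!-- These names must match the streams created in your AWS account. -->
    <param name="stream_name" value="my-kinesis-video-stream"/>
  </node>
</launch>
```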

Permissions

To publish to Amazon CloudWatch Metrics and Logs, the IAM user configured in the environment in which the node is running will need the following permissions:

   logs:PutLogEvents
   logs:DescribeLogStreams
   logs:CreateLogStream
   logs:CreateLogGroup
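A minimal IAM policy document granting these actions might look like the following sketch. The wide-open `"Resource": "*"` is for brevity only; scope it down to specific log groups in practice:

```python
import json

# Minimal policy covering the CloudWatch Logs actions listed above.
# "Resource": "*" is illustrative; restrict it in real deployments.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:PutLogEvents",
                "logs:DescribeLogStreams",
                "logs:CreateLogStream",
                "logs:CreateLogGroup",
            ],
            "Resource": "*",
        }
    ],
}

print(json.dumps(policy, indent=2))
```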

For more information about the CloudWatch Metrics node see here: https://github.com/aws-robotics/cloudwatchmetrics-ros1

For more information about the CloudWatch Logs node see here: https://github.com/aws-robotics/cloudwatchlogs-ros1

To use the Kinesis node you will need an IAM user with the following permissions:

  kinesisvideo:CreateStream
  kinesisvideo:TagStream
  kinesisvideo:DescribeStream
  kinesisvideo:GetDataEndpoint
  kinesisvideo:PutMedia
  kinesis:ListShards
  kinesis:GetShardIterator
  kinesis:GetRecords

For more information on the Amazon Kinesis node see here: https://github.com/aws-robotics/kinesisvideo-ros1

To use the Polly node you will need an IAM user with the following permissions:

  polly:SynthesizeSpeech

For more information on the Amazon Polly node see here: https://github.com/aws-robotics/tts-ros1

Information about attaching permissions to an IAM user can be found here: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_change-permissions.html
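One way to attach all of the permissions above at once is an inline user policy via the AWS CLI's `aws iam put-user-policy` command. The sketch below assembles the combined policy and prints the command; the user name and policy name are placeholders, and as before `"Resource": "*"` should be narrowed in real use:

```python
import json
import shlex

# All permissions this sample needs, combined into a single inline policy.
actions = [
    "logs:PutLogEvents", "logs:DescribeLogStreams",
    "logs:CreateLogStream", "logs:CreateLogGroup",
    "kinesisvideo:CreateStream", "kinesisvideo:TagStream",
    "kinesisvideo:DescribeStream", "kinesisvideo:GetDataEndpoint",
    "kinesisvideo:PutMedia",
    "kinesis:ListShards", "kinesis:GetShardIterator", "kinesis:GetRecords",
    "polly:SynthesizeSpeech",
]
policy = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": actions, "Resource": "*"}],
}

# "robot-app-user" and "PersonDetectionSamplePolicy" are placeholder names.
cmd = (
    "aws iam put-user-policy --user-name robot-app-user "
    "--policy-name PersonDetectionSamplePolicy "
    f"--policy-document {shlex.quote(json.dumps(policy))}"
)
print(cmd)
```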

Information about configuring AWS credentials can be found here: https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-multiple-profiles

Build

Install requirements

Follow links above for instructions on installing required software.

Pre-build commands

sudo apt-get update
rosdep update

Robot

cd robot_ws
rosws update
rosdep install --from-paths src --ignore-src -r -y
colcon build

Simulation

cd simulation_ws
rosws update
rosdep install --from-paths src --ignore-src -r -y
colcon build

Run

Launch the application with the following commands:

Using this sample with RoboMaker

You first need to install colcon-ros-bundle. Python 3.5 or above is required.

pip3 install colcon-ros-bundle

After colcon-ros-bundle is installed and you have built your robot and simulation workspaces, bundle each with:

# Bundling Robot Application
cd robot_ws
source install/local_setup.sh
colcon bundle

# Bundling Simulation Application
cd simulation_ws
source install/local_setup.sh
colcon bundle

This produces the artifacts robot_ws/bundle/output.tar and simulation_ws/bundle/output.tar respectively.

You'll need to upload these to an Amazon S3 bucket, then you can use these files to create a robot application, create a simulation application, and create a simulation job in RoboMaker.
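The upload-and-create sequence can be sketched with the AWS CLI as below. The bucket name, object keys, and application name are placeholders, and each command is prefixed with `echo` so the sketch is safe to run without AWS credentials; drop the leading `echo` to execute for real (and check the RoboMaker CLI reference for the full set of required options, e.g. for the simulation application and simulation job):

```shell
# Placeholder bucket; substitute your own S3 bucket in the same region.
BUCKET=my-robomaker-assets-bucket

# Upload the bundled artifacts.
echo aws s3 cp robot_ws/bundle/output.tar "s3://$BUCKET/person-detection/robot.output.tar"
echo aws s3 cp simulation_ws/bundle/output.tar "s3://$BUCKET/person-detection/simulation.output.tar"

# Register the robot application from the uploaded bundle.
echo aws robomaker create-robot-application \
    --name PersonDetectionRobot \
    --robot-software-suite name=ROS,version=Kinetic \
    --sources "s3Bucket=$BUCKET,s3Key=person-detection/robot.output.tar,architecture=X86_64"
```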

Architecture Details

(Architecture diagram)

AWS ROS Packages used by this Sample

License

MIT-0 - See LICENSE for further information

How to Contribute

Create issues and pull requests against this repository on GitHub.