amazon-archives / aws-robomaker-sample-application-persondetection

Use AWS RoboMaker and demonstrate the use of Amazon Rekognition to recognize people's faces and Amazon Polly to synthesize speech.
MIT No Attribution

Improve documentation for running on physical robot #25

Closed · juanrh closed this 5 years ago

juanrh commented 5 years ago

Issue #, if available: #22

Description of changes: This PR improves the documentation for running on a physical robot by being more explicit about how to manually use a colcon bundle to run the application on a robot. It also fixes the launch file for physical robots so that the h264_video_encoder node gets the correct parameters when running on a robot instead of in simulation, since the camera is different in that case.
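For reference, a minimal sketch of the manual bundle workflow the documentation describes could look like the following. The workspace path, robot hostname, extraction directory, package name, and launch file name are placeholders chosen for illustration, and the robot-side extraction steps depend on the colcon bundle format version, so treat this as an outline rather than the exact procedure from the docs.

```sh
# On the development machine: build and bundle the robot workspace.
cd robot_ws
rosdep install --from-paths src --ignore-src -r -y   # resolve ROS dependencies
colcon build                                         # build the workspace
colcon bundle                                        # produces bundle/output.tar

# Copy the bundle to the physical robot (hostname is a placeholder).
scp bundle/output.tar ubuntu@my-turtlebot:~/persondetection_bundle.tar

# On the robot: extract the bundle and source its environment.
# Depending on the colcon bundle format version, output.tar may contain inner
# archives that also need extracting; see the colcon-bundle documentation.
mkdir -p ~/persondetection_bundle
tar -xf ~/persondetection_bundle.tar -C ~/persondetection_bundle
export BUNDLE_CURRENT_PREFIX=~/persondetection_bundle   # prefix assumed by the bundle's setup.sh
source $BUNDLE_CURRENT_PREFIX/setup.sh

# Launch the physical-robot launch file (package and file names are assumptions).
roslaunch person_detection_robot deploy_person_detection.launch
```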

Testing: This has been tested on a TurtleBot 3 Burger running Ubuntu MATE 16.04.2 (Xenial) and ROS Kinetic.

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

juanrh commented 5 years ago

@samuelgundry I understand that deploy_person_detection.launch is not used at all during simulation, but I'd like to double-check with you. Thanks.

samuelgundry commented 5 years ago

@juanrh Correct. deploy_persondetection.launch is NOT used by simulation. The convention is to prefix launch files for deploying to physical robots with "deploy". They should configure non-simulation nodes and parameters, and include the corresponding simulation launch file to keep the setup as close as possible to the simulation environment.
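To illustrate the convention described above, a hypothetical "deploy" launch file might look roughly like this. The package name, included launch file, and topic remapping are placeholders for illustration, not the actual contents of this repository.

```xml
<launch>
  <!-- Hypothetical sketch of a "deploy" launch file for a physical robot. -->

  <!-- Include the launch file that simulation also uses, so the physical
       setup stays as close as possible to the simulation environment
       (package and file names below are placeholders). -->
  <include file="$(find person_detection_robot)/launch/person_detection.launch"/>

  <!-- Configure non-simulation nodes here, e.g. the H.264 encoder, whose
       input topic and parameters differ on the real camera. -->
  <node name="h264_video_encoder" pkg="h264_video_encoder" type="h264_video_encoder" output="screen">
    <!-- Point the encoder at the physical camera's image topic
         (both topic names here are assumptions, not the repository's values). -->
    <remap from="/camera/rgb/image_raw" to="/raspicam_node/image"/>
  </node>
</launch>
```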