This PR, joint with feeding_web_interface#136, makes multiple expansions to ada_feeding to enable users to customize the staging configuration through the web app. These include:
Expanding the service that gets joint states to also get poses of arbitrary child frames in arbitrary parent frames of reference (e.g., to get the end effector pose for MoveFromMouth).
Making MoveToMouth ignore face orientation. This effectively maintains the orientation at the staging configuration, which is reasonable because the camera has to be facing the face, and the camera has the same orientation as the fork.
Expanding Start/Stop Servo to also deactivate the cartesian controller, fixing a bug introduced by #175.
Expanding ada_planning_scene to reject faces and tables that are too far away from the expected position. This is because when customizing the staging configuration, it is possible that the robot arm is above the plate but face detection is toggled on, and sometimes it wrongly detects a face in the pattern of the table (e.g., wood grain).
Changing MoveToMouth and MoveFromMouth to use direct cartesian control, ignoring the MoveIt planning scene. Since that changes the speed of motion to/from the mouth, we also lower the speed near the mouth (interpolating between the earlier speed and a slower speed near the mouth).
Slightly pitching the default AbovePlate configuration so the camera is parallel to the table, to improve the angle from which the user sees the plate.
Fixing a bug in create_action_servers where the trees weren't updated with default parameters when a new namespace was created.
Making face detection more robust, by rejecting outliers when fitting a plane, and by using the median instead of the average.
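As an illustration of the speed interpolation near the mouth, here is a minimal sketch. The function name, speed limits, and distance thresholds below are illustrative placeholders, not the actual ada_feeding parameters:

```python
def transfer_speed(
    dist_to_mouth: float,
    slow_speed: float = 0.02,   # m/s limit right at the mouth (illustrative)
    fast_speed: float = 0.10,   # m/s limit far from the mouth (illustrative)
    slow_radius: float = 0.05,  # within this distance (m), always move slowly
    fast_radius: float = 0.25,  # beyond this distance (m), move at full speed
) -> float:
    """Linearly interpolate the cartesian speed limit based on the
    distance (m) between the fork and the mouth."""
    if dist_to_mouth <= slow_radius:
        return slow_speed
    if dist_to_mouth >= fast_radius:
        return fast_speed
    frac = (dist_to_mouth - slow_radius) / (fast_radius - slow_radius)
    return slow_speed + frac * (fast_speed - slow_speed)
```

The clamp-then-interpolate shape means the limit varies smoothly as the fork approaches, with no sudden speed change at either boundary.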
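The face-detection robustness change (outlier rejection plus median) could look roughly like the following numpy sketch; `fit_plane_robust` and its thresholds are illustrative, not the actual implementation:

```python
import numpy as np

def fit_plane_robust(points, n_iters=5, thresh=2.5):
    """Fit a plane to Nx3 points, iteratively rejecting points whose
    point-to-plane residual exceeds `thresh` times the median residual."""
    pts = np.asarray(points, dtype=float)
    mask = np.ones(len(pts), dtype=bool)
    for _ in range(n_iters):
        center = np.median(pts[mask], axis=0)  # median, not mean
        # The right-singular vector with the smallest singular value is
        # the normal of the best-fit plane through the centered points.
        _, _, vt = np.linalg.svd(pts[mask] - center)
        normal = vt[-1]
        resid = np.abs((pts - center) @ normal)  # point-to-plane distances
        mad = np.median(resid[mask])
        new_mask = resid <= thresh * max(mad, 1e-9)
        if new_mask.sum() < 3 or np.array_equal(new_mask, mask):
            break
        mask = new_mask
    return normal, center, mask
```

Using the median for both the center and the residual scale keeps a few wildly wrong depth readings (e.g., from wood grain mistaken for a face) from skewing the fitted plane.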
Testing procedure
[x] Pull this branch and feeding_web_interface#136, build, and run the code on the real robot: python3 src/ada_feeding/start.py
[x] Test the robot state service:
[x] Run the service just with joints, verify it still works: ros2 service call /get_robot_state ada_feeding_msgs/srv/GetRobotState "{joint_names: ['j2n6s200_joint_1', 'j2n6s200_joint_2', 'j2n6s200_joint_3', 'j2n6s200_joint_4', 'j2n6s200_joint_5', 'j2n6s200_joint_6'], child_frames: [], parent_frames: []}"
[x] Run the service just with one pose, verify it works: ros2 service call /get_robot_state ada_feeding_msgs/srv/GetRobotState "{joint_names: [], child_frames: ['forkTip'], parent_frames: ['j2n6s200_link_base']}"
[x] Run the service with multiple poses, verify it works: ros2 service call /get_robot_state ada_feeding_msgs/srv/GetRobotState "{joint_names: [], child_frames: ['forkTip', 'j2n6s200_link_6'], parent_frames: ['j2n6s200_link_base', 'j2n6s200_link_base']}"
[x] Run the service with joints and poses, verify it works: ros2 service call /get_robot_state ada_feeding_msgs/srv/GetRobotState "{joint_names: ['j2n6s200_joint_1', 'j2n6s200_joint_2', 'j2n6s200_joint_3', 'j2n6s200_joint_4', 'j2n6s200_joint_5', 'j2n6s200_joint_6'], child_frames: ['forkTip'], parent_frames: ['j2n6s200_link_base']}"
[x] Try it with mismatched child and parent frame lengths: ros2 service call /get_robot_state ada_feeding_msgs/srv/GetRobotState "{joint_names: [], child_frames: ['forkTip'], parent_frames: ['j2n6s200_link_base', 'j2n6s200_link_base']}"
[x] (Note, one way you can verify it works is to have the arm move to a different configuration and verify the service response updates accordingly.)
[x] Test MoveToMouth / MoveFromMouth:
[x] Using the web app with the default preset parameters, eat a whole bite and verify bite transfer goes as expected.
[x] Verify that the goal MoveToMouth receives is, in fact, the face detection message received by the web app (feeding_web_interface#136 fixed a bug where the web app was sending an empty goal). (NOTE: I added an extra log in check_face_msg to verify this.)
[x] In the web app, customize the staging configuration so the fork is facing the face from a different angle. Then, do bite transfer and verify the fork angle is mostly maintained.
[x] Verify that the transfer speeds seem comfortable.
[x] Verify that face rejection works:
[x] Go to customizing the staging configuration. Teleop the arm so it is looking somewhere away from the wheelchair. Put your face in front of the arm. Verify that ada_planning_scene doesn't move the head to that position (there should also be a corresponding log saying the detected face is rejected).
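The rejection logic in ada_planning_scene can be sketched as a simple distance gate on the detected position. The function name and threshold below are illustrative placeholders:

```python
import math

def accept_detection(detected, expected, max_dist=0.5):
    """Accept a detected face/table position (x, y, z in meters) only if
    it is within `max_dist` meters of where we expect it to be."""
    dist = math.dist(detected, expected)  # Euclidean distance
    return dist <= max_dist
```

A wood-grain false positive sits near the table surface, far from the expected head position, so a gate like this drops it before the planning scene is updated.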
Before opening a pull request
[x] Format your code using the black formatter: python3 -m black .
[x] Run your code through pylint and address all warnings/errors. The only warnings that are acceptable to not address are TODOs that should be addressed in a future PR. From the top-level ada_feeding directory, run: pylint --recursive=y --rcfile=.pylintrc .
Before Merging
Squash & Merge