This repository contains the work of Maya Cakmak and the Human-Centered Robotics Lab at the University of Washington. Please see those sites for publications to cite. We abbreviate Programming by Demonstration as PbD.
Currently, the PbD system requires ROS Indigo; the commands below assume a catkin workspace at ~/catkin_ws on both your desktop machine and the robot.
Clone this repository and build it on both your desktop machine and the robot:
cd ~/catkin_ws/src
git clone https://github.com/hcrlab/blinky.git
git clone https://github.com/jstnhuang/mongo_msg_db_msgs.git
git clone https://github.com/jstnhuang/mongo_msg_db.git
git clone https://github.com/jstnhuang/rapid.git
git clone https://github.com/PR2/pr2_pbd.git
cd ~/catkin_ws
rosdep install --from-paths src --ignore-src --rosdistro=indigo -y
catkin_make
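As an optional sanity check (not part of the original instructions), you can confirm that the workspace built and the package is visible to ROS:
source ~/catkin_ws/devel/setup.bash
rospack find pr2_pbd_interaction  # should print the package path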
# Commands on the PR2 (c1)
robot claim
robot start
source ~/catkin_ws/devel/setup.bash
roslaunch pr2_pbd_interaction pbd_backend.launch
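To confirm the backend came up, one option is to list the running nodes; the grep pattern below is an illustration only, since the exact node names depend on pbd_backend.launch:
rosnode list | grep -i pbd  # node names are an assumption; inspect the full list if this prints nothing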
# Commands on the desktop
setrobot <ROBOT_NAME>
roslaunch pr2_pbd_interaction pbd_frontend.launch # rviz, rqt, speech
# Optionally open PR2 dashboard in another terminal window
setrobot <ROBOT_NAME>
rosrun rqt_pr2_dashboard rqt_pr2_dashboard # Optional
Plug a microphone into your computer and speak into it to issue speech commands to the robot. The voice commands are not currently documented.
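For testing without a microphone, it may be possible to inject a recognized phrase directly; the topic name, message type, and phrase below are assumptions for illustration only (check which topic the speech recognition node actually publishes):
rostopic pub -1 /recognizer/output std_msgs/String "data: 'test microphone'"  # topic and phrase are assumptions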
# Run the PbD system in simulation (desktop)
roslaunch pr2_pbd_interaction pbd_simulation_stack.launch
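Once the simulation stack is up, a quick optional check that the simulated PR2 is running (assuming the standard /joint_states topic is published):
rostopic echo -n 1 /joint_states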
If it takes a very long time to save a pose, it is likely because MoveIt is configured to automatically infer the planning scene from sensor data. This makes it very slow to compute IK solutions, which are used to color the gripper markers in RViz. To eliminate this behavior, run MoveIt with a dummy sensor:
<include file="$(find pr2_moveit_config)/launch/move_group.launch" machine="c2">
  <arg name="moveit_octomap_sensor_params_file" value="$(find my_package)/config/sensors_dummy.yaml"/>
</include>
where sensors_dummy.yaml looks like this:
sensors:
  - sensor_plugin: occupancy_map_monitor/PointCloudOctomapUpdater
    point_cloud_topic: /head_mount_kinect/depth_registered/pointsdummy
    max_range: 5.0
    point_subsample: 10
    padding_offset: 0.1
    padding_scale: 1.0
    filtered_cloud_topic: filtered_cloud
# Run the end-to-end tests on the desktop
rostest pr2_pbd_interaction test_endtoend.test
# Run the end-to-end tests on the real robot
roscd pr2_pbd_interaction
python test/test_endtoend_realrobot.py
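Alternatively, assuming the tests are registered with catkin in the usual way, they can be run through catkin_make:
cd ~/catkin_ws
catkin_make run_tests_pr2_pbd_interaction  # package-specific run_tests target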
After running the tests, you can view code coverage by opening ~/.ros/htmlcov/index.html in a web browser. Note that you can also view code coverage for normal execution by passing coverage:=true when launching the backend (pbd_backend.launch).
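For example, to collect and view coverage for a normal (non-test) session:
roslaunch pr2_pbd_interaction pbd_backend.launch coverage:=true
# ... interact with the robot, then shut down the backend ...
google-chrome ~/.ros/htmlcov/index.html  # or any web browser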
With an account set up at Coveralls, edit .coveralls.yml with your repo_token, and track coverage there by running coveralls --data_file ~/.ros/.coverage.
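A minimal sketch of that setup (the token is a placeholder; .coveralls.yml lives in the repository root):
echo "repo_token: <YOUR_REPO_TOKEN>" > .coveralls.yml  # do not commit a real token
coveralls --data_file ~/.ros/.coverage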
Before creating a pull request, please do the following:
1. Lint your Python code to pep8 standards by running pep8 file1 file2 ....
2. Optionally, format all your Python code with yapf and all your C++ code with clang-format (an illustrative invocation follows the script below). See the HCR Lab's auto code formatting guide.

(Untested) We have provided a script that lints all Python files in common directories, runs the tests on the desktop, opens the code coverage in Google Chrome, and sends the results to Coveralls (assuming a Coveralls account and .coveralls.yml are correctly set up):
$ roscd pr2_pbd_interaction; ./scripts/test_and_coverage.sh
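An illustrative set of lint/format invocations (the file paths are examples only, not a prescribed layout):
pep8 src/pr2_pbd_interaction/*.py           # lint to pep8 standards
yapf -i src/pr2_pbd_interaction/*.py        # format Python in place
clang-format -i src/*.cpp                   # format C++ in place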
Please use the GitHub issues page for this project to ask questions and report bugs.