Plugins for classifying entities based on their attached RGBD measurements.
Depends on:

- https://github.com/tue-robotics/ed
Check out the following packages in your workspace:

```
cd <your_catkin_workspace>/src
git clone https://github.com/tue-robotics/ed_perception.git
```

And compile:

```
cd <your_catkin_workspace>
catkin build
```
All ED tutorials can be found in the ed_tutorials package: https://github.com/tue-robotics/ed_tutorials
Start the robot, position it in front of the objects, and let the head look at the objects (for example via `robot-console` and the command `inspect hallway_table`). Then open a robot console (e.g. for AMIGO):

```
amigo-console
```
Now you can capture an image and save it to disk using:

```python
amigo.ed.save_image(path="/some/path")
```

This will store the RGBD image (color + depth) and meta-data (such as the timestamp and 6D pose) in the path you specified. If this path does not yet exist, it will be created. The filename will be the date and time stamp of the captured image.
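If you want to check what was captured without starting the annotation GUI, a small script can list the files. This is only a sketch: the `.rgbd`/`.png`/`.json` companion files and their shared base name are assumptions based on the files mentioned elsewhere in this README, not a documented API.

```python
#!/usr/bin/env python
# Sketch: list captured images in a directory. Assumes (based on the files
# mentioned in this README, not on a documented API) that each capture is an
# .rgbd file, optionally accompanied by a .png and a .json meta-data file
# sharing the same base name.
import os
import sys

def list_captures(path):
    for name in sorted(os.listdir(path)):
        base, ext = os.path.splitext(name)
        if ext != '.rgbd':
            continue
        companions = [ext]
        for suffix in ('.png', '.json'):
            if os.path.exists(os.path.join(path, base + suffix)):
                companions.append(suffix)
        print('%s: %s' % (base, ', '.join(companions)))

if __name__ == '__main__':
    list_captures(sys.argv[1] if len(sys.argv) > 1 else '.')
```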
Before you can annotate the images, you need a 3D (ED) model of the supporting furniture (e.g. the table on which the objects are positioned). Let's say the name of this model is `my-lab/table`.
To streamline the annotation, it is also possible to load the available object types in the GUI and auto-annotate the supporting objects. For this, a number of conditions need to be met:

* The images are stored in sub-folders named after the segmentation area and the supporting entity, e.g. `on_top_of_table`, where `on_top_of` is an area defined in the `table` model, like this (a small layout-checking sketch follows this list):

  ```
  images
  |------ on_top_of_table
  |     |  img1.rgbd
  |     |  img1.png
  |     |  ...
  |
  |------ shelf3_cabinet
  |     |  img2.rgbd
  |     |  img2.png
  |     |  ...
  ...
  ```

* `ROBOT_ENV` must be set to a world name defined in `ed_object_models` (`my-lab`) and in `robocup_knowledge`.
* The supporting entity (`my-lab/table`) must be used in a composition defined in `my-lab/model.yaml` (see the ED tutorials, specifically tutorials 1-5). This description is automatically loaded when the annotation GUI starts.
* `robocup_knowledge/src/robocup_knowledge/my-lab` must contain a Python database of the objects that you want to annotate (typically the objects that the robot needs to be able to recognize in `my-lab`).
* Start a `roscore` and run:

  ```
  rosrun ed_perception load_object_types
  ```
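The folder layout above is easy to get wrong, so here is a minimal checking sketch. The `<area>_<entity>` naming convention and the `.rgbd`/`.png` pairing are read off the example tree above; this script is not part of ed_perception and only encodes those assumptions.

```python
#!/usr/bin/env python
# Sketch: sanity-check the image folder layout expected for auto-annotation.
# Assumptions (taken from the example tree above, not enforced by ed_perception):
# sub-folders are named <area>_<entity> (e.g. on_top_of_table) and every
# .rgbd file has a matching .png next to it.
import os
import sys

def check_layout(root):
    ok = True
    for sub in sorted(os.listdir(root)):
        sub_path = os.path.join(root, sub)
        if not os.path.isdir(sub_path):
            continue
        if '_' not in sub:
            print('WARNING: folder "%s" does not look like <area>_<entity>' % sub)
            ok = False
        for name in os.listdir(sub_path):
            base, ext = os.path.splitext(name)
            if ext == '.rgbd' and not os.path.exists(os.path.join(sub_path, base + '.png')):
                print('WARNING: %s/%s.rgbd has no matching .png' % (sub, base))
                ok = False
    return ok

if __name__ == '__main__':
    sys.exit(0 if check_layout(sys.argv[1] if len(sys.argv) > 1 else 'images') else 1)
```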
Now you can start the annotator with the path in which you stored the images:

```
rosrun ed_perception annotation-gui /some/path
```

and cycle through the images using the arrow keys.
If you don't have auto-annotation of the supporting object, the segmentation is probably pretty bad. This will however change once you add the supporting furniture. To annotate it, start typing the name of its model, in our case `my-lab/table`. You should see the entity name appearing. Press enter and click on the supporting entity in the image.

Go through all the images and annotate the supporting entity.

By default, the area that is used for segmentation is `on_top_of`. You can however specify another area. If you want this, type `area:<name-of-area>`, for example `area:shelf1`, and press enter.
Once an image has the supporting entity annotated, segmentation should improve. Now you can annotate the rest of the objects in the images using the same process.
If you encounter any images that are useless for annotation/training, it is possible to exclude them. If you do that, the image crawler will skip the image in the future, in the GUI as well as while training or testing. Excluding an image is done by typing `exclude`, pressing enter and clicking in the image. This can only be undone by manually opening the JSON file with the meta-data of the excluded image and setting `exclude` to false.

You can always exit the annotation-gui by pressing ESC. Your progress will be saved (in the JSON meta-data files).
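Flipping the `exclude` flag back by hand is error-prone, so here is a small sketch that does it. The `exclude` flag itself is described above; the assumption that the meta-data lives in a standalone JSON file you point the script at is mine, so adapt the path handling to your actual files.

```python
#!/usr/bin/env python
# Sketch: re-include previously excluded images by editing their JSON meta-data.
# The "exclude" flag is described in the README above; the exact location and
# layout of the meta-data file is an assumption, so pass the JSON files explicitly.
import json
import sys

def unexclude(meta_path):
    with open(meta_path) as f:
        meta = json.load(f)
    if meta.get('exclude'):
        meta['exclude'] = False
        with open(meta_path, 'w') as f:
            json.dump(meta, f, indent=2)
        print('Re-included', meta_path)
    else:
        print('Not excluded:', meta_path)

if __name__ == '__main__':
    for path in sys.argv[1:]:
        unexclude(path)
```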
Once you have a decent number of annotations of all the objects you want your robot to recognize, you might want to train a neural network using the segments containing the annotated objects. Good news! You can do that! By running

```
rosrun ed_perception store_segments /path/containing/annotated/images/ /target_directory/
```

the annotated segments are cut out and stored in the following directory structure:
```
target_directory
|------ coke
|     |  img1.png
|     |  img2.png
|     |  ...
|
|------ toilet_paper
|     |  img1.png
|     |  img11.png
|     |  ...
...
```
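Before training on these crops, it can help to check how balanced the classes are. A minimal sketch, assuming only the per-label folder layout shown above (one sub-folder per object label, `.png` crops inside):

```python
#!/usr/bin/env python
# Sketch: count how many cropped segment images were stored per object label,
# based on the directory structure produced by store_segments shown above.
# Useful to spot under-represented classes before training.
import os
import sys

def count_segments(target_dir):
    counts = {}
    for label in sorted(os.listdir(target_dir)):
        label_dir = os.path.join(target_dir, label)
        if os.path.isdir(label_dir):
            counts[label] = sum(1 for f in os.listdir(label_dir) if f.endswith('.png'))
    return counts

if __name__ == '__main__':
    for label, n in count_segments(sys.argv[1]).items():
        print('%-20s %d' % (label, n))
```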
Go to the package in which the perception models are stored:

```
roscd ed_perception_models/models
```

Create a new folder with the name of the current environment:

```
mkdir $ROBOT_ENV
```

Within this directory, create a file called `parameters.yaml` with content like this:
```yaml
modules:
- lib: libcolor_matcher.so
  classification:
    color_margin: 0.02
- lib: libsize_matcher.so
  classification:
    size_margin: 0.01
```
This determines which perception modules are used, with which parameters. Now you can train the models for these perception modules using the `train-perception` tool. This takes two arguments:
```
rosrun ed_perception train-perception <config-file> <image-file-or-directory>
```

So in our case:

```
rosrun ed_perception train-perception $(rospack find ed_perception_models)/models/$ROBOT_ENV/parameters.yaml /path/to/images
```
To test the trained perception models on a directory of annotated images, use the `test-perception` tool in the same way:

```
rosrun ed_perception test-perception $(rospack find ed_perception_models)/models/$ROBOT_ENV/parameters.yaml /path/to/images
```
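If you retrain often, the two commands above can be wrapped in a small script. A minimal sketch, assuming `ROBOT_ENV` is set and that `/path/to/images` is replaced with your own annotated-image directory (you may well point the test run at a different directory than the training run):

```python
#!/usr/bin/env python
# Sketch: run train-perception and test-perception for the current ROBOT_ENV,
# mirroring the two rosrun commands above. The image path is a placeholder.
import os
import subprocess

def run(tool, config, image_dir):
    # Calls the ed_perception tool exactly as in the commands above.
    subprocess.check_call(['rosrun', 'ed_perception', tool, config, image_dir])

if __name__ == '__main__':
    robot_env = os.environ['ROBOT_ENV']  # raises KeyError if ROBOT_ENV is not set
    models_pkg = subprocess.check_output(
        ['rospack', 'find', 'ed_perception_models']).decode().strip()
    config = os.path.join(models_pkg, 'models', robot_env, 'parameters.yaml')
    run('train-perception', config, '/path/to/images')  # placeholder path
    run('test-perception', config, '/path/to/images')   # placeholder path
```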