osrf / srcsim

Space Robotics Challenge

Gazebo head frame or ROS head frame? #29

Closed osrf-migration closed 7 years ago

osrf-migration commented 7 years ago

Original report (archived issue) by dan (Bitbucket: dan77062).


Since the camera is mounted upside down, we receive inverted images. The standard tf transform from the camera to the head frame in ROS does not take that into account. It makes sense that we should report values that refer to the head frame in Gazebo, in which case we need to invert the transform we get from ROS.

For the XYZ coordinates in qual1, should we submit the values for the Gazebo head frame or for the ROS head frame?
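To make the inversion concrete, here is a minimal sketch of the correction I have in mind, using only numpy and made-up numbers:

```python
# A minimal sketch of the inversion in question (hypothetical numbers).
# An upside-down camera sees the world rotated 180 degrees about its
# optical (z) axis, so the x and y of a detected point must be negated
# before applying the usual camera-to-head transform.
import numpy as np

def undo_upside_down(p_cam):
    """Rotate a point 180 degrees about the optical z axis: (x, y) -> (-x, -y)."""
    return np.diag([-1.0, -1.0, 1.0]).dot(p_cam)

p_detected = np.array([0.10, 0.05, 2.0])  # hypothetical point in the camera frame
print(undo_upside_down(p_detected))       # [-0.1  -0.05  2.  ]
```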

osrf-migration commented 7 years ago

Original comment by Nate Koenig (Bitbucket: Nathan Koenig).


We will accept values in either frame.

osrf-migration commented 7 years ago

Original comment by dan (Bitbucket: dan77062).


OK, thanks for the quick feedback.

osrf-migration commented 7 years ago

Original comment by dan (Bitbucket: dan77062).


Thinking about this more, and also looking at issue #22, I'm not sure I made my question clear. The image below shows that the transforms do not take into account the fact that the image is inverted. The +z axis (blue) of the head frame is up relative to gravity, but the lower part of the console in the point cloud is shown "above" the head. The question is: should we invert the image and return the correct physical coordinates in the world, or should we use the published transforms, with the result that the coordinates look correct in RViz but are not physically where the light is with respect to the robot's head?

qual1_inverted_image.png

osrf-migration commented 7 years ago

Original comment by Nate Koenig (Bitbucket: Nathan Koenig).


That is a good point.

You should invert the image so that the reported light location is physically accurate relative to the head frame. I'll update the tutorial as well.
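For illustration, a minimal sketch of the inversion, assuming cv_bridge and OpenCV; the topic name below is a guess, so substitute whatever your image topic actually is:

```python
#!/usr/bin/env python
# Sketch of the image inversion, assuming cv_bridge and OpenCV.
# The topic name is a guess; substitute your actual image topic.
import rospy
import cv2
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()

def on_image(msg):
    img = bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
    upright = cv2.flip(img, -1)  # flip both axes: a 180 degree rotation
    # ... detect the light in `upright` and report its location ...

rospy.init_node('invert_camera')
rospy.Subscriber('/multisense/camera/left/image_raw', Image, on_image)
rospy.spin()
```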

osrf-migration commented 7 years ago

Original comment by Rud Merriam (Bitbucket: rmerriam).


I'm concerned that the different inversions used by competitors are going to produce results that are valid but not correct as judged by the competition. Could another reference frame be provided that handles the inversion, or a standard ROS inversion process?

osrf-migration commented 7 years ago

Original comment by dan (Bitbucket: dan77062).


Thanks for the clear guidance. Rud is right that there probably needs to be a standard way to invert the image. However, I am fine with waiting for the scoring script and seeing how it goes with just inverting the image directly.

osrf-migration commented 7 years ago

Original comment by mocorobo (Bitbucket: mocorobo).


What exactly is the "head frame"? Where is its (0, 0, 0) coordinate? I don't see any reference to the head in the world model in Gazebo's left-hand panel. When I click on "upperNeckPitchLink" I see a box that surrounds the head. Is that the "head frame"? If so, where is (0, 0, 0): the center, a corner?

Also, I get an offset in the stereo depth data depending on which eye is the reference. Are there dimensions of the head/MultiSense sensor available so that we can accurately calculate the total offsets?

osrf-migration commented 7 years ago

Original comment by Louise Poubel (Bitbucket: chapulina, GitHub: chapulina).


@mocorobo, you're right that there's no head frame within Gazebo. For this task's purposes, you can use the upperNeckPitchLink frame. You can visualize it in Gazebo:

headframe.png

You can find all the offsets in the robot's description.
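If it helps, you can also read the offsets at runtime from tf. A minimal sketch, where the right-eye frame name is a guess based on the left one mentioned elsewhere in this thread:

```python
#!/usr/bin/env python
# Sketch: read the left-to-right eye offset from tf at runtime.
# The right-eye frame name is a guess; check your tf tree.
import rospy
import tf2_ros

rospy.init_node('eye_offset')
buf = tf2_ros.Buffer()
listener = tf2_ros.TransformListener(buf)

t = buf.lookup_transform('left_camera_optical_frame',
                         'right_camera_optical_frame',
                         rospy.Time(0), rospy.Duration(5.0))
print(t.transform.translation)  # the stereo baseline offset
```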

tl;dr

The head frame disappears from Gazebo during the conversion from URDF to SDF (fixed frames get lumped into their parent link during that conversion), but you can still use it in RViz.
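A quick way to confirm this from the ROS side (a sketch using tf2_ros; it should print True even though Gazebo's panel doesn't list the frame):

```python
#!/usr/bin/env python
# Sketch: confirm that tf on the ROS side still knows the 'head' frame
# relative to upperNeckPitchLink, even though Gazebo's panel omits it.
import rospy
import tf2_ros

rospy.init_node('frame_check')
buf = tf2_ros.Buffer()
listener = tf2_ros.TransformListener(buf)
rospy.sleep(2.0)  # give the listener time to fill the buffer
print(buf.can_transform('upperNeckPitchLink', 'head', rospy.Time(0)))
```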

osrf-migration commented 7 years ago

Original comment by dan (Bitbucket: dan77062).


Is that the same as the ROS upperNeckPitchLink frame? Because I tried using that, generated an answer file, ran it through the score test and it was not correct. Is there some other transform needed to go from the ROS frame to the Gazebo frame?

I originally tested my results by inverting the image (as Nathan says to do in an earlier comment on this issue), then calculating the location of the light and transforming the result in ROS from "left_camera_optical_frame" to "head", including the required axis change for going from a camera image to a world location. I then generate a marker at that location, publish it, and see in RViz that the marker exactly overlies the 3D point cloud location of the light. That works fine and looks perfect.
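For anyone following along, the transform-and-marker step looks roughly like this (a sketch; the light detection itself is left out and the detected point is made up):

```python
#!/usr/bin/env python
# Sketch of the transform-and-marker step described above. The detected
# point is hypothetical; the actual light detection is left out.
import rospy
import tf2_ros
import tf2_geometry_msgs  # registers PointStamped with tf2
from geometry_msgs.msg import PointStamped
from visualization_msgs.msg import Marker

rospy.init_node('light_locator')
buf = tf2_ros.Buffer()
listener = tf2_ros.TransformListener(buf)
pub = rospy.Publisher('light_marker', Marker, queue_size=1, latch=True)
rospy.sleep(2.0)

p = PointStamped()
p.header.frame_id = 'left_camera_optical_frame'
p.header.stamp = rospy.Time(0)
p.point.x, p.point.y, p.point.z = 0.10, 0.05, 2.0  # hypothetical light position

p_head = buf.transform(p, 'head', rospy.Duration(5.0))  # express it in the head frame

m = Marker()
m.header.frame_id = 'head'
m.header.stamp = rospy.Time.now()
m.type = Marker.SPHERE
m.pose.position = p_head.point
m.pose.orientation.w = 1.0
m.scale.x = m.scale.y = m.scale.z = 0.05
m.color.r = m.color.a = 1.0
pub.publish(m)
rospy.spin()
```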

However, with the inverted camera image, the inverted head frame, the non-inverted upperNeckPitchLink frame, and possible differences between ROS and Gazebo frames, it is pretty hard to sort out how to report the results.

osrf-migration commented 7 years ago

Original comment by Louise Poubel (Bitbucket: chapulina, GitHub: chapulina).


The wiki has been updated with some clarifications about the head frame:

https://osrf-migration.github.io/srcsim-gh-pages/#!/osrf/srcsim/wiki/qual_task1