personalrobotics / ada_meal_scenario

A set of scripts for a meal serving scenario using Ada.

Create an OpenRAVE sensor for the camera on ADA #7

siddhss5 opened this issue 9 years ago

siddhss5 commented 9 years ago

The correct way to get the camera transform is not to hardcode it, but to obtain it programmatically from the OpenRAVE robot by attaching an OpenRAVE sensor to the robot model.
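
A minimal openravepy sketch of that approach (the sensor name 'camera' and the robot file path are illustrative, not the actual names in this repo):

```python
import openravepy

env = openravepy.Environment()
env.Load('robots/mico-modified.robot.xml')  # illustrative path
robot = env.GetRobots()[0]

# Look up the attached sensor by name and query its world-frame
# transform, instead of hardcoding a 4x4 offset matrix.
camera = next(s for s in robot.GetAttachedSensors()
              if s.GetName() == 'camera')
T_world_camera = camera.GetTransform()  # 4x4 homogeneous transform
```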

mklingen commented 9 years ago

A frame for the SoftKinetic sensor has been attached to the robot model for some time. AFAIK @Stefanos19 has been hardcoding offsets instead of using it for the "bite serving" demo; he was using it for the demos involving AprilTags.

Stefanos19 commented 9 years ago

So, the way I have it now is that I get the transform of the OpenRAVE sensor model (that's link[7]): https://github.com/personalrobotics/ada_meal_scenario/blob/master/src/bite_serving_FSM.py#L168

The hardcoded offsets are for the pose of the fork relative to the camera.
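
In other words, the current approach is roughly the following (a sketch, not the actual source; the offset values are made up, and `robot` is the loaded OpenRAVE robot as in the earlier snippet):

```python
import numpy

# Read the camera pose from the robot model (link[7], per the comment
# above).
T_world_camera = robot.GetLinks()[7].GetTransform()

# Apply a hand-tuned camera-to-fork offset (example values only).
T_camera_fork = numpy.eye(4)
T_camera_fork[0:3, 3] = [0.0, 0.05, 0.12]
T_world_fork = numpy.dot(T_world_camera, T_camera_fork)
```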

siddhss5 commented 9 years ago

Isn't this the wrong way to go about this? Shouldn't we be attaching an OpenRAVE sensor to the robot model and requesting its transform?

Stefanos19 commented 9 years ago

Oh I see, so you mean attach it as a sensor instead of as a link that is part of the robot. Right now it is part of the robot model: https://github.com/personalrobotics/ada/blob/master/ada_description/ordata/robots/mico-modified.robot.xml#L248

psigen commented 9 years ago

Both attaching it as a link and attaching it as a sensor will allow you to get the transform from the robot model instead of hardcoding anything.

You could potentially attach the fork to the robot as a grabbed body or as another link, and then not have to hardcode any transforms at all.
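
A sketch of the grabbed-body idea in openravepy (the body name 'fork' is illustrative, and `env`/`robot` are set up as in the earlier snippet): once the robot grabs the body, OpenRAVE keeps its pose updated with the end-effector, so no offset needs to be hardcoded.

```python
# Assumes a fork model has already been loaded into the environment.
fork = env.GetKinBody('fork')

# Grab attaches the body to the robot's active manipulator; its pose
# now updates automatically with the end-effector.
robot.Grab(fork)
T_world_fork = fork.GetTransform()
```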

@siddhss5: Is there any real point to using an OpenRAVE sensor specifically? All it seems to add is a fairly limited API that stores data in one of the 8 hardcoded types, plus a bunch of functions that are largely redundant/unnecessary if we are using ROS nodes already. We are already using RViz for visualization, so we don't get any benefit from the OpenRAVE::SensorBase rendering capability. http://openrave.org/docs/latest_stable/coreapihtml/arch_sensor.html

mklingen commented 9 years ago

I agree that the SensorBase is unnecessary here, as it's redundant with the ROS stuff. Speaking of ROS stuff, we still haven't moved the ADA model to URDF instead of OpenRAVE KinBody XML.

mkoval commented 9 years ago

:+1: for moving to the URDF model. This just burned us in https://github.com/personalrobotics/ada/issues/6.

mkoval commented 9 years ago

Also, I agree with @psigen and @mklingen here. The OpenRAVE SensorBase API is really bad because you have to implement a bunch of obscure functions that are not useful in practice. I implemented a few custom sensors for HERB (for the F/T sensor, strain gauges, and tactile pads), and they ended up being largely useless because of this.

I'd very much rather write some custom C++ code and wrap it with Boost.Python. In this case, we can define a bogus link in the URDF to store the extrinsics. I think this is the route @mklingen is already taking with the off-screen rendering.

@mklingen Can you confirm?

mklingen commented 9 years ago

@mkoval I am using the SensorBase class in OpenRAVE for offscreen rendering right now. It is very cumbersome, but it works for me. For instance, I have to constantly check SensorType flags and dynamically cast everywhere, and I have to interpret one of the six valid commands (power on, power off, render, etc.). It also supports exactly one image type (8-bit RGB stored as a std::vector<uint8_t>).
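
For reference, driving a sensor through that API from Python looks roughly like this (a sketch; `camera` is an AttachedSensor as in the earlier snippet):

```python
from openravepy import Sensor

sensor = camera.GetSensor()

# You have to check the type flag before asking for data...
if sensor.Supports(Sensor.Type.Camera):
    # ...and drive the sensor through the fixed set of commands.
    sensor.Configure(Sensor.ConfigureCommand.PowerOn)
    data = sensor.GetSensorData(Sensor.Type.Camera)
    image = data.imagedata  # 8-bit RGB, the only supported image type
```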

It's a different story when you've got a ROS camera. ROS already has a very extensive pipeline for dealing with cameras. Essentially, all you need to do is define a TF frame in your robot model; that gives you the extrinsics. You then get the intrinsics from CameraInfo, which is published by the camera driver, and the images themselves are published on ROS topics. All of these things already have Python interfaces.
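
By contrast, the ROS side of that pipeline is roughly the following (topic and frame names are illustrative):

```python
import rospy
import tf
from sensor_msgs.msg import CameraInfo, Image

rospy.init_node('camera_example')

# Extrinsics: the camera frame comes from the robot model via TF.
listener = tf.TransformListener()
listener.waitForTransform('/base_link', '/camera_rgb_optical_frame',
                          rospy.Time(0), rospy.Duration(4.0))
trans, rot = listener.lookupTransform(
    '/base_link', '/camera_rgb_optical_frame', rospy.Time(0))

# Intrinsics: published by the camera driver.
info = rospy.wait_for_message('/camera/rgb/camera_info', CameraInfo)
K = info.K  # 3x3 intrinsic matrix (9 floats, row-major)

# Images: just another ROS topic.
image = rospy.wait_for_message('/camera/rgb/image_raw', Image)
```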

However, it might be worthwhile someday to write some glue for our robots in prpy to communicate with ROS cameras.

siddhss5 commented 9 years ago

The only reason I want it in OpenRAVE is so that I can access it programmatically in a planner or in Python, for example to get an image out of it or to use it for visibility computation.

If you can enable that otherwise, then I'm all good.

Stefanos19 commented 9 years ago

@psigen One issue with creating a fork model and setting the transform (instead of hardcoding the offset) is that every time we place the fork, there are small differences in the actual offset based on the fork position. Additionally, the offset sometimes changes if the fork is moved after a grasp. Therefore, we run a few trials beforehand to tune the offsets so that they match the actual fork pose. It seems easier to change the offsets in the code or an input file than in the robot model.

Stefanos19 commented 9 years ago

We are now using the StructureIO sensor, as embedded in the mico.urdf file.

mkoval commented 9 years ago

I don't think this addresses the issue of programmatically accessing the camera (e.g. rendering images in simulation). We still need to create an OpenRAVE SensorBase (or something similar) and attach it to the robot.

Stefanos19 commented 9 years ago

Sorry, got too excited.