RobotLocomotion / LabelFusion

LabelFusion: A Pipeline for Generating Ground Truth Labels for Real RGBD Data of Cluttered Scenes
http://labelfusion.csail.mit.edu

supporting custom camera parameters #17

Open dmsj opened 7 years ago

dmsj commented 7 years ago

@peteflorence Right now LabelFusion has the camera intrinsics used by VTK hardcoded to the default parameters for the Asus Xtion: see setCameraInstrinsicsAsus(view) in rendertrainingimages.py, which is called in initialize(self).

Therefore all new data collected with LabelFusion records poses in the Asus camera frame rather than in the frame of the sensor actually used to collect the data. For example, see the results of a few test cases below:

[image: pose results from a few test cases]

I'd guess this is an easy fix: require the user to provide the camera intrinsics for the sensor in use.

@5yler made a quick script to post-process the LabelFusion poses to fix this in the data we already collected, so we only need a fix going forward.
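For reference, here is a minimal sketch of what a configurable replacement for the hardcoded setCameraInstrinsicsAsus(view) might look like, assuming a standard pinhole model and a vtkCamera. The function name and signature are illustrative, not existing LabelFusion API:

import math

# hypothetical helper: configure a vtkCamera from pinhole intrinsics
# (fx, fy, cx, cy) and the image size, instead of hardcoded Asus values
def setCameraIntrinsics(camera, fx, fy, cx, cy, width, height):
    # the vertical view angle follows from fy and the image height
    camera.SetViewAngle(2.0 * math.degrees(math.atan2(height / 2.0, fy)))
    # shift the projection window so the principal point (cx, cy) lands
    # in the right place; VTK's window center is in [-1, 1] coordinates
    camera.SetWindowCenter(-2.0 * (cx - width / 2.0) / width,
                           2.0 * (cy - height / 2.0) / height)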

peteflorence commented 7 years ago

Thanks for making the request. We should definitely support setting intrinsics in a config-type file, and make sure all stages of the pipeline are sourcing those same intrinsics.

5yler commented 7 years ago

@peteflorence The other place to set intrinsics appropriately (besides what @dmsj already mentioned) is in prepareForObjectAlignment.py:

# call ElasticFusion
os.system(path_to_ElasticFusion_executable + " -l ./" + lcmlog_filename)

should change to

os.system(path_to_ElasticFusion_executable + " -l ./" + lcmlog_filename + " -cal " + calibration_filename)

where calibration_filename is a file containing one line with fx fy cx cy in that order.
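As a hedged sketch, prepareForObjectAlignment.py could generate that file just before the call, assuming path_to_ElasticFusion_executable and lcmlog_filename are in scope as in the snippet above (the intrinsic values and the calibration filename here are illustrative):

# write the one-line "fx fy cx cy" calibration file that
# ElasticFusion's -cal flag expects, using the sensor's intrinsics
fx, fy, cx, cy = 514.538, 513.988, 311.531, 254.081  # example values
calibration_filename = "camera.cal"  # illustrative filename
with open(calibration_filename, "w") as f:
    f.write("%f %f %f %f\n" % (fx, fy, cx, cy))

# call ElasticFusion with the custom calibration
os.system(path_to_ElasticFusion_executable + " -l ./" + lcmlog_filename
          + " -cal " + calibration_filename)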

patmarion commented 7 years ago

Yeah we should update the code to read the values from the camera .cfg file. I need to check how to do this from Python, hoping it's just a small code change. In the meantime, please modify the values in the Python code before generating label files.
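Until that lands, a rough sketch of reading them from Python might look like the following, under the assumption that the .cfg stores the intrinsics as simple key = value entries; the actual libbot-style config layout may differ and would need a proper parser (read_intrinsics is a hypothetical helper):

import re

def read_intrinsics(cfg_path):
    # naive scan for lines like "fx = 525.0"; this is an assumption
    # about the .cfg layout, not a parser for the real format
    text = open(cfg_path).read()
    values = {}
    for key in ("fx", "fy", "cx", "cy"):
        match = re.search(r"\b%s\s*=\s*([0-9eE+.\-]+)" % key, text)
        if match:
            values[key] = float(match.group(1))
    return values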

I'm not sure I understand what you mean about post-processing the poses?


5yler commented 7 years ago

@patmarion The object pose in the camera frame depends on the camera parameters used, and you can transform a pose from one set of camera parameters to another. See the example:

# (1) camera calibration the pose was created with
fx_1 = 525.0
fy_1 = 525.0
cx_1 = 319.5
cy_1 = 239.5

# (2) new camera calibration to transform the pose into
fx_2 = 514.53821916
fy_2 = 513.98831482
cx_2 = 311.53091858
cy_2 = 254.08105136

# object position under calibration (1); example values in meters
x_1, y_1, z = 0.10, -0.05, 0.80

# transform the object position from (1) to (2); depth z is unchanged
x_2 = (1.0 / fx_2) * (fx_1 * x_1 + (cx_1 - cx_2) * z)
y_2 = (1.0 / fy_2) * (fy_1 * y_1 + (cy_1 - cy_2) * z)

Then (x_1, y_1, z) using camera parameters (1) and (x_2, y_2, z) using camera parameters (2) achieve the same projection in pixel space.
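As a quick sanity check (an addition, using the example values above), projecting both positions through their respective pinhole models should give the same pixel:

# project each position with its own calibration; u/v should match
u_1 = fx_1 * x_1 / z + cx_1
v_1 = fy_1 * y_1 / z + cy_1
u_2 = fx_2 * x_2 / z + cx_2
v_2 = fy_2 * y_2 / z + cy_2
assert abs(u_1 - u_2) < 1e-6 and abs(v_1 - v_2) < 1e-6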

laurimi commented 6 years ago

Playing around with Kinect2 data, I noticed that the size of the output images is also fixed to 640 by 480; see rendertrainingimages.py:

view.setFixedSize(640, 480)

While resolving the issue of setting the calibration parameters, this could also be addressed.
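Something along these lines might work once the intrinsics come from a config; cameraConfig and its keys are hypothetical names, not existing LabelFusion API:

# hypothetical: take the render size from the same config as the
# intrinsics instead of hardcoding the Asus resolution
width = cameraConfig.get('image_width', 640)
height = cameraConfig.get('image_height', 480)
view.setFixedSize(width, height)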

EricCousineau-TRI commented 6 years ago

I may start to play around with this with different intrinsics; will post here if I am able to resolve anything (or if I'm hardcore struggling).

hpf9017 commented 5 years ago

> Yeah we should update the code to read the values from the camera .cfg file. I need to check how to do this from Python, hoping it's just a small code change. In the meantime, please modify the values in the Python code before generating label files. I'm not sure I understand what you mean about post-processing the poses?

@patmarion did you add the feature that reads the camera intrinsics from the camera .cfg file, or do I still need to modify the parameters as in https://github.com/RobotLocomotion/LabelFusion/issues/17#issuecomment-336002030? When I collect my own data with a D435i, the ICP result is not very good; I think that may be the reason, if it still uses the Asus camera intrinsics.

tiexuedanxin commented 4 years ago

> os.system(path_to_ElasticFusion_executable + " -l ./" + lcmlog_filename + " -cal " + calibration_filename)
>
> where calibration_filename is a file containing one line with fx fy cx cy in that order.

Sorry to bother you, but could you tell me where I should put the calibration_filename file?