nucobot / RealSense

ROS driver for RealSense depth camera
MIT License

static_transform_publisher #1

Open qazmichaelgw opened 9 years ago

qazmichaelgw commented 9 years ago

I read the realsense_cam.launch file and have a question about the parameters you set in static_transform_publisher. It seems that different cameras need different parameters. My camera reports Create Model: VF0800. How should I set these parameters? Thanks a lot!
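For context on what those parameters mean: tf's static_transform_publisher takes six pose arguments, `x y z yaw pitch roll` (meters and radians), describing the offset between two frames. A minimal pure-Python sketch of the 4x4 transform those six numbers encode (the function name and the example 2.5 cm baseline are illustrative, not from this repo):

```python
import math

def static_transform(x, y, z, yaw, pitch, roll):
    """Build the 4x4 homogeneous transform encoded by tf's
    static_transform_publisher 'x y z yaw pitch roll' arguments
    (translation in meters, rotation in radians, ZYX order)."""
    cy, sy = math.cos(yaw), math.sin(yaw)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cr, sr = math.cos(roll), math.sin(roll)
    # R = Rz(yaw) * Ry(pitch) * Rx(roll)
    R = [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]
    return [R[0] + [x], R[1] + [y], R[2] + [z], [0.0, 0.0, 0.0, 1.0]]

# hypothetical example: a pure 2.5 cm translation between the
# depth and RGB optical frames, no rotation
T = static_transform(0.025, 0.0, 0.0, 0.0, 0.0, 0.0)
```

So the values in the launch file are simply the measured physical offset (and any rotation) between the depth and RGB optical centers of your particular camera.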

ncos commented 9 years ago

That is very true: unfortunately, the RGB and depth cameras on the VF0800 have different specs. However, we had the same camera as you do, and we obtained our calibration parameters by dancing around with a chessboard image as described here: calibration. We repeated this process for both the IR and RGB cameras and put the results in the realsense_depth_calibration.yaml and realsense_rgb_calibration.yaml files in the 'calibration' directory. If you want to update the calibration, you should modify those files, as they are used directly by the rectification nodes and are needed for RGB-to-depth matching. In addition, some parameters are set directly in realsense_cam.launch, but those are pretty straightforward. You may also find it very useful to read about the Kinect drivers and their calibration, as there are a lot of things we borrowed from there.

I hope I've answered at least some of your questions! Sorry for keeping you waiting for so long. If you still have any trouble, please don't be shy to write here!

qazmichaelgw commented 9 years ago

Hello, thanks for your answer. But I still have some questions. How did you get the driver for the VF0800, since this RealSense driver does nothing with IR? In that sense, how did you calibrate the depth camera? Another question: the resolution of the RGB camera is 1920x1080, and my calibration results are:

```yaml
image_width: 1920
image_height: 1080
camera_name: realsense_rgb_camera
camera_matrix:
  rows: 3
  cols: 3
  data: [1553.861074, 0, 956.87628, 0, 1555.212615, 504.675759, 0, 0, 1]
distortion_model: plumb_bob
distortion_coefficients:
  rows: 1
  cols: 5
  data: [0.123012, -0.241866, -0.006008, -0.002488, 0]
rectification_matrix:
  rows: 3
  cols: 3
  data: [1, 0, 0, 0, 1, 0, 0, 0, 1]
projection_matrix:
  rows: 3
  cols: 4
  data: [1572.819702, 0, 951.696008, 0, 0, 1580.477539, 499.454947, 0, 0, 0, 1, 0]
```

However, I found that the RGB image and the depth do not align well in this situation. Finally, there are still the static parameters. You set:

for kinect. Could you tell me how to find these parameters for the VF0800? Thanks a lot for your cooperation!
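One way to sanity-check calibration numbers like the YAML above is to project a known 3D point through the pinhole model with plumb_bob (Brown-Conrady) distortion, which is the model ROS's camera_calibration outputs. A hedged sketch (the function is illustrative; the numeric values are taken from the calibration posted above):

```python
def project_plumb_bob(pt, fx, fy, cx, cy, k1, k2, p1, p2, k3=0.0):
    """Project a 3D point (camera frame, Z forward) to pixel
    coordinates using the pinhole model with plumb_bob distortion."""
    X, Y, Z = pt
    xp, yp = X / Z, Y / Z                       # normalized coordinates
    r2 = xp * xp + yp * yp
    radial = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    xd = xp * radial + 2 * p1 * xp * yp + p2 * (r2 + 2 * xp * xp)
    yd = yp * radial + p1 * (r2 + 2 * yp * yp) + 2 * p2 * xp * yp
    return fx * xd + cx, fy * yd + cy

# Intrinsics and distortion from the calibration YAML posted above
u, v = project_plumb_bob((0.1, 0.05, 1.0),
                         fx=1553.861074, fy=1555.212615,
                         cx=956.87628, cy=504.675759,
                         k1=0.123012, k2=-0.241866,
                         p1=-0.006008, p2=-0.002488)
```

A point on the optical axis, `(0, 0, 1)`, should land exactly on the principal point `(cx, cy)`; if it does not, the matrix was transcribed wrongly.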

ncos commented 9 years ago

1) The RealSense camera publishes its IR and RGB images separately, using different Linux devices: /dev/video1 and /dev/video2 by default, but this may vary, and you should specify the devices manually for your system configuration. The RGB image is processed by the 'usb_cam' node (standard for ROS), while the IR is managed by the 'realsense_cam' node. So you have both RGB and IR image topics published (you can view the IR as greyscale in Rviz), and both can be redirected to serve as input for the calibration script. (This also covers question 2.)

3) Yes, right. I do not remember why our configs for the RGB image have 640x480 resolution, but you are unlikely to need the full 1920x1080, as the IR itself is only 640x480. How about downscaling?

4) Finally, all the static transforms were acquired by direct measurements of the camera. However, I do not know what the static transforms you are talking about mean. Are they part of some standard? Which package needs them?
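On the downscaling suggestion: if you resize the image, the intrinsic matrix must be rescaled with it (fx and cx scale with width, fy and cy with height; the distortion coefficients act on normalized coordinates and are unchanged). A hedged sketch, using the 1920x1080 intrinsics from the calibration posted earlier as hypothetical input:

```python
def scale_intrinsics(K, sx, sy):
    """Rescale a 3x3 camera matrix for a resized image:
    fx, cx scale by the width factor sx; fy, cy by the height
    factor sy. Distortion coefficients stay as they are."""
    fx, _, cx = K[0]
    _, fy, cy = K[1]
    return [[fx * sx, 0.0, cx * sx],
            [0.0, fy * sy, cy * sy],
            [0.0, 0.0, 1.0]]

# hypothetical: bring the 1920x1080 RGB calibration down to 640x480
K_full = [[1553.861074, 0.0, 956.87628],
          [0.0, 1555.212615, 504.675759],
          [0.0, 0.0, 1.0]]
K_small = scale_intrinsics(K_full, 640.0 / 1920.0, 480.0 / 1080.0)
```

Note that 640x480 (4:3) and 1920x1080 (16:9) have different aspect ratios, so a real pipeline would crop rather than stretch; the simple anisotropic scaling above only models a stretch.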

BlazingForests commented 9 years ago

hi ncos

You said "while the IR is managed by 'realsense_cam' node. So, you have both RGB and IR image topics published (you can view IR as greyscale in Rviz)", but I do not see the IR image in Rviz. How can I get the IR view?

thx

ncos commented 9 years ago

Well, yes, that's where I was a little wrong. By 'IR' in my previous post I meant the pointcloud image, sorry for misleading you. It is theoretically possible to get IR from the camera, but the interface bandwidth allows only two pictures to be transferred, and the default for this camera is RGB and depth. This behaviour can surely be overridden with the appropriate system calls, but we had no time to investigate this, as we did not actually need the IR.

As for calibration, we played a little trick here: we recorded the IR video (with the calibration chessboard) under Windows (using the SDK) and ran the calibration script on Linux, streaming that video to the input topic.

If you have any luck enabling camera controls (like choosing between RGB/Depth/IR, framerates and resolutions) feel free to make a pull request!