And with Protonect from libfreenect2, windows do appear and I also get valid visual data.
What do the log messages for depth and color say? What is the frame rate of both? The windows are created as soon as the first frame is received. If no windows are created, then it is still waiting for frames to arrive.
That was the thing! I did not get any warning or error, and frames were arriving just fine. Anyway, I did a clean install of ROS. I think it was due to a somehow messed up compressed_image_transport plugin. I have just successfully run the color calibration.
When kinect2_bridge is run, no window should appear, right?
Thanks and regards
Good to hear that it's working now. Yes, kinect2_bridge does not open any windows.
Well, now I got a segmentation fault. I have completed the calibration steps up to depth calibration. In depth calibration, I get the following error:
/home/bellekci/ros_catkin_ws/devel_isolated/rosbash/bin/rosrun: line 4: 27288 Segmentation fault (core dumped) "/home/bellekci/ros_catkin_ws/src/ros/rosbash/scripts/rosrun" "$@"
I tracked the error down to this line in the compareDists function:
<< " median: " << diffs[diffs.size() / 2] << std::endl;
The reason it gives a segmentation fault is that diffs.size() is 0, because imageDists.size() is 0.
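A simple guard in compareDists would at least avoid the crash. Just a sketch, reusing the expression from above:

#include <algorithm>
#include <iostream>
#include <vector>

// Sketch: report the median only when there is data, instead of
// indexing into an empty vector as in the crashing line above.
void printMedian(std::vector<double> diffs)
{
  if(diffs.empty())
  {
    std::cerr << "no point distances collected, skipping statistics" << std::endl;
    return;
  }
  std::sort(diffs.begin(), diffs.end());
  std::cout << " median: " << diffs[diffs.size() / 2] << std::endl;
}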
I have seen that nothing gets added to the imageDists and depthDists vectors in the computePointDists function.
This is probably due to the two if statements which force the for loop to skip.
In other words, every one of my data points satisfies at least one of the following two conditions:
if(dDist < 0.1)
if(std::abs(diff) > 0.08)
I think I am making progress. You probably assumed that the distance difference cannot be larger than 8 cm and therefore placed the if(std::abs(diff) > 0.08) check.
In my case, diff is always larger than 2 m. The reason is that planeDistance is calculated as 3.32395 m, while the chessboard is at about 1.3 m (which is the correct value). So planeDistance is not correct.
The miscalculation of planeDistance is very probably due to the fact that my Kinect2 is looking downwards, so the ground plane is recognized instead of the chessboard plane.
I have seen that you are getting the region of interest with your computeROI function, but this does not seem to correspond to the chessboard, am I right? Plus, the output ROI is not used in the getPlane function. So solvePnPRansac in getPlane must be returning a faulty rotation vector in my case, and ultimately a wrong distance to the plane.
So a possible fix for me would be filtering out the ground plane in the depth data with a z-distance filter before recording. Do you have any idea how to do that with OpenCV in your recorder? Changing how the Kinect2 is mounted is unfortunately not possible at the moment.
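Something like this is what I have in mind. Just a sketch, assuming the recorded depth frames are 16-bit cv::Mat images in millimeters (the function name is mine):

#include <opencv2/opencv.hpp>

// Sketch: zero out everything farther away than maxDepthMM before
// recording, so the ground plane cannot dominate the plane fit.
// Assumes depth is a CV_16U cv::Mat with values in millimeters.
cv::Mat filterFarDepth(const cv::Mat &depth, unsigned short maxDepthMM = 2000)
{
  cv::Mat filtered = cv::Mat::zeros(depth.size(), depth.type());
  cv::Mat keep = (depth > 0) & (depth <= maxDepthMM); // valid and close enough
  depth.copyTo(filtered, keep);                       // far pixels stay 0 (invalid)
  return filtered;
}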
So I have placed the chessboard paper at different positions on the ground and repeated the calibration steps. However, I am still getting 3.5 m as the planeDistance, which is wrong. I have no idea why.
Could it be that you passed a wrong size for the pattern? Which pattern are you using, what are its dimensions, and to what was the command line parameter set? The plane is calculated from the points on the pattern; it does not detect anything else. The two if statements are used to filter out invalid points (noise): the first one filters out everything that is near zero, and the second one filters out all depth values that are not within +/- 8 cm of the calculated distance. It is also important to run the IR calibration prior to the depth calibration.
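To illustrate, the filtering is essentially this (a sketch; the variable names are approximations, not the actual code in computePointDists):

#include <cmath>
#include <cstddef>
#include <vector>

// Sketch of the filtering: measured holds the depth readings in meters,
// expected the distances predicted from the estimated plane; only points
// passing both checks enter the statistics.
std::vector<double> filterDiffs(const std::vector<double> &measured,
                                const std::vector<double> &expected)
{
  std::vector<double> diffs;
  for(size_t i = 0; i < measured.size(); ++i)
  {
    const double diff = measured[i] - expected[i];
    if(measured[i] < 0.1)        // near zero: invalid depth reading
      continue;
    if(std::abs(diff) > 0.08)    // more than 8 cm off the plane: noise
      continue;
    diffs.push_back(diff);
  }
  return diffs;
}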
I do not think so. I used the one you suggested, chess5x7x0.03. I have also confirmed that the edge length is 0.03 m as required. I have just uploaded my latest samples as a tarball; here is the link:
https://drive.google.com/file/d/0B6WiFqTOIhcfVEg0dm9zYUxQckE/view?usp=sharing
You can give it a try and let me know if my data is somehow wrong. Thanks.
I looked at the files and noticed that something is wrong. The intrinsic parameters of the color and the IR sensor are totally off. For the IR sensor the focal length should be ~366, and ~1060 for the color sensor. Your results are 920 and 5000. This would explain why the depth calibration has problems finding valid points. I don't know why it found such big values. I copied the intrinsic parameters of one of the Kinects I use, and with them the depth calibration works fine. There are a couple of things you could try. Glue the pattern onto a flat, non-flexible plate, for example with a Pritt stick. Take more images, from different distances (0.6 to 1.3 m) and with different rotations: not just rotation around the z axis, but also around the x and y axes. You could use a tripod to hold the pattern. And try to place the pattern near the corners of the image area as well.
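As a rough sanity check, you can estimate fx from the nominal field of view: fx ≈ width / (2 · tan(FOV/2)). Using the commonly cited Kinect2 specs (roughly 70.6 degrees horizontal FOV for the IR camera, roughly 84.1 degrees for the color camera), that gives 512 / (2 · tan(35.3°)) ≈ 361 for IR and 1920 / (2 · tan(42.05°)) ≈ 1064 for color, so values like 920 and 5000 cannot be right.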
OK, I will give it another go. Do you think it would make a difference to move the camera closer to or away from the object instead of moving the object?
That works as well.
I have collected 50 new frames and repeated the color calibration.
I rotated the pattern around the z axis, but also around the x and y axes, meaning the pattern was not parallel to the camera in some shots, as you said. The results I am getting are as follows: error: 0.244091
Camera Matrix: [1122.954624015307, 0, 902.124371051898; 0, 1121.299450091302, 545.3518412949537; 0, 0, 1] Distortion Coefficients: [0.05917318195316226, -0.0222640363528383, 0.001712035826921513, -0.00503893035660427, -0.01847429183100217]
So fx is 1123. Is that fine?
I have also collected 100 new frames for IR as you said, but did not get good results.
error: 0.102318
Camera Matrix: [1086.418288331477, 0, 257.5723631936668; 0, 1094.2196422849, 212.6904688614918; 0, 0, 1] Distortion Coefficients: [0.9994792108812481, -22.85798595131196, -0.006675062443077072, -0.02544958247078666, 60.7512906858388]
I do not know what is wrong. For reference, here is the link to my samples:
https://drive.google.com/file/d/0B6WiFqTOIhcfVk5TUGRaNDhjQm8/view?usp=sharing
OK, I got it. I have redone everything, but this time I inclined the chessboard in every sample. In previous trials, I was mostly keeping the chessboard parallel to the camera because I thought that would yield the best calibration results.
After this calibration, the focal length of the IR camera is calculated as 356.
Hi. I looked at your calibration images. I think the main problem is that in the IR pictures, your calibration pattern always seems to be in the same plane. You keep rotating it around the center of the calibration pattern, but that has little effect. For the RGB images that is also mostly true, except for 3 or 4 where the calibration pattern was inclined with respect to the camera; you want more of those.
Please watch this video: https://www.youtube.com/watch?v=iEjH244KRbw
In that video, they use the new Halcon calibration patterns, which should completely cover the image. Ignore that part; with the normal OpenCV calibration pattern, the pattern should cover from approximately 20% to 90% of the image (so you have to get closer to and farther away from the sensor).
The most important thing to learn from that video is how you have to tilt the pattern relative to the camera. They use a little wedge and get images where the pattern has been tilted in each of the four directions. We tilt our calibration patterns even more, maybe 40 degrees.
Another detail: for calibrating the intrinsics of the IR camera (the first step, which seems to be giving you problems), you can hold the calibration pattern in your hand.
Please try getting better calibration images, and run the calibration again.
Greetings!
Thanks @amaldo for the help.
I have done what you just suggested (as I stated just before you posted), but now I am starting to think that maybe I tilted too much. I am pretty sure that I tilted by more than 40 degrees in many cases. Would that also harm the results? 356 is still a bit off from the 366 reported in Table 9 of this paper: http://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XL-5-W4/93/2015/isprsarchives-XL-5-W4-93-2015.pdf
I think that is close enough. These sensors can be a bit different from each other; that is why we do the calibration.
If you can still see the complete pattern in all the images, you should be fine. If you have inclined some of them so much that the intersections between black and white squares are no longer visible, those images will probably get thrown out by the calibration software anyway.
The important thing is that the calibration pattern is really flat. Companies usually use professionally made patterns, glued onto a sandwich of aluminum and foam, to ensure flatness.
So check that your glue is not coming off and that there are no bubbles.
Also make sure there is good lighting in the RGB images (soft light, no bright reflections).
If you upload your pictures again, I can have a look and tell you if they look OK.
Greetings!
356 seems to be much better than before. With the latest calibration of one of our sensors I got 365 for fx and 363 for fy. The sensors are not all the same, but they should yield pretty similar values, so there is probably room for improvement. Rotations of more than 40 degrees should be fine as long as the pattern is still recognized and the detected points really lie on the corners of the pattern (this should be visible in the GUI). It might help to imagine a grid of, for example, 5x4 fields overlaying the image. Position the pattern in the top left field, take an image, and move it to the next field. At the end of a row, move one field down and take images along that row, and so on. Then rotate the pattern in a different manner (or place it at a different distance) and repeat it again, and again, and again. This usually results in a good calibration.
I have glued it onto a flat board. Here is my data (100 images each for color and IR):
https://drive.google.com/file/d/0B6WiFqTOIhcfUFgyelo2VnhDcVU/view?usp=sharing
I have re-run kinect2_bridge after the calibration, but the data is not meaningful. I suppose my calibration was still not done properly.
What do you mean with "not meaningful"?
Well, there is nothing visible in the point cloud when I run the viewer.
I have also incorporated the calibration results into the Protonect example and overwritten the IrCameraParams, and the resulting point cloud is some weird data. Probably those weird points get filtered out in your viewer.
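Concretely, what I did in Protonect is roughly this. A sketch, assuming libfreenect2's Registration API as Protonect uses it (dev is the opened Freenect2Device; the values are from my calibration, with OpenCV distortion order k1, k2, p1, p2, k3):

#include <libfreenect2/libfreenect2.hpp>
#include <libfreenect2/registration.h>

// Sketch: replace the factory IR intrinsics with my calibrated ones
// before creating the registration.
libfreenect2::Freenect2Device::IrCameraParams ir = dev->getIrCameraParams();
ir.fx = 356.216; ir.fy = 356.514;                  // calibrated focal lengths
ir.cx = 253.441; ir.cy = 206.250;                  // calibrated principal point
ir.k1 = 0.0936;  ir.k2 = -0.2824; ir.k3 = 0.1210;  // radial distortion
ir.p1 = 0.0011;  ir.p2 = -0.0013;                  // tangential distortion
libfreenect2::Registration registration(ir, dev->getColorCameraParams());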
The viewer has a point cloud and an image view. How does the image look? Do the borders of objects fit the depth image? As far as I know, Protonect does not show any point clouds.
Did you copy the calibration files to kinect2_bridge/data/<serialnumber>/ and restart kinect2_bridge?
Yes, I did copy them and restarted kinect2_bridge. At startup, I can see that my calibration data is read, with the following printed in the terminal:
camera parameters used:
camera matrix color: [1051.847775732713, 0, 962.2835784114121; 0, 1053.880705503419, 549.1332859684801; 0, 0, 1]
distortion coefficients color: [0.04057789315071465, -0.04128195232054094, 0.001108985127145222, 0.003555978800049442, 0.002421125128892641]
camera matrix ir: [356.2164615979822, 0, 253.4414729844542; 0, 356.5138167176692, 206.2495481460727; 0, 0, 1]
distortion coefficients ir: [0.09359137443536884, -0.2824486250826749, 0.001094532756217535, -0.001267718882340789, 0.1210202181680398]
rotation: [1, 0, 0; 0, 1, 0; 0, 0, 1]
translation: [-0.052; 0; 0]
depth shift: -32.2154
Yes, Protonect does not have point clouds, but I am converting the data to a point cloud myself. The IR image is completely dark blue, no detail.
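For reference, my conversion is roughly this. A simplified sketch of the plain pinhole back-projection, ignoring lens distortion (depth is assumed to be a CV_32F image in meters; fx, fy, cx, cy are the IR intrinsics):

#include <opencv2/opencv.hpp>
#include <vector>

// Sketch: back-project every valid depth pixel with the pinhole model.
void depthToCloud(const cv::Mat &depth, float fx, float fy, float cx, float cy,
                  std::vector<cv::Point3f> &cloud)
{
  for(int v = 0; v < depth.rows; ++v)
  {
    for(int u = 0; u < depth.cols; ++u)
    {
      const float z = depth.at<float>(v, u);
      if(z <= 0.0f)
        continue;                         // skip invalid measurements
      const float x = (u - cx) * z / fx;  // pinhole back-projection
      const float y = (v - cy) * z / fy;
      cloud.push_back(cv::Point3f(x, y, z));
    }
  }
}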
You probably mean that the visualization of the depth image in the registration viewer is all blue, right? In your last images there was depth information in the images. You could check whether the images are all zero with rostopic echo -n 1 /kinect2/depth/image. Could you provide some screenshots? You can create some with the registration viewer by pressing SPACE or s.
True, the depth image visualization was all blue. The depth images themselves seemed fine before, but I will check whether they are all zero and provide screenshots.
I managed to get meaningful calibration results by switching to a different chessboard, the one with the smallest square size (chess9x11x0.02.pdf). I have also compared my calibrated data with the Windows SDK. First, I checked whether I get the same 3D view in libfreenect2 when I use the intrinsic parameters from the Windows SDK. There must be some discrepancy between the Windows SDK and libfreenect2 calculations, because in the Windows SDK the shape of the image is more rectangular, while with libfreenect2 the image is more bent, i.e. the corners are stretched. I also checked whether my calibration data fixed this. Although it seemed better than with the Windows intrinsics, it was still worse compared to the Windows SDK. I will try to add some snapshots.
Hi,
Thanks for the toolbox. I wanted to try out the calibration and I have followed your instructions. However, even though kinect2_bridge starts properly and I get the expected messages in the terminal, no image viewer window appears. I mean there is no visual output.
I tried to start the registration viewer. It says starting receiver, but again no window appears.
Do you have any idea what might be the cause?