tudelft-iv / multi_sensor_calibration


Question about Mono_detector #12

abeyang00 opened this issue 4 years ago (status: Open)

abeyang00 commented 4 years ago

First of all, thank you for the wonderful work you have done here! I have a question regarding the mono_detector though.

My calibration results are as follows:

[screenshot: calibration results]

Lidar and radar results seem to be fine, but there's something wrong with the camera result.

I'm currently using mono_detector and was wondering if it's even possible to get a depth value from a mono camera.

I don't think the parameters from intrinsic.ini are used...

If you can let me know, it would be greatly appreciated.

Thanks in advance

dejongyeong commented 4 years ago

Hi @abeyang00, I'm trying to use this to calibrate lidar, radar, and a mono camera too, but I don't know where to start. I've read the documentation about the subscribed topics; how could I modify the detector to match my topics? Any advice would be much appreciated. Thanks.

RonaldEnsing commented 4 years ago

> I'm currently using mono_detector and was wondering if it's even possible to get a depth value from a mono camera.

This is possible since the 3D object shape and the camera intrinsics are known, e.g. using https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html#solvepnp
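For illustration, a minimal Python sketch of recovering the board pose (including depth) from a single image with cv2.solvePnP. The intrinsics, pattern spacing, and pixel coordinates below are placeholders, not values from this setup; in practice the intrinsics come from intrinsic.ini and the object points from the pattern yaml:

```python
import numpy as np
import cv2

# 3D circle centers on the board, in board coordinates (meters).
# Placeholder 0.2 m spacing; must match the physical pattern.
object_points = np.array([
    [0.0, 0.0, 0.0],   # top-left
    [0.0, 0.2, 0.0],   # bottom-left
    [0.2, 0.0, 0.0],   # top-right
    [0.2, 0.2, 0.0],   # bottom-right
], dtype=np.float64)

# Detected circle centers in the image (pixels), in the SAME order.
image_points = np.array([
    [955.0, 550.0],
    [955.0, 650.0],
    [1045.0, 550.0],
    [1045.0, 650.0],
], dtype=np.float64)

# Camera intrinsics (placeholder values; normally read from intrinsic.ini).
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # assume an already-rectified image

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
print(ok, tvec)  # tvec[2] is the board's depth along the camera z-axis
```

The known pattern geometry is what makes depth observable from a single camera.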

> I've read the documentation about the subscribed topics; how could I modify the detector to match my topics?

You could use a remap: http://wiki.ros.org/roslaunch/XML/remap
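For example, a minimal launch-file sketch; the package, node, and topic names below are assumptions for illustration, not the package's actual names:

```xml
<launch>
  <!-- Remap the detector's expected input topic to whatever your driver
       actually publishes (names here are placeholders). -->
  <node pkg="radar_detector" type="radar_detector_node" name="radar_detector">
    <remap from="radar_detector/input" to="/umrr/targets"/>
  </node>
</launch>
```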

dejongyeong commented 4 years ago

Hi @RonaldEnsing,

Thanks for getting back to me. The radar detector subscribes to a topic with message type radar_msgs::RadarDetectionArray, but the rostopic list output from the umrr radar driver shows a topic of type sensor_msgs/PointCloud2:

[screenshot: rostopic list output]

Will this work with the multi_sensor_calibration package? I'm a bit lost here and would like to get some pointers.

Thanks.

dejongyeong commented 4 years ago

@abeyang00 @RonaldEnsing did you have to modify the code in order to use the mono detector? Thanks.

parsep commented 4 years ago

> First of all, thank you for the wonderful work you have done here! I have a question regarding the mono_detector though. My calibration results are as follows
>
> Lidar and radar results seem to be fine, but there's something wrong with the camera result. I'm currently using mono_detector and was wondering if it's even possible to get a depth value from a mono camera. I don't think the parameters from intrinsic.ini are used... If you can let me know, it would be greatly appreciated. Thanks in advance

Did you manage to get the camera in the right location relative to your other sensors? It seems I have the same thing happening with the mono camera.

RonaldEnsing commented 4 years ago

I'd suggest that you make sure that the detections in the image appear correct. This may require a bit of tweaking on the detection parameters. Code modifications should not be necessary.

parsep commented 4 years ago

> I'd suggest that you make sure that the detections in the image appear correct. This may require a bit of tweaking on the detection parameters. Code modifications should not be necessary.

Thanks for the reply :) By the way, really interesting work!

Do you mean the circle detections? If so, they look fine visually; I can see 4 circles with fairly good overlap with the pattern circles.

RonaldEnsing commented 4 years ago

Yes. If the circles visualization looks fine then you may want to check if your camera is properly calibrated.

dejongyeong commented 4 years ago

@RonaldEnsing thanks for the reply. I ran both the lidar and mono detectors, the accumulator, and the optimizer. The optimizer results show stereo instead of mono.

@parsep how did you manage to feed the mono detector into the optimiser? Thanks

parsep commented 4 years ago

@RonaldEnsing I took the mono_detector_cli node and, based on that, just fed camera images into the detector -> accumulator -> optimizer. Would this answer your question?

dejongyeong commented 4 years ago

@parsep Thanks for the reply. I ran "rosrun mono_detector mono_detector_node" as well as the lidar detector, and fed both into the accumulator and then the optimizer. But the results output were for lidar, stereo, and radar. How can I get the optimiser to output mono instead of stereo? Thanks

parsep commented 4 years ago

> @parsep Thanks for the reply. I ran "rosrun mono_detector mono_detector_node" as well as the lidar detector, and fed both into the accumulator and then the optimizer. But the results output were for lidar, stereo, and radar. How can I get the optimiser to output mono instead of stereo? Thanks

I don't think you need to do anything there. Once you stop the accumulator (toggle command), do you see your camera being logged as expected in the accumulator log?

dejongyeong commented 4 years ago

@parsep The mono_detector_node detects something, and when I echo the mono detector topic there is data recorded. The accumulator receives the mono detections. One thing I noticed is that in the optimiser code there is only output for stereo, lidar, and radar.

In the optimiser's client.py, should I change that to mono?

[screenshots: optimiser client.py code]

parsep commented 4 years ago

> @parsep The mono_detector_node detects something, and when I echo the mono detector topic there is data recorded. The accumulator receives the mono detections. One thing I noticed is that in the optimiser code there is only output for stereo, lidar, and radar. In the optimiser's client.py, should I change that to mono?

I normally run server.py and make a service call to optimize ("accumulator/optimize"). I am going to check the code you sent and get back to you if I find something.
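For reference, a minimal sketch of triggering that service from Python. The service type here is assumed to be std_srvs/Empty; the accumulator's actual .srv type may differ:

```python
import rospy
from std_srvs.srv import Empty  # assumption: the real service type may differ

rospy.init_node('optimize_caller')
rospy.wait_for_service('accumulator/optimize')
optimize = rospy.ServiceProxy('accumulator/optimize', Empty)
optimize()  # asks the accumulator to hand its data to the optimizer
```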

dejongyeong commented 4 years ago

@parsep thanks, much appreciated. One thing I noticed is that you said you executed the "mono_detector_cli" command, right? Is there a difference compared to "mono_detector_node"? Thanks

parsep commented 4 years ago

> @parsep thanks, much appreciated. One thing I noticed is that you said you executed the "mono_detector_cli" command, right? Is there a difference compared to "mono_detector_node"? Thanks

Well, it is independent of the camera module and processes image files; that's one difference. The image-processing parts look the same to me.

parsep commented 4 years ago

@RonaldEnsing it seems solvePnP requires the image points and object points to be in the same order to work correctly; my friend figured it out!

dejongyeong commented 4 years ago

@parsep For the mono camera detector, I tried executing the commands from the documentation, but ran the mono camera instead of stereo. After passing through the accumulator and optimizer, only results for lidar were shown; the others were for the stereo camera and radar sensor. Would you please let me know how you got the mono camera to work? Thanks.

parsep commented 4 years ago

> @parsep For the mono camera detector, I tried executing the commands from the documentation, but ran the mono camera instead of stereo. After passing through the accumulator and optimizer, only results for lidar were shown; the others were for the stereo camera and radar sensor. Would you please let me know how you got the mono camera to work? Thanks.

OK, can you make sure that once you run the detector from the command line it actually publishes anything? E.g. check rostopic echo "the correct topic of the mono detector". BTW, the mono detector CLI doesn't publish anything off the shelf, so you have to change it so that it publishes the detections (if you haven't already, have a look at the mono detector node to see how it's done).

dejongyeong commented 4 years ago

> @RonaldEnsing it seems solvePnP requires the image points and object points to be in the same order to work correctly; my friend figured it out!

@parsep I would like to know how you resolved the issue of getting the camera in the right position (shown in the 3D visualisation).

[screenshot: 3D visualization of estimated sensor poses]

The bottom-most TF is the camera, but in our setup the camera is above both the lidar and the radar, as shown below.

[photo: sensor setup]

Looking forward to hearing from you.

parsep commented 4 years ago

@dejongyeong One suggestion: make sure the distance measurements in the accumulator seem reasonable. E.g., knowing your relative camera-lidar positions, make sure the xyz measurements for lidar and camera are meaningful; check the ranges, for example, if you cannot directly correlate the x, y, and z values. If you find either one is wrong, then you have a lead to investigate.

parsep commented 4 years ago

@dejongyeong the order of the detected circle centers from the camera is also worth investigating, if you haven't already.

dejongyeong commented 4 years ago

@parsep thanks for getting back to me. I noticed that the accumulated lidar and camera measurements do not correlate correctly (I'm not sure; it needs further investigation).

[screenshot: accumulated detections]

I'm not sure whether the value of Z should correspond to the value of X, etc. Do you have to modify the object_points.yaml file in the mono detector configuration?

Thanks, and looking forward to hearing from you.

parsep commented 4 years ago

@dejongyeong That seems reasonable at first glance; yes, the x of the lidar can correspond to the z of the camera. Looking at the image, and assuming that from the top-left corner x goes right, y goes down, and z goes into the image, do you have meaningful object points in the yaml file? Of course it should match the pattern physically, I mean the center spacings.

I recall the optimizer having a "reorder" or similar parameter; perhaps change that, but I am only guessing here, not sure if the problem is something else. BTW, try running it with 2 sensors only, without the radar, and see if it makes any difference.

From the image you posted above it's hard for me to understand the scene composition, so maybe simplify that too (fewer positions), so that you can compare it with reality more easily, if you think that can help.
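For reference, a minimal sketch of why lidar x can map to camera z, assuming a REP 103 lidar frame and an OpenCV-style camera optical frame (the frames in this particular rig may differ):

```python
import numpy as np

# Lidar frame (REP 103): x forward, y left, z up.
# Camera optical frame (OpenCV): x right, y down, z forward.
# A point far in front of the lidar (large x) should be far in front
# of the camera (large z), so lidar x maps to camera z:
R_cam_from_lidar = np.array([
    [0, -1,  0],   # x_cam = -y_lidar (right = -left)
    [0,  0, -1],   # y_cam = -z_lidar (down  = -up)
    [1,  0,  0],   # z_cam =  x_lidar (forward stays forward)
])

p_lidar = np.array([5.0, 0.2, -0.1])   # 5 m ahead of the lidar
print(R_cam_from_lidar @ p_lidar)      # -> [-0.2  0.1  5. ]: 5 m ahead of the camera
```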

dejongyeong commented 4 years ago

@parsep I used the default object points in the given mono config file. Is there anything I should change? I presume the xyz in the object points yaml file represents the circle positions (from the top-left corner, x goes right and y goes down).

The sensor arrangement in the scene is as follows: the radar is slightly to the right of the lidar, and the camera is above the lidar.

I'll have to investigate more on that and get back to you.

parsep commented 4 years ago

@dejongyeong As long as the yaml file matches the pattern, there is no need to change anything there. The order of the detected circle centers must exactly match the order of the centers in the yaml file (I think we discussed this earlier), so if it doesn't, just add code to make the ordering correct.

Good luck with the investigation.

dejongyeong commented 4 years ago

@parsep thanks for the feedback. Just a quick question: how can I check the order of the detected circle centers? Thanks.

parsep commented 4 years ago

@dejongyeong Never mind. I printed out the center points from an image (2D pixel values) and then checked whether the order was right by ignoring the z from the yaml file (it is 0) and mapping the pixel values to the x and y from the yaml. It's enough to sort the pixel points in whatever way is most convenient for you and adjust the yaml file to match, as you only do it once.

As an example, you can find the centroid CC of the 4 center points (in pixels) and sort the vector according to it, e.g.: if u < CC.u and v < CC.v, place it at index 0; if u < CC.u but v > CC.v, index 1; and so on. The yaml file must then follow the same order: 0,0,0 0,n,0 ...
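A minimal Python sketch of that quadrant sort (the actual detector is C++; this helper and the pixel values are illustrative only):

```python
# centers: four (u, v) pixel tuples from the circle detector.
def sort_centers(centers):
    cu = sum(u for u, _ in centers) / len(centers)  # centroid u (CC.u)
    cv = sum(v for _, v in centers) / len(centers)  # centroid v (CC.v)
    # Order: 0 = top-left, 1 = bottom-left, 2 = top-right, 3 = bottom-right,
    # matching a yaml order of (0,0,0), (0,n,0), (n,0,0), (n,n,0).
    return sorted(centers, key=lambda p: (p[0] >= cu, p[1] >= cv))

print(sort_centers([(1045, 655), (955, 550), (955, 650), (1045, 555)]))
# -> [(955, 550), (955, 650), (1045, 555), (1045, 655)]
```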

dejongyeong commented 4 years ago

@parsep I'll try to find that and look into it. Thanks; I'll let you know if I have more questions ;)

dejongyeong commented 4 years ago

@parsep I am looking at this section (https://github.com/tudelft-iv/multi_sensor_calibration/blob/8a66cafd16966c1549463b8a4204aaa41cf3402b/mono_detector/src/lib/detector.cpp#L111) that sorts the center points. I tried modifying it, but the results are not as expected; the ordering of the circle detections changes (output of L107 after the sort).

[screenshot: circle detection ordering output]

I was wondering, could you send a screenshot of the modification you made? Thanks.

parsep commented 4 years ago

@dejongyeong Unfortunately not right now. It seems you get a butterfly-like order (when traversing the points); I'd expect something like (or similar to):

9xx 555
9xx 655
10xx 555
10xx 655

And the corresponding yaml file being:

0 0 0
0 0.2x 0
0.2x 0 0
0.2x 0.2x 0

I'd say it doesn't have to be exactly like this, but it's more intuitive for me. If you keep the yaml file matching the butterfly order you have, maybe that works too. What matters for sure is that the order in which you sort the points matches the one in the yaml (you can order the yaml file however it suits you).

dejongyeong commented 4 years ago

@parsep Thanks for getting back to me. Yes, the order seems like a butterfly order. I'm trying to sort it in the order corresponding to the yaml file, but could not get it sorted after trying a few different approaches. The information above might help me clear up some questions, though.

dejongyeong commented 4 years ago

@parsep Just wondering, should the ordering match the radi.push_back as well, in L117? https://github.com/tudelft-iv/multi_sensor_calibration/blob/8a66cafd16966c1549463b8a4204aaa41cf3402b/mono_detector/src/lib/detector.cpp#L117

Thanks.

parsep commented 4 years ago

@dejongyeong If I recall right, radi holds the radius of each circle, which wasn't used, so it shouldn't matter; but please double-check this.

dejongyeong commented 4 years ago

> @dejongyeong That seems reasonable at first glance; yes, the x of the lidar can correspond to the z of the camera. Looking at the image, and assuming that from the top-left corner x goes right, y goes down, and z goes into the image, do you have meaningful object points in the yaml file? Of course it should match the pattern physically, I mean the center spacings.
>
> I recall the optimizer having a "reorder" or similar parameter; perhaps change that, but I am only guessing here, not sure if the problem is something else. BTW, try running it with 2 sensors only, without the radar, and see if it makes any difference.
>
> From the image you posted above it's hard for me to understand the scene composition, so maybe simplify that too (fewer positions), so that you can compare it with reality more easily, if you think that can help.

@parsep could you recall in which line the optimizer has a "reorder" or similar parameter?

I'm able to sort the circle centers to match correspondences with the object points, but still get the same results. Apart from that, are the results OK if the x values of the lidar correspond to the z values of the camera?

[screenshot: accumulated detections]

parsep commented 4 years ago

@dejongyeong Unfortunately not, but try searching for "parset" or something like that, or check the ROS help for that matter… Regarding your detections, the z and x thing between lidar and camera is fine due to the difference in their coordinate systems, but I see your radar detection is off, don't you think? If you turn it off and look at the results from camera and lidar only, maybe you can reason about it.

dejongyeong commented 4 years ago

@parsep I'll look into that. About the radar detection: I'm using the smartmicro UMRR T-153 radar sensor, which differs from the radar sensor used by the authors. I rewrote the output of the radar sensor to match the recommended output (x = x, y = y, and z = RCS value). Isn't that correct?

I tested with only the camera and lidar; it's the same result as before.
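(For reference, a minimal rospy sketch of this kind of repacking, assuming the driver publishes sensor_msgs/PointCloud2 with an "rcs" field; the topic names and field name are assumptions, not from this repo:)

```python
import rospy
import sensor_msgs.point_cloud2 as pc2
from sensor_msgs.msg import PointCloud2

pub = None

def repack(msg):
    # Read x, y and RCS from the driver's cloud; field name "rcs" is assumed.
    points = [(x, y, rcs) for x, y, rcs in
              pc2.read_points(msg, field_names=("x", "y", "rcs"), skip_nans=True)]
    # Republish as an xyz cloud with z carrying the RCS value.
    pub.publish(pc2.create_cloud_xyz32(msg.header, points))

if __name__ == "__main__":
    rospy.init_node("radar_repack")
    pub = rospy.Publisher("/radar_converted", PointCloud2, queue_size=1)
    rospy.Subscriber("/umrr/targets", PointCloud2, repack)  # assumed input topic
    rospy.spin()
```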

dejongyeong commented 4 years ago

@RonaldEnsing @parsep

I am still unable to figure out what causes the offset of the camera TF (the bottom-most TF) shown in the visualization.

[screenshots: calibration visualization]

We tried re-calibrating the intrinsic parameters of the camera and still can't solve the issue. We would like to seek more advice. Thanks.

@abeyang00 Were you able to resolve the issue with the mono camera?

parsep commented 4 years ago

@dejongyeong I don't have a very good clue now, unfortunately. Can't you try taking some more positions? These are at roughly the same distances to the rig, so maybe try putting the pattern farther away or closer, say by a couple of meters, and see. BTW, regarding the radar: now that I looked more closely, I noticed that the z elements seemed off, which is probably OK since z is zeroed out; sorry if I was misleading. What kind of error do you get, by the way, when it reports the optimization results?

dejongyeong commented 4 years ago

@parsep I'll have to try that. No worries regarding the radar.

No error was reported in the output. The RMSE results are shown below:

[screenshot: MCPE RMSE results after lidar config refinement]

The velodyne/radar-to-camera RMSE is much higher than velodyne-to-radar.

parsep commented 4 years ago

@dejongyeong Well, it's not that I can say something is wrong here; I don't remember what values I used to see… Is there any reason you use MCPE? If I remember right, it's minimally connected… why not use FCPE, as in fully connected…?

dejongyeong commented 4 years ago

@parsep I tested all three calibration modes. The reason for using MCPE is that I don't have enough recorded locations. The FCPE results are shown below...

[screenshot: FCPE RMSE results after lidar config refinement]

The difference I noticed is in the visualization of the arc representation when comparing MCPE and FCPE; the arc doesn't look right when visualized. Also, when visualizing both the lidar and radar point clouds, the radar point cloud of the corner reflector is not located at the center of the 4 circles (from the rear view), though it looks OK from a bird's-eye view.

dejongyeong commented 3 years ago

Hi @RonaldEnsing, I have been struggling with monocular vision, and we are planning to find an alternative using stereo vision. I read that the stereo setup was built with a 2× UI-3060CP Rev. camera pair using dense Semi-Global Matching. By any chance, would it be possible to explain in more detail how that is achieved, and how the stereo_msgs/DisparityImage data is published? E.g., what information does it subscribe to for processing prior to publishing the data in the stereo_msgs/DisparityImage message type? Also, which ROS driver did you use for the cameras? Thanks, and looking forward to hearing from you.

RonaldEnsing commented 3 years ago

We have used the ROS ueye driver [1] and image pipeline [2] for processing the stereo images into disparity images and point clouds.

[1] http://wiki.ros.org/ueye [2] http://wiki.ros.org/image_pipeline
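For illustration, a minimal rospy sketch of consuming the disparity output of that pipeline (assuming stereo_image_proc runs in a /stereo namespace, which is an assumption rather than this rig's actual configuration):

```python
import rospy
from stereo_msgs.msg import DisparityImage

def on_disparity(msg):
    # DisparityImage wraps a 32FC1 image plus the stereo geometry (focal
    # length f and baseline T) needed to convert disparity d to depth: Z = f * T / d.
    rospy.loginfo("disparity %dx%d, f=%.1f, baseline T=%.3f",
                  msg.image.width, msg.image.height, msg.f, msg.T)

rospy.init_node("disparity_listener")
rospy.Subscriber("/stereo/disparity", DisparityImage, on_disparity)  # assumed topic
rospy.spin()
```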

EaiPool commented 3 years ago

The RMSE coming from your optimization seems to be quite small (3.5cm), which suggests to me that the problem is not the optimization step. Instead, I think the issue is the number of calibration board locations: as the documentation states, you should aim for at least 10, but it is better to go higher and potentially remove a few inaccurate ones (see the new example).

dejongyeong commented 3 years ago

Hi @EaiPool, thanks once again for the update.

We are revising the placement of our sensors and will retry the calibration soon. I will follow up if I have any inquiries. Thanks.

fabioreway commented 2 years ago

Hi @dejongyeong, did you solve the problem with the wrong camera position? We are facing a similar issue: the reference mono camera ends up under the radar sensor when doing a mono camera + radar only calibration. Thanks.

dejongyeong commented 2 years ago

Hi @fabioreway, unfortunately, after trying different approaches, we still couldn't fix the wrong camera position. After deliberation with my supervisor(s), we decided to procure a stereo camera.

fabioreway commented 2 years ago

Thanks for the clarification @dejongyeong.

@RonaldEnsing @parsep has anyone successfully done a mono+radar calibration using this method? It seems that there is an error in the estimation of the mono camera projections, but after some investigation this week we have not yet been able to fix this problem.