acfr / cam_lidar_calibration

(ITSC 2021) Optimising the selection of samples for robust lidar camera calibration. This package estimates the extrinsic calibration parameters between the camera and lidar frames.
Apache License 2.0

Question about camera undistortion in non-fisheye cases #35

Closed: juliangaal closed this issue 1 year ago

juliangaal commented 1 year ago

Hi, thank you for this great package! I have gotten good initial results.

I was wondering why the image is only explicitly undistorted when the distortion model is fisheye. With, say, OpenCV's default distortion model, no undistortion would be applied to an incoming image, which can significantly alter the results, especially when using low-cost cameras.
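For illustration, a minimal sketch of what handling both cases could look like, assuming the intrinsics and distortion coefficients have already been loaded (the function and variable names here are mine, not the package's):

```cpp
#include <opencv2/opencv.hpp>

// Sketch only: branch on the distortion model before feature extraction.
// camera_matrix and dist_coeffs are assumed to come from the camera
// parameters file; names are illustrative.
cv::Mat undistortInput(const cv::Mat& raw,
                       const cv::Mat& camera_matrix,
                       const cv::Mat& dist_coeffs,
                       bool is_fisheye)
{
    cv::Mat undistorted;
    if (is_fisheye) {
        // Equidistant (fisheye) model: 4 distortion coefficients
        cv::fisheye::undistortImage(raw, undistorted, camera_matrix,
                                    dist_coeffs, camera_matrix);
    } else {
        // OpenCV's default plumb-bob model: k1, k2, p1, p2[, k3]
        cv::undistort(raw, undistorted, camera_matrix, dist_coeffs);
    }
    return undistorted;
}
```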

darrenjkt commented 1 year ago

Hi, thanks for your interest. In our group we just haven't had to use this repo with non-fisheye cameras. If you have another type of distortion, you'd have to add some code to undistort the image first.

juliangaal commented 1 year ago

Thanks for the quick reply.

What are your plans for this repo? I am asking because of #33 and further changes I am working on in my branch for added safety and user-friendliness. Are you interested in PRs?

The main difference in my branch is usability:

darrenjkt commented 1 year ago

Thanks, the PR looks really useful! We are currently making some improvements internally that we have not pushed to the main branch yet. We plan to do that soon and review your PR concurrently. We'll keep you updated.

juliangaal commented 1 year ago

Nice. With "internally", are you speaking of changes to the minimization problem, or just general structure, code smell, etc.?

jclinton830 commented 1 year ago

@juliangaal the internal changes @darrenjkt is talking about are in the calib-v2 branch.

The changes made here mainly introduce some user-friendliness to the way the chessboard is extracted and its dimensions are registered. In the previous method, the feature extraction sliders had to be used to create a very fine filter that removed everything other than the board.

In calib-v2 we have more of a hybrid approach:

1. Use the feature extraction sliders to create a filter (it can be as big as you want). The filtered scene must be static, with no moving items.
2. Capture this static scene, without the chessboard, using the capture background button.
3. Place the chessboard facing the camera and lidar, as centred as possible with respect to the sensors and as perpendicular as possible to the ground. You should see background subtraction occurring on every loop of the program, detecting anything that was not in the static scene (sketched below).
4. Take your first sample of the chessboard using the capture sample button.
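To make the background-subtraction step concrete, here is a rough sketch of the idea using PCL's octree change detector. This is my illustration of the concept, not necessarily how calib-v2 implements it; the function name and resolution are made up.

```cpp
#include <pcl/common/io.h>
#include <pcl/octree/octree_pointcloud_changedetector.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>
#include <vector>

// Return the points of `current` that were not present in the captured
// static `background` scene (e.g. the newly placed chessboard).
pcl::PointCloud<pcl::PointXYZ>::Ptr
extractForeground(const pcl::PointCloud<pcl::PointXYZ>::Ptr& background,
                  const pcl::PointCloud<pcl::PointXYZ>::Ptr& current,
                  float resolution = 0.05f)  // octree leaf size in metres
{
    pcl::octree::OctreePointCloudChangeDetector<pcl::PointXYZ> octree(resolution);

    // Index the captured static scene, then switch to a fresh buffer.
    octree.setInputCloud(background);
    octree.addPointsFromInputCloud();
    octree.switchBuffers();

    // Index the live scan; voxels absent from the background are "new".
    octree.setInputCloud(current);
    octree.addPointsFromInputCloud();

    std::vector<int> new_point_indices;
    octree.getPointIndicesFromNewVoxels(new_point_indices);

    pcl::PointCloud<pcl::PointXYZ>::Ptr foreground(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::copyPointCloud(*current, new_point_indices, *foreground);
    return foreground;
}
```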

The capture sample routine has also been improved: after you press the capture sample button, it takes 5 consecutive frames and uses the running average of the board sample's parameters. You repeat the capture process until you think you have a good amount of data, then press the optimise button to run the optimisation and generate the calibration parameters.
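Conceptually, the averaging step amounts to something like the sketch below (written as a plain mean over the buffered frames); the struct and its fields are hypothetical stand-ins for whatever board parameters a sample actually stores.

```cpp
#include <Eigen/Dense>
#include <vector>

// Hypothetical board sample; the real code's fields may differ.
struct BoardSample {
    Eigen::Vector3d centre = Eigen::Vector3d::Zero();  // board centre, lidar frame
    Eigen::Vector3d normal = Eigen::Vector3d::Zero();  // board plane normal
    double width = 0.0;   // measured board width (m)
    double height = 0.0;  // measured board height (m)
};

// Average N consecutive frames (e.g. N = 5) into one stable sample.
BoardSample averageSamples(const std::vector<BoardSample>& frames)
{
    BoardSample avg;
    for (const auto& s : frames) {
        avg.centre += s.centre;
        avg.normal += s.normal;
        avg.width  += s.width;
        avg.height += s.height;
    }
    const double n = static_cast<double>(frames.size());
    avg.centre /= n;
    avg.width  /= n;
    avg.height /= n;
    avg.normal.normalize();  // mean direction, re-normalised to unit length
    return avg;
}
```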

In addition to this, I have reformatted a lot of files with clang-format for consistency and have added pre-commit to the project. Hence you will see that a lot of files have changed in this branch.

I have also fixed some issues in the Python script visualise_results.py to avoid sudden jumps in the plotted angles near ±2π caused by angle wrap-around.
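For context, the standard remedy for that kind of wrap-around is to unwrap the angle sequence, i.e. remove artificial 2π jumps between consecutive samples (what numpy.unwrap does in Python). A minimal sketch of the idea, assuming per-step changes stay below π:

```cpp
#include <cmath>
#include <vector>

// Keep an angle time series continuous by carrying a cumulative
// multiple-of-2*pi offset across wrap-around jumps.
std::vector<double> unwrapAngles(const std::vector<double>& angles)
{
    std::vector<double> out(angles);
    double offset = 0.0;
    for (std::size_t i = 1; i < angles.size(); ++i) {
        const double delta = angles[i] - angles[i - 1];
        if (delta > M_PI)       offset -= 2.0 * M_PI;
        else if (delta < -M_PI) offset += 2.0 * M_PI;
        out[i] = angles[i] + offset;
    }
    return out;
}
```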

These changes were requested by one of our own members, @chinitaberrio, and implemented by me.

With regard to your PR: it will almost certainly conflict with calib-v2, since I have reformatted most of the files in the package. I noticed that your changes in #33 are not big; any chance you can add them to calib-v2 instead? @chinitaberrio is meant to test calib-v2, but she has been very busy, so I am not sure if she has had time to do this. Once she is happy with the changes, we can merge the branch into master.

If you have any questions, keep commenting here. Thanks.

juliangaal commented 1 year ago

This sounds like a very nice improvement! My dev branch mainly introduces the changes mentioned here, but I guess most of them won't be compatible with v2. That's fine, though; I have gotten very good results with v1.

The main issue I see with my setup: I am not mounting the board to anything, I am holding it, so I have to adjust the filter parameters very carefully. Without a fixed mounting position, using consecutive scans may introduce too much noise. But it works well enough as a proof of concept.

A couple of questions:

jclinton830 commented 1 year ago

@juliangaal apologies, I did not realise your setup was handheld.

Yes, we can provide you with some of our data. We are setting up some new lidars on one of our vehicles this week and will collect some new data. So bear with me for a few days.

I am actually very new to this project so I haven't got a lot of experience with the results of this package. After testing it briefly, @chinitaberrio did mention that V2 gave much better results. However, we believe that it needs to be further tested. We will be testing the new updates in the next couple of weeks and will let you know how we are going.

The reason we still use the feature extraction UI is to create a scene where the point cloud is static. We work in a small section of a big lab where there is always a lot going on, like people walking and robots moving. The feature extraction UI lets us carve out a small 5 × 5 × 4 m view that is static. However, you are right that with your handheld setup this becomes difficult.
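For what it's worth, the effect of those sliders is essentially a box crop of the incoming cloud. A minimal PCL sketch, with made-up bounds standing in for the slider values:

```cpp
#include <pcl/filters/crop_box.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// Keep only points inside a fixed working volume (roughly the
// 5 x 5 x 4 m region mentioned above). Bounds are illustrative.
pcl::PointCloud<pcl::PointXYZ>::Ptr
cropToStaticRegion(const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud)
{
    pcl::CropBox<pcl::PointXYZ> box;
    box.setInputCloud(cloud);
    box.setMin(Eigen::Vector4f(0.0f, -2.5f, -1.0f, 1.0f));  // x, y, z, 1
    box.setMax(Eigen::Vector4f(5.0f,  2.5f,  3.0f, 1.0f));

    pcl::PointCloud<pcl::PointXYZ>::Ptr cropped(new pcl::PointCloud<pcl::PointXYZ>);
    box.filter(*cropped);
    return cropped;
}
```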

Yes, I will post a photo of our board tomorrow. I am not in the lab today.

jclinton830 commented 1 year ago

@juliangaal Here are a couple of pictures of our board and tripod. The tripod has a custom swivel mechanism that adds an additional degree of freedom.

[Photos: IMG_20230607_103557, IMG_20230607_103608]

juliangaal commented 1 year ago

Thank you for those great pictures! They definitely give me ideas for mounting.