tier4 / CalibrationTools


[lidar2camera] Discarding apriltag: size 13.2729 px. Expecting at least 157.194 px - New test scenario questions #137

Closed zymouse closed 6 months ago

zymouse commented 9 months ago

Preface

[screenshot]

Description of the problem

In rviz2 (lower panel), there is a large discrepancy between the camera detection visualization and the lidar detection point visualization.

[screenshot]

Finally, thank you very much; this is a great calibration tool!

knzo25 commented 9 months ago

@zymouse Thank you for using our tool.

The `Discarding apriltag: size 13.2729 px. Expecting at least 157.194 px` message appears when a detection spans only a few pixels (in this case 13.2729), so I am assuming that is not your question this time (that warning will be removed soon, since there are better ways to filter, and it should not have been a warning in the first place).
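For intuition (an assumption on my part, not necessarily the detector's exact criterion): under a pinhole model the apparent tag size in pixels is roughly `focal_length_px * tag_size_m / distance_m`, so a detection of only ~13 px corresponds to a tag that is either very far away or spurious.

```python
# Rough pinhole intuition (an assumption, not the detector's actual filter):
# apparent size in pixels ~= focal_length_px * tag_size_m / distance_m
f_px, tag_m = 1900.0, 0.8  # placeholder focal length (px) and tag size (m)
for size_px in (13.2729, 157.194):
    print(f"{size_px} px -> roughly {f_px * tag_m / size_px:.1f} m away")
```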

With regard to the problem you describe, could you collect more points (i.e., move the target to another location)? While PnP can be done with the 4 corners of a single tag, we use at least 3 detections for a decent estimate, and the launcher itself specifies 9 for convergence.

The most probable scenario is that your initial extrinsics are quite off, hence the projection being quite far (the calibrated pose has not yet been computed).
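For reference, the PnP step mentioned above can be sketched with OpenCV's `solvePnP`: the 3D corners of one tag (known physical size) and their detected pixel coordinates already determine a pose, and stacking corners from several detections stabilizes the estimate. Everything below (tag size, corner pixels, intrinsics) is an illustrative placeholder, not data from this issue.

```python
# Minimal PnP sketch (not the tool's actual code) from the 4 corners of one tag.
import cv2
import numpy as np

tag_size = 0.8  # placeholder tag side length in meters
half = tag_size / 2.0

# 3D corners of the tag in its own frame (z = 0 plane).
object_points = np.array(
    [[-half, -half, 0.0], [half, -half, 0.0], [half, half, 0.0], [-half, half, 0.0]]
)

# Detected pixel corners of that tag (placeholder values).
image_points = np.array(
    [[850.0, 480.0], [1010.0, 485.0], [1005.0, 640.0], [845.0, 635.0]]
)

# Placeholder intrinsics; zero distortion assumes an already-undistorted image.
camera_matrix = np.array([[1920.0, 0.0, 960.0], [0.0, 1920.0, 540.0], [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, dist_coeffs)
print(ok, rvec.ravel(), tvec.ravel())  # rotation (Rodrigues) and translation in the camera frame
```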

zymouse commented 7 months ago

Thank you very much for your answer. From what I have found while using the tool, calibration near the center of the camera FOV is more accurate, but on the left and right sides of the FOV the misalignment is visible to the naked eye in rviz2:

[screenshot]

Interactive camera-lidar calibration tool:

[screenshot]

My board size is 1 m. What is your board size (both the whole board and the tag image inside)?

knzo25 commented 7 months ago

Our board is 0.8 m in total. If you view the complete tag as an 8x8 grid, each cell ("pixel") is 10 cm. One of the problems with the configuration is that the image detector finds the black frame, whereas the lidar detector finds the white frame, so there are adaptations in the launchers and code.

zymouse commented 7 months ago

Thank you very much for your answer. Maybe it's my board; mine is not 10 cm per cell. I'll reprint one and try it out.

knzo25 commented 7 months ago

Please let us know how it goes, so we can keep looking for an answer in case the problem persists.

zymouse commented 6 months ago

Reprinting the calibration board to the correct specification gives much better results.

[screenshot]

However, after saving tf.json with the save button in interactive_calibrator and filling the values into sensor_kit_calibration.yaml, the results are much worse.

[screenshots]

In the interactive_calibrator visualization the result looks fine, but after clicking the save button the saved extrinsics are terrible. I have updated the code and tested it.

[screenshot]
zymouse commented 6 months ago
[screenshot]

My tf tree matches this.

knzo25 commented 6 months ago

Hi, I see that you have a frame called camera_top_link. As far as I know, that is not a standard name; usually there are .../camera_link and .../camera_optical_link frames. As such, I can't really know how things are set up on your end.

Calibration itself is between the lidar frame and the optical link, but this tool was overfitted to a design that has outlived its use. Internally, we are required to redesign many things and have them merged by the end of January, which should make things easier for everyone.

In the meantime, if during calibration the visualization looks good but for some reason the saved values do not correspond with what you were expecting, I would recommend taking a look at the /tf_static topic that the calibrators publish. Those are the values that should make sense, and you should be able to replicate them later. Additionally, if you want different tfs, you can always run `ros2 run tf2_ros tf2_echo parent child`.
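If it helps, here is a minimal sketch (not part of the tool) that does the same check programmatically; the parent/child frame names are placeholders to replace with your own:

```python
# Minimal sketch: listen to /tf and /tf_static and print the transform the
# calibrator publishes, to compare against what ends up in the saved files.
# "parent_frame" and "child_frame" are placeholders; substitute your own frames.
import rclpy
from rclpy.node import Node
from rclpy.time import Time
from tf2_ros.buffer import Buffer
from tf2_ros.transform_listener import TransformListener


class TfCheck(Node):
    def __init__(self):
        super().__init__("tf_check")
        self.buffer = Buffer()
        self.listener = TransformListener(self.buffer, self)
        self.timer = self.create_timer(1.0, self.lookup)

    def lookup(self):
        try:
            t = self.buffer.lookup_transform("parent_frame", "child_frame", Time())
            self.get_logger().info(f"translation: {t.transform.translation}")
            self.get_logger().info(f"rotation:    {t.transform.rotation}")
        except Exception as e:  # fails until the transform has been received
            self.get_logger().warn(str(e))


def main():
    rclpy.init()
    rclpy.spin(TfCheck())


if __name__ == "__main__":
    main()
```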

zymouse commented 6 months ago

Okay, thank you very much.

Yesterday I ran into a problem: when the camera and lidar extrinsics are very different, the interactive_calibrator GUI crashes and exits! I worked around it by adjusting the extrinsics.

```
[interactive_calibrator-7] /home/pixbus/pix/robobus/robobus-calibration/install/extrinsic_interactive_calibrator/lib/python3.10/site-packages/extrinsic_interactive_calibrator/image_view.py:704: RuntimeWarning: libshiboken: Overflow: Value -14317449972 exceeds limits of type  [signed] "i" (4bytes).
[interactive_calibrator-7]   painter.drawLine(
[interactive_calibrator-7] OverflowError
[interactive_calibrator-7] /home/pixbus/pix/robobus/robobus-calibration/install/extrinsic_interactive_calibrator/lib/python3.10/site-packages/extrinsic_interactive_calibrator/image_view.py:704: RuntimeWarning: libshiboken: Overflow: Value -722850027.5274026 exceeds limits of type  [signed] "i" (4bytes).
[interactive_calibrator-7]   painter.drawLine(
[interactive_calibrator-7] QPaintDevice: Cannot destroy paint device that is being painted
```
knzo25 commented 6 months ago

Hi, due to the nature of projective geometry, it is not uncommon to have NaN or inf values when the input geometry is wrong.

In our experiments, we always start with very good values from the CAD files of our vehicles so we have not experienced this. I will be sure to add this phenomenon to our list of things to check for our next release next month, but if you want to be sure we fix the problem for your use case (including the previous tf error) we would need access to the data to reproduce the error.
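As an illustration of why the drawing call crashes: Qt's `QPainter.drawLine` takes signed 32-bit integer coordinates, and a bad initial extrinsic can project points to astronomically large (or NaN/inf) pixel values. A guard of roughly this shape (a sketch, not the tool's actual code) would skip such lines instead of crashing:

```python
# Sketch: validate projected endpoints before handing them to QPainter.drawLine,
# whose coordinates must fit in a signed 32-bit int (the "[signed] i" in the log above).
import math

INT32_MIN, INT32_MAX = -2**31, 2**31 - 1


def safe_line(p1, p2):
    """Return rounded integer endpoints, or None if non-finite or out of int32 range."""
    coords = [*p1, *p2]
    if any(not math.isfinite(c) for c in coords):
        return None  # NaN/inf from a degenerate projection: skip this line
    ints = [int(round(c)) for c in coords]
    if any(not (INT32_MIN <= c <= INT32_MAX) for c in ints):
        return None  # would trigger the libshiboken overflow
    return tuple(ints)


# The overflowing value from the log above is rejected instead of crashing:
print(safe_line((-14317449972, 10), (100, 200)))  # None
print(safe_line((10.2, 20.7), (300.0, 400.0)))    # (10, 21, 300, 400)
```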

zymouse commented 6 months ago

`interactive_calibrator` tf saving issue

When `can_publish_tf` is false, the tf published after clicking calibrate is correct. When `can_publish_tf` is true, the tf published after clicking calibrate is incorrect.

[screenshot]

Recordings of different focal length camera tests:

Conclusion: the smaller the focal length, the worse the results on both sides of the camera FOV. Could this be related to the camera intrinsic calibration?

I now know the cause of the problem: rviz and the interactive_calibrator GUI use different projection methods, which is why the 6 mm lens results differ, since after calibrating we always use rviz to validate the results.

zymouse commented 6 months ago

Thank you very much.

`interactive_calibrator` GUI white-screen problem

When I feed image_raw into the interactive_calibrator GUI, more often than not no image shows up; it only works if I keep restarting it. This happens quite frequently.

[screenshot]
knzo25 commented 6 months ago

Hi, just to check: we have used lenses from 30, 60, 85, 90, and 120 degrees without any of the issues you mention (for almost every camera, we try both manual lens undistortion and hardware-based undistortion).

The publish_tf problem you mention is probably due to some configuration issue, but when you are calibrating using lidartags that option should be disabled in the UI (that UI is actually another method altogether, but I launch it because it provides a decent visualization).

Can you post the intrinsics you are using for this calibration here? Just in case: rviz's camera overlay (camera + pointcloud) assumes that the image is undistorted (i.e., not the raw image but the output of the image pipeline). If you use it otherwise, the visualizations will not appear correctly.

zymouse commented 6 months ago

Camera info for image_raw:

```yaml
image_width: 1920
image_height: 1080
camera_name: gmsl
camera_matrix:
  rows: 3
  cols: 3
  data: [1921.51665, 0.0, 954.38261, 0.0, 1922.85157, 515.01868, 0.0, 0.0, 1.0]
distortion_model: plumb_bob
distortion_coefficients:
  rows: 1
  cols: 5
  data: [-0.52304, 0.23345, -0.00128, -9e-05, 0.0]
projection_matrix:
  rows: 3
  cols: 4
  data: [1624.82727, 0.0, 951.27933, 0.0, 0.0, 1836.88538, 511.47128, 0.0, 0.0, 0.0,
    1.0, 0.0]
rectification_matrix:
  rows: 3
  cols: 3
  data: [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]
```
knzo25 commented 6 months ago

Are you trying to visualize the pointcloud over the image_raw directly in rviz?

zymouse commented 6 months ago
[screenshots]

Yes, I visualized it in rviz.

knzo25 commented 6 months ago

Well, that is the reason you cannot see good results in rviz. Rviz expects rectified images, since internally it only reads the P matrix (instead of K and d) to overlay pointclouds and markers on the image. Since your image is distorted (more so the shorter the focal length), you need to rectify it before passing it to rviz (or you can just use our UI).
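For completeness, a minimal rectification sketch with OpenCV, using the K, D, R, and P values from the camera_info posted above (the file name is a placeholder; in practice the standard ROS 2 route is the image_proc rectify node, feeding its rectified output to rviz):

```python
# Minimal sketch: rectify a raw frame with the intrinsics posted above so that
# rviz's camera overlay (which only uses the P matrix) lines up with the pointcloud.
import cv2
import numpy as np

K = np.array([[1921.51665, 0.0, 954.38261],
              [0.0, 1922.85157, 515.01868],
              [0.0, 0.0, 1.0]])
D = np.array([-0.52304, 0.23345, -0.00128, -9e-05, 0.0])
R = np.eye(3)
P = np.array([[1624.82727, 0.0, 951.27933, 0.0],
              [0.0, 1836.88538, 511.47128, 0.0],
              [0.0, 0.0, 1.0, 0.0]])

raw = cv2.imread("frame_raw.png")  # placeholder input image (1920x1080)
map1, map2 = cv2.initUndistortRectifyMap(K, D, R, P[:, :3], (1920, 1080), cv2.CV_32FC1)
rectified = cv2.remap(raw, map1, map2, cv2.INTER_LINEAR)
cv2.imwrite("frame_rect.png", rectified)
```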

zymouse commented 6 months ago

Thank you very much, I used the rectified image and it worked!

[screenshot]
knzo25 commented 6 months ago

@zymouse If you are satisfied with this issue, please remember to close it :)