PhotonVision / photonvision

PhotonVision is the free, fast, and easy-to-use computer vision solution for the FIRST Robotics Competition.
https://photonvision.org
GNU General Public License v3.0

Mrcal produces wildly nonsensical outputs for some calibration inputs #1120

Open mcm001 opened 10 months ago

mcm001 commented 10 months ago

With OV9281 + Pi 5, pictures taken all at the same angle produce wildly wrong calibration outputs. See the two attached datasets for reference. I'm going to just assume that bad calibration inputs made the solver go kaboom.

Also, at 1280x720, mrcal calibration through Photon crashed after ~65 pictures with an inscrutable SIGSEGV in libc. We were not able to reproduce this.

photon_calibration_Right_1280x800_2.json photon_calibration_Right_1280x800_1.json

dkogan commented 8 months ago

Hi. I'm happy to debug this type of stuff. Any .cameramodel file I produce contains all the inputs, and I should be able to figure out what went wrong, if you send me a copy. In a perfect world, nothing should ever fail in a non-obvious way, and I'd like to have mrcal be closer to that.

mcm001 commented 8 months ago

Sure. The dataset is pretty terrible (photos are taken at the same angle with little variation between them). I can't reproduce this on WSL using mrcal version 2.4.1-1jammy1 and the command mrcal-calibrate-cameras --corners-cache corners.vnl --lensmodel LENSMODEL_OPENCV8 '*.png' --focal 1200 --object-width-n 7 --object-spacing 0.0254; I'll try again on my Ubuntu install later. In this case the model was created by Photon via our JNI, so I don't have a .cameramodel file. It's also worth noting that the corners.vnl file was created using Photon, not mrgingham.

bad_cal_dataset.zip

dkogan commented 8 months ago

Found your problem: the detector is producing inconsistent results. I pulled out the corner detections, and plotted them overlaid onto the image they came from. The zsh session, for two arbitrary inconsistent images:

$ for f (img0.png img10.png) { < corners.vnl vnl-filter "filename==\"$f\"" -p x,y | feedgnuplot --square --domain --image $f --with 'linespoints pt 7 ps 2 lw 2' --hardcopy ${f:r}-detections.${f:e} }

Wrote output to img0-detections.png
Wrote output to img10-detections.png

Looks like this:

img0-detections img10-detections

Note that in one image the corners are reported row-first, while in the other image they're reported column-first. Either ordering is fine on its own, but it must be consistent: the i-th corner in EVERY board detection must represent the same physical point, and this dataset violates that.
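To make the ordering requirement concrete, here's a minimal sketch (pure NumPy; `normalize_corner_order` is a hypothetical helper, not part of mrcal or PhotonVision) that reorders a square-grid detection into a canonical row-major, top-left-first order. It assumes the board roll is near 0, so "top-left" is unambiguous:

```python
import numpy as np

def normalize_corner_order(corners, gridn=7):
    """Reorder a gridn x gridn chessboard detection to a canonical
    row-major order starting at the top-left corner.

    corners: (gridn*gridn, 2) array of (x, y) pixel coordinates.
    Hypothetical helper; assumes board roll is near 0."""
    c = np.asarray(corners, dtype=float).reshape(gridn, gridn, 2)

    # If the grid was listed column-first, the first "row" of the array
    # sweeps mostly in y rather than x; transpose to make rows sweep in x.
    dx_along_row = abs(c[0, -1, 0] - c[0, 0, 0])
    dy_along_row = abs(c[0, -1, 1] - c[0, 0, 1])
    if dy_along_row > dx_along_row:
        c = c.transpose(1, 0, 2)

    # Flip so rows run left-to-right and columns run top-to-bottom.
    if c[0, 0, 0] > c[0, -1, 0]:
        c = c[:, ::-1]
    if c[0, 0, 1] > c[-1, 0, 1]:
        c = c[::-1]
    return c.reshape(-1, 2)
```

Running every detection through a normalization like this (or simply using mrgingham, which guarantees the ordering) would make the i-th corner mean the same physical point in every image.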

mrgingham always returns data from the top-left of the imager, then completing the row to the right, then moving to the next row, and so on. So mrgingham would have made this work:

$ for f (img0.png img10.png) { mrgingham --gridn 7 $f | vnl-filter -p x,y | feedgnuplot --square --domain --image $f --with 'linespoints pt 7 ps 2 lw 2' --hardcopy ${f:r}-detections-mrgingham.${f:e} }

Wrote output to img0-detections-mrgingham.png
Wrote output to img10-detections-mrgingham.png

img0-detections-mrgingham img10-detections-mrgingham

However, do note that mrgingham starts at the top-left of the IMAGE, which it assumes is the top-left of the BOARD; that may not actually be the case. If you give mrgingham images that are upside-down, it can't tell the difference, and you may get errors. So it's strongly recommended to vary the pitch and yaw of the chessboard, but leave the roll mostly at 0. Relatedly, if you have a stereo pair where one of the cameras is mounted upside-down, you should flip the corner order of the mrgingham results for that camera.
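For the upside-down stereo case, the fix is cheap: rotating the board's ordering by 180 degrees is just reversing the corner list. A hedged sketch (hypothetical helper, not the mrcal API; the recipe tool linked below does this properly on .vnl files):

```python
import numpy as np

def flip_detection_180(corners):
    """Rotate a board detection's ordering by 180 degrees: corner i
    becomes corner N-1-i. Pixel coordinates are untouched; only the
    order (which physical board point each row represents) changes.
    Hypothetical helper, not part of mrcal."""
    return np.asarray(corners)[::-1].copy()
```

Applying this to the detections from the upside-down camera makes its ordering consistent with the right-side-up one.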

dkogan commented 8 months ago

There's a tool to rotate corner detections that you can use to handle upside-down cameras. Or to fix the detections in this dataset (if you don't want to re-run mrgingham). See: https://mrcal.secretsauce.net/recipes.html#calibrating-upside-down

mcm001 commented 8 months ago

Ah, good find! I think this is just us re-discovering why you made mrgingham. We currently use the stock OpenCV corner detector and don't do anything in particular to guarantee the ordering. I also have an open (somewhat dead) PR to use mrgingham instead.

https://github.com/PhotonVision/photonvision/assets/29715865/0f045168-e585-48d6-ac58-585243cb752f

On the actual theory of what's going on: why does this flipping ruin the dataset? It seems like in the horizontal-stripe configuration corner number one is at the top left and the pattern continues to the right, whereas with the vertical stripes corner one is at the bottom left with the pattern continuing vertically up. Shouldn't that just correspond to a rotation of 90 degrees about the chessboard normal?

dkogan commented 8 months ago

On the actual theory of what's going on: why does this flipping ruin the dataset? It seems like in the horizontal-stripe configuration corner number one is at the top left and the pattern continues to the right, whereas with the vertical stripes corner one is at the bottom left with the pattern continuing vertically up. Shouldn't that just correspond to a rotation of 90 degrees about the chessboard normal?

This is definitely A problem, but it might not be THE problem.

So it might still work despite the inconsistent ordering, but at the very least it makes the results hard to reason about. Since it's easy to run mrgingham instead as an experiment, you can eliminate this source of error. I'd try that to see if it solves your problem. If not, you'll know the issue lies elsewhere, and you can then debug more easily.
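One way to catch this failure mode before handing a dataset to the solver is a quick consistency check over all detections: estimate each board's traversal axis from its first row of corners and flag the outliers. A rough sketch, again assuming near-zero board roll (these helpers are hypothetical, not part of mrcal or PhotonVision):

```python
import numpy as np

def traversal_axis(corners, gridn=7):
    """Return 'x' if the first gridn corners sweep mostly horizontally,
    'y' if mostly vertically. Hypothetical helper for sanity-checking
    a dataset; assumes board roll is near 0."""
    c = np.asarray(corners, dtype=float)
    d = c[gridn - 1] - c[0]  # displacement across the first grid row
    return 'x' if abs(d[0]) >= abs(d[1]) else 'y'

def find_inconsistent(detections, gridn=7):
    """detections: dict of filename -> (gridn*gridn, 2) corner array.
    Returns filenames whose ordering disagrees with the majority."""
    axes = {f: traversal_axis(c, gridn) for f, c in detections.items()}
    counts = {'x': 0, 'y': 0}
    for a in axes.values():
        counts[a] += 1
    majority = 'x' if counts['x'] >= counts['y'] else 'y'
    return sorted(f for f, a in axes.items() if a != majority)
```

On this dataset, a check like this would have flagged the column-first images (such as img10.png above) before the solver ever saw them.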