mcm001 opened 10 months ago
Hi. I'm happy to debug this type of stuff. Any .cameramodel file I produce contains all the inputs, and I should be able to figure out what went wrong, if you send me a copy. In a perfect world, nothing should ever fail in a non-obvious way, and I'd like to have mrcal be closer to that.
Sure, the dataset is pretty terrible (the photos are all taken at the same angle with little variation between them). I can't reproduce this on WSL using mrcal version 2.4.1-1jammy1 and the command

$ mrcal-calibrate-cameras --corners-cache corners.vnl --lensmodel LENSMODEL_OPENCV8 '*.png' --focal 1200 --object-width-n 7 --object-spacing 0.0254

I'll try again on my Ubuntu install later. In this case the model was created by Photon via our JNI, so I don't get the .cameramodel file. It's worth noting that the corners.vnl file was also created using Photon, not mrgingham.
Found your problem: the detector is producing inconsistent results. I pulled out the corner detections, and plotted them overlaid onto the image they came from. The zsh session, for two arbitrary inconsistent images:
$ for f (img0.png img10.png) { < corners.vnl vnl-filter "filename==\"$f\"" -p x,y | feedgnuplot --square --domain --image $f --with 'linespoints pt 7 ps 2 lw 2' --hardcopy ${f:r}-detections.${f:e} }
Wrote output to img0-detections.png
Wrote output to img10-detections.png
Looks like this:

[img0-detections.png and img10-detections.png: the same chessboard in both images, but the plotted corner order runs along the rows in one image and along the columns in the other]
Note that in one image the corners are reported row-first, and in the other image they're column-first. Either one is fine, but it must be consistent: the i-th corner in EVERY board detection must represent the same physical point, and this is violated in this dataset.
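One way to guard against this (an illustrative sketch only; this is not an mrcal API) is to normalize every detection to a fixed image-based convention before it reaches the solver. A minimal numpy sketch, assuming a square 7x7 grid scanned either row-major or column-major, and a board roll near 0:

import numpy as np

def canonicalize_board(corners, gridn=7):
    # Reorder one board detection so it starts at the image top-left
    # and scans each row to the right. corners: (gridn*gridn, 2) pixel
    # coords in either row-major or column-major scan order.
    grid = corners.reshape(gridn, gridn, 2)

    # Mean step between consecutive corners along axis 1. If the
    # detector scanned column-first, this step is mostly vertical in
    # the image, so transpose to make axis 1 image-horizontal.
    step1 = np.mean(grid[:, 1:, :] - grid[:, :-1, :], axis=(0, 1))
    if abs(step1[1]) > abs(step1[0]):
        grid = grid.transpose(1, 0, 2)

    # Make axis 1 point rightward (+x) and axis 0 downward (+y)
    step1 = np.mean(grid[:, 1:, :] - grid[:, :-1, :], axis=(0, 1))
    if step1[0] < 0:
        grid = grid[:, ::-1, :]
    step0 = np.mean(grid[1:, :, :] - grid[:-1, :, :], axis=(0, 1))
    if step0[1] < 0:
        grid = grid[::-1, :, :]

    return grid.reshape(-1, 2)

Note that this bakes in the same roll assumption discussed below: if the board can be rotated arbitrarily in the image, no image-based convention can recover the physical ordering.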
mrgingham always returns the corners starting at the top-left of the imager, completing each row to the right, then moving down to the next row, and so on. So mrgingham would have made this work:
$ for f (img0.png img10.png) { mrgingham --gridn 7 $f | vnl-filter -p x,y | feedgnuplot --square --domain --image $f --with 'linespoints pt 7 ps 2 lw 2' --hardcopy ${f:r}-detections-mrgingham.${f:e} }
Wrote output to img0-detections-mrgingham.png
Wrote output to img10-detections-mrgingham.png
However, do note that mrgingham will start at the top-left of the IMAGE, which it assumes is the top-left of the BOARD, which may not be the case. If you give mrgingham some images that are upside-down, it won't be able to tell the difference, and you'll get this same inconsistency. So it's strongly recommended to vary the pitch and yaw of the chessboard, but leave the roll mostly at 0. Relatedly, if you have a stereo pair where one of the cameras is upside-down, you should flip the corner order in the mrgingham results for that camera.
There's a tool to rotate corner detections that you can use to handle upside-down cameras. Or to fix the detections in this dataset (if you don't want to re-run mrgingham). See: https://mrcal.secretsauce.net/recipes.html#calibrating-upside-down
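The linked recipe has the proper tool for this; just to show the idea, for a camera rotated a full 180 degrees, "flipping the corner order" is literally reversing the sequence. A sketch (my illustration, not the recipe's tool), assuming a detector that scans from the image top-left like mrgingham, and a fully-detected grid:

import numpy as np

def flip_corners_180(corners):
    # A 180-degree rotation maps grid cell (i, j) to (N-1-i, N-1-j);
    # in row-major order that's index N*N-1-k, so the whole corner
    # sequence simply reverses.
    return np.ascontiguousarray(corners[::-1])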
Ah, good find! I think this is just us re-discovering why you made mrgingham. We currently just use the stock OpenCV corner detector, and don't do anything in particular to guarantee the order. I have an open (somewhat dead) PR to use mrgingham instead, too.
https://github.com/PhotonVision/photonvision/assets/29715865/0f045168-e585-48d6-ac58-585243cb752f
On the actual theory of what's going on: why does this flipping ruin the dataset? It seems like in the horizontal-stripe configuration corner number one is at the top left and the pattern continues to the right, whereas with the vertical stripes corner one is in the bottom left with the pattern continuing vertically up. This should just correspond to a rotation of 90 degrees about the chessboard normal, right?
This is definitely A problem, but it might not be THE problem:
If you mis-identify the corners, the board shape modeling becomes nonsensical. The board is mostly flat, so you mostly have symmetry, and this would cause small accuracy problems, not catastrophic failures.
Assuming the shape is flat, you can compensate for the incorrect ordering with an extra transform, as you say. But it's up to your seeding algorithm to compute and apply that transform. Does it do that correctly? Are you sure?
So it might still work despite the inconsistent ordering, but at the very least it's hard to think about. Since it's easy to run mrgingham instead as an experiment, you can eliminate this source of error. I'd try that to see if it solves your problems. If not, you know the issue lies elsewhere, and you can then debug more easily.
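To make the 90-degree relation above concrete: the column-first ordering is a fixed permutation of the row-first one (a rotation of the index grid), which the seeding algorithm would have to detect and undo for each image. A numpy sketch (hypothetical names, just to demonstrate the relation):

import numpy as np

N = 7
# Index grids: entry [i, j] is where physical corner (i, j) lands in
# the detector's output sequence.
row_first = np.arange(N * N).reshape(N, N)  # top-left, scanning rows to the right
col_first = np.rot90(row_first)             # bottom-left, scanning columns upward

# The two orderings differ by a 90-degree rotation of the index grid,
# i.e. a fixed permutation of the corner sequence
perm = col_first.ravel()

# Re-express a column-first detection in row-first order
corners_col_first = np.random.rand(N * N, 2)  # stand-in for a real detection
corners_row_first = corners_col_first[perm]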
With an OV9281 + Pi 5, pictures all taken at the same angle produce wildly wrong calibration outputs. See the two attached datasets for reference. I'm going to just assume that bad calibration inputs made the solver go kaboom.
Also, at 1280x720, mrcal calibration with Photon crashed at ~65 pictures with an ineffable libc SIGSEGV. We were not able to reproduce this.
photon_calibration_Right_1280x800_2.json
photon_calibration_Right_1280x800_1.json