I've uploaded the half-resolution version of the images from this to here:
https://drive.google.com/drive/folders/1JnoUS4rPdubV-6dh94t69PXC0aiaLa58?usp=sharing
Hi, thanks for the images -- I did not have access, so I requested it.
The --focal-px flag is only needed if you want to initialize the intrinsic matrix to something other than max dimension * 1.2:

--focal-px=[float]  Initial focal length in pixels for the camera. Default is max dimension * 1.2.
I'm using the OpenCV calibrateCamera function with the CALIB_USE_INTRINSIC_GUESS flag on. Technically, one could go into the code in camera_calibration.cpp and turn it off. But I think the takeaway for your scenario is: if you know the value of focal-px, pass it explicitly. Here, you would use --focal-px=4600.
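To make the mechanics concrete, here is a minimal sketch (not calico's actual code) of how an initial focal-length guess feeds into OpenCV's calibrateCamera when CALIB_USE_INTRINSIC_GUESS is set. The image size, object_points, and image_points are hypothetical placeholders for your own ChArUco detections:

```python
import cv2
import numpy as np

# Hypothetical placeholders: object_points is a list of (N, 3) float32
# arrays of board corners, image_points a list of (N, 2) detections.
w, h = 2448, 2048                       # example image size
focal_px = 4600.0                       # what --focal-px=4600 would supply

K_guess = np.array([[focal_px, 0.0, w / 2.0],
                    [0.0, focal_px, h / 2.0],
                    [0.0, 0.0, 1.0]])

# CALIB_USE_INTRINSIC_GUESS starts the optimization from K_guess instead
# of letting OpenCV compute its own initial estimate.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    object_points, image_points, (w, h), K_guess, None,
    flags=cv2.CALIB_USE_INTRINSIC_GUESS)
print("per-camera RMS:", rms)
```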
We might have a little back and forth, because there may be some things I'm not picking up on from your email. Another thing to try would be to turn off some or all of the radial distortion parameters and see if the default settings for focal-px do better for your images. Here, leaving the later radial distortion parameters on (--non-zero-k3 --non-zero-tangent) gives me a bigger RMS.
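In OpenCV terms, the corresponding flag combinations look roughly like this (a sketch; the exact combination calico uses may differ):

```python
import cv2

# Sketch: the defaults would keep the later distortion terms fixed at
# zero, and --non-zero-k3 / --non-zero-tangent would release them.
flags_default = (cv2.CALIB_USE_INTRINSIC_GUESS
                 | cv2.CALIB_FIX_K3
                 | cv2.CALIB_ZERO_TANGENT_DIST)
flags_all_terms = cv2.CALIB_USE_INTRINSIC_GUESS  # k3, p1, p2 left free
```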
Finally, if the RMS is still large -- and once I have access to the images I can play around -- I'll see if fiddling with the CALIB_USE_INTRINSIC_GUESS flag changes anything. If it does, I can push a new version in a week or so, as I am working on calico at the moment.
Hi!
Given that calico uses OpenCV, and I can successfully calibrate intrinsics on the same images (with OpenCV directly), this seems strange. I will have a closer look.
I've granted you access, sorry about that - I thought they were accessible with the link only.
Cheers, Oliver
Hi,
So I was able to run the dataset you sent over. First thing -- the dataset you sent had missing images. Calico expects that the images acquired will match across folders, so if there is an image '0000.jpg' in cam1, there should be a corresponding image '0000.jpg' in folder cam2. There is no test for this. So I filled in the blanks in the directories with blank images. To be pedantic -- the cameras need to snap the images across all cameras at the same time for each pattern pose, or be synchronized.
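For instance, a complete input layout would look like this (illustrative folder and file names):

```
input/
├── cam1/
│   ├── 0000.jpg
│   ├── 0001.jpg
│   └── ...
├── cam2/
│   ├── 0000.jpg
│   ├── 0001.jpg
│   └── ...
└── ...
```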
Ok, so given that, I was able to get this dataset to calibrate fine. You said that the focal pixel length was 4460, and that these were resized by a factor of 2, so that should give 2230. I get 2246.21, 2244.49, 2249.36, 2247.66, etc., with RMS per camera at 0.291874 and 0.332593, which is pretty close. The views were not optimal, so I thought this was ok.
Then, the reprojection error for calibrating the whole set is 9.19083, and the reconstruction accuracy error is 0.553381 mm. The reprojection error, I think, can be improved with better and more varied views of the pattern, so moving back from the cameras and acquiring images from different distances could help.
This is what the 6 camera positions look like with respect to the pattern, at one of its positions.
Here are the results computed on my machine, using
./calico --network --output /home/atabb/DemoData/calico-data/oliver-data/results/ --input /home/atabb/DemoData/calico-data/oliver-data/pattern22x16
(I did fix a little OpenMP bug that caused occasional crashes here, so you might want to clone again. I also tested using the Docker container, and I got identical results.)
Here is the altered dataset to generate those results, with the missing files filled in.
Let me know if you have any questions with the above.
Sorry about the missing files. That's embarrassing - there are no images missing in the original files; the upload to Google Drive must have been truncated. They were captured with synchronised cameras, 81 images each, but I see in the Google Drive version some have considerably fewer.
Oddly - I tried the Docker container version, and it worked - then went back and recompiled the local version, and it also worked (on your images and my original ones), so I'm somewhat confused as to what was wrong before.
Calibrating with the full set seems to work much better - the cameras are on a line (physically they are mounted on a bar). I am curious where you found the reprojection error and reconstruction accuracy you quoted for the full calibration? I don't see it in the report, but maybe I'm not looking in the right place.
The issue we have with moving farther away is that the focus range is quite limited (the cameras are focused to approximately 800mm), so further away the pattern becomes quite blurry - will experiment though!
Thanks very much for your reply, Oliver
I see these values are probably the ones from report/total_results.txt:

incremental: Reprojection error, rrmse: 12.8606
incremental: RAE w/ BA, average, stddev, median: 0.140748, 0.108776, 0.11688
SQRT -- RAE w/ BA, average, stddev, median: 0.375163, 0.329812, 0.341878
So this has a worse reprojection error with more images, but the mean reconstruction error of the 3D points is 0.14 mm?
Looking at the reprojection error per foundational relationship: one of these values is 6599 - does this mean one of the frames in particular has outliers or bad detections, perhaps?
Interestingly, when run at full resolution, both reprojection error and reconstruction error are much lower. (I was expecting reprojection error to scale with resolution and be higher.)
Reprojection error:

incremental: Reprojection error, rrmse: 2.95055

Reconstruction accuracy error (RAE) -- note that these values are squared (equations 18, 19); number of valid image points: 315:

incremental: RAE w/ BA, average, stddev, median: 0.0561485, 0.0578827, 0.0416457
SQRT -- RAE w/ BA, average, stddev, median: 0.236957, 0.240588, 0.204073
I found at least one offending image, report/data/cam1/ext52.png - would this be caused by a bad ChArUco detection?
Hi,
Ok, so glad you got it to work. Going back to last month's issue with LAPACK, I'm wondering how things will go if you re-pull this repo / Docker image and try to run on the 1300 images. As to why... I don't know. The minimization is more efficient in the last update.
Going back to comment 4 -- good catch, yes, the ArUco tag was misidentified. I have had that happen very occasionally. It is likely throwing off the estimation some; if you have enough data, it will just be reflected in the error. But you could place a big black rectangle over the part of the image with the misidentified tag. If you don't mind sending me the original image, I might add it to my problem dataset and over the next 6 months create some tests to validate the IDs.
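For the masking workaround, something like this OpenCV snippet would do (the filename and rectangle coordinates are hypothetical; use the region containing the bad tag):

```python
import cv2

# Black out the region with the misidentified tag before re-running calico.
img = cv2.imread("cam1/0052.jpg")            # hypothetical source image
x0, y0, x1, y1 = 1200, 800, 1400, 1000       # hypothetical tag region
cv2.rectangle(img, (x0, y0), (x1, y1), (0, 0, 0), -1)  # thickness=-1 fills
cv2.imwrite("cam1/0052.jpg", img)
```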
Then, where to find the information -- I'm planning to update the README, because in hindsight this is not obvious.
Individual intrinsic camera calibration info, per camera, is at output-directory/data/cam#/cali_results.txt. There,

rms 0.291874

is the reprojection root mean square error from camera calibration for the intrinsic camera calibration parameters using OpenCV, nothing fancy.
Then, to find the error of the multi-camera calibration using calico, see output-directory/total_results.txt.
Algebraic error cost function error, averaged by number of FRs (equation 16)

is equation 16 on page 5 of the current version of the paper on arXiv (note the code is ahead of the document at the moment).
incremental: Reprojection error, rrmse:

is equation 17 in the paper. So this is the reprojection error when using the multi-camera calibration computed with calico.
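For orientation, a root-mean-square reprojection error has the standard form below (the exact normalisation in equation 17 may differ; check the paper):

$$\mathrm{rrmse} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left\lVert \hat{\mathbf{x}}_i - \mathbf{x}_i \right\rVert^2}$$

where the $\mathbf{x}_i$ are detected corner locations and the $\hat{\mathbf{x}}_i$ are their reprojections under the estimated calibration.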
Then, RAE is reconstruction accuracy error, equations 18 and 19, or section 5.1.3 in the paper. Units are mm, and what happens is: take the calibration computed by calico and all of the corners detected on the patterns, and reconstruct the corners in 3D space. Then, since the location of the pattern is known (we set it to the world coordinate system to start the calibration in the first place), calculate the distance between the reconstructed 3D points and the ideal 3D points. Report the average, standard deviation, and median. During the first draft of the paper, I had some unknown issues in the data (similar to your bad detections, but yet different), and the average was really bad, so I also reported the median. Now the average is comparable to the median.
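As a minimal sketch of that computation (illustrative, not calico's implementation), assuming the triangulated corners and the ideal pattern coordinates are both expressed in the world frame in mm:

```python
import numpy as np

def reconstruction_accuracy_error(reconstructed, ideal):
    """Both arrays are (N, 3), in mm; the pattern defines the world frame."""
    d = np.linalg.norm(reconstructed - ideal, axis=1)
    return d.mean(), d.std(), np.median(d)
```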
FINALLY, to read off the calibration resulting from the multi-camera calibration, go to output-directory/cameras-incremental/variables.txt. The first variables will be the camera variables (extrinsic matrices). You would grab the intrinsic information from the data folder as described above. You can also see in this folder the reprojection of the corners as a result of the multi-camera calibration, per equation. The default maximum number of corners used is 10 (the points with circles are the ones chosen for the minimization), but since your pattern is closer to the camera, you may want to use more (the flag --k=20, say). As you found, the data/cam#/ext#.png images show the corners detected and the reprojection results from the per-camera calibration.
That was a lot, but thanks for sending me some data and letting me know how this works for real people out there :) Let me know how it goes.
Best A
And Google announced some changes to Google Photos -- it might be that your 'low interest' photos were pruned away. I couldn't find the right reference for this. Anyway, a zip file might be the way to get around this in the future.
Distance from the camera -- I have also noticed that what seems blurry to me sometimes does fine for computer vision tools. Some of them apply Gaussian filters to start anyway.
Thanks for all this - I have discovered some interesting things since this last correspondence.
1) Camera 4 in the data I sent you (aside from missing half the frames) is almost certainly out of sync (how this happened we're still not sure!)
2) Rolling shutter sometimes affects the data from these cameras, leading to less impressive reprojection error - whether this actually degrades the calibration much (when most camera shots are static enough) is hard to know.
First, I have implemented most of the algorithm in your paper (which was very simple and easy to follow). I've been using Python and scipy least-squares, which seems to work well (perhaps not as fast as Ceres). (It's at github.com/saulzar/multical - not documented at all currently - sorry that I have not credited you yet!) I've been playing around with some rolling-shutter compensation/motion models, which seem promising, and have an interactive UI for visualising the results and tools for comparing two different calibrations, which have been time consuming but quite valuable for debugging. If I get time, it would be good to import calico calibrations.
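As a flavour of that approach, here is a toy, self-contained scipy least-squares example in the same spirit (not multical's actual code): recovering a focal length from synthetic pinhole projections of known 3D points.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
pts3d = rng.uniform([-0.2, -0.2, 0.6], [0.2, 0.2, 1.0], size=(50, 3))
f_true = 2230.0
# Synthetic pinhole observations (principal point at origin) with noise.
obs = f_true * pts3d[:, :2] / pts3d[:, 2:3] + rng.normal(0.0, 0.3, (50, 2))

def residuals(params):
    f = params[0]
    pred = f * pts3d[:, :2] / pts3d[:, 2:3]
    return (pred - obs).ravel()

result = least_squares(residuals, x0=[1000.0], method="trf")
print("estimated focal length:", result.x[0])   # close to 2230
```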
I also have a couple of questions:
Are the datasets in your paper available anywhere? (found them)
I would be curious to verify the algorithm by checking that we get similar results. For my initialisation, I haven't added the hand-eye mode yet, but the overlapping case should be good to go. I had more luck with a clustering/averaging algorithm to get relative poses than with the least-squares method you cited - it seems to need a scale normalisation (it uses rotations and translations, but they're not always of the same magnitude if you use mm compared to meters).
The second one might be obvious... but in your paper title, what is the "asynchronous" part? I think out-of-sync camera calibration is possible, but this doesn't seem to be the context - does it refer to cameras which don't overlap views?
Thanks! Oliver
Hi Oliver,
Glad you found the datasets.
The second one might be obvious... but in your paper title, what is the "asynchronous" part? I think out-of-sync camera calibration is possible, but this doesn't seem to be the context - does it refer to cameras which don't overlap views?
Ok, the async part was the initial framing of the paper. I do calibrate async cameras. To do so: place the calibration object, run the program to acquire images from all cameras, place the object in a new location, press the button to acquire images. Repeat about 20 times, and you're done. That's why this method does not need 1,000 images. The images themselves are synced - because the object is still - but the cameras are not. I am using cheap webcams; it takes about 2 minutes to grab all of the images I need with this approach.
Hope that makes sense.
For citing this work, at this stage since your repo is public, you could put "reimplementation of https://github.com/amy-tabb/calico" in your README or something similar, that would do the trick.
Best A
Resolved this over email.
Hi!
I've been trying to use calico to calibrate our 6-camera system - however, the most difficulty I'm having is getting it to calibrate intrinsics. Below is an example: I have specified --focal-px=3000, yet the estimate only moves a tiny amount from 3000 (and has a large RMS). The same applies for any other --focal-px value, or for the default (4800 if not specified); the true value is approximately 4460.
I'm using a 22x16 ChArUco board, which I have checked is exactly like the one calico expects.