Closed: masakiyamamotobb closed this issue 1 year ago
Hi @masakiyamamotobb ,
Thanks. Is the link correct? The bagfile's name in the link you sent is data_2023-04-14-08-40-36.bag, which is the name of the bag I had before ...
I'm sorry. This will work:
https://drive.google.com/file/d/1UIIYgOyZhswK1UHJSKbnRwQNO1NWuOtV/view?usp=share_link
Finally, I have calibrated the two cameras with calib.io's software (https://calib.io/products/calib).
In conclusion, the ROS camera_info and the calibration results are mostly the same, so I'm going to leave them as they are for a while.
Camera1:
/camera1/color/camera_info
height: 480
width: 640
distortion_model: "plumb_bob"
D: [0.0, 0.0, 0.0, 0.0, 0.0]
K: [613.9519653320312, 0.0, 321.2705993652344, 0.0, 613.6005859375, 233.10227966308594, 0.0, 0.0, 1.0]
calib.io result
height: 480
width: 640
D: [0.15355816666139743, -0.3289751812254585, -6.328734849366592e-05, 0.00015164782181395655, -0.042770691682211884]
K: [604.8215142379465, 0.0, 323.35292396683775, 0.0, 604.8215142379465, 231.91286204245972, 0.0, 0.0, 1.0]
Camera2:
/camera2/color/camera_info
height: 480
width: 640
distortion_model: "plumb_bob"
D: [0.0, 0.0, 0.0, 0.0, 0.0]
K: [607.23583984375, 0.0, 321.7033386230469, 0.0, 606.1530151367188, 233.6876678466797, 0.0, 0.0, 1.0]
calib.io result
height: 480
width: 640
D: [0.13463235580391597, -0.3227427564493204, 0.0010043320023022865, -0.0010660306530842795, 0.08298263852941928]
K: [599.9799799783843, 0.0, 321.2926259554301, 0.0, 599.9799799783843, 233.4189579460085, 0.0, 0.0, 1.0]
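To see how close the factory camera_info intrinsics are to the calib.io results, one can project a test point with both plumb_bob models and compare the resulting pixel coordinates. Below is a minimal pure-NumPy sketch; the test point and the comparison itself are my own illustration, not something from the thread:

```python
import numpy as np

def project_plumb_bob(K, D, point):
    """Project a 3D point (camera frame) with the plumb_bob model.

    K is the row-major 3x3 intrinsic matrix as a flat list,
    D is [k1, k2, p1, p2, k3]."""
    fx, cx, fy, cy = K[0], K[2], K[4], K[5]
    k1, k2, p1, p2, k3 = D
    x, y = point[0] / point[2], point[1] / point[2]
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return np.array([fx * xd + cx, fy * yd + cy])

# camera1 intrinsics from /camera1/color/camera_info (zero distortion)
K_ros = [613.9519653320312, 0.0, 321.2705993652344,
         0.0, 613.6005859375, 233.10227966308594, 0.0, 0.0, 1.0]
D_ros = [0.0, 0.0, 0.0, 0.0, 0.0]

# camera1 intrinsics from the calib.io result
K_cal = [604.8215142379465, 0.0, 323.35292396683775,
         0.0, 604.8215142379465, 231.91286204245972, 0.0, 0.0, 1.0]
D_cal = [0.15355816666139743, -0.3289751812254585,
         -6.328734849366592e-05, 0.00015164782181395655,
         -0.042770691682211884]

p = np.array([0.1, 0.05, 1.0])  # arbitrary test point, 1 m in front
diff = np.linalg.norm(project_plumb_bob(K_ros, D_ros, p)
                      - project_plumb_bob(K_cal, D_cal, p))
print(f"pixel difference: {diff:.2f} px")
```

For a point near the image center the two models agree to within a few pixels, consistent with the "mostly the same" conclusion; the gap grows toward the corners, where the radial terms dominate.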
Hi @yamamotomas ,
In conclusion, the ROS camera_info and the calibration results are mostly the same, so I'm going to leave them as they are for a while.
Right, as we expected, that should not be the problem.
About the new bagfile: I created a dataset and looked into the images, and the desynchronization appears to be minimal.
At most I've seen a couple of hundred milliseconds of difference, so this bagfile is quite nice. Let me see how the calibration runs on this.
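As an aside, one quick way to bound that desynchronization is to pair each image stamp from one topic with the nearest stamp from the other and take the worst gap. The sketch below uses made-up timestamps; a real check would read the header stamps out of the bagfile instead:

```python
import bisect

def max_desync(stamps_a, stamps_b):
    """Worst-case gap (seconds) between each stamp in A and its nearest stamp in B."""
    stamps_b = sorted(stamps_b)
    worst = 0.0
    for t in stamps_a:
        i = bisect.bisect_left(stamps_b, t)
        # nearest neighbour is either the stamp just before or just after t
        candidates = stamps_b[max(i - 1, 0):i + 1]
        worst = max(worst, min(abs(t - c) for c in candidates))
    return worst

# hypothetical stamps for two ~30 Hz camera topics with some jitter
cam1 = [0.000, 0.033, 0.066, 0.100]
cam2 = [0.010, 0.045, 0.080, 0.105]
print(max_desync(cam1, cam2))  # worst pairing gap in seconds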
Hi @danifpdra ,
can you please share the mmtbot datasets we used for the ATOM paper? I wanted to compare against this system to try to find what's wrong.
Using a single collection and both cameras, there is a strange solution where the system places one camera on top of the other ... This is the result of the optimization:
This is the tf tree
So the way the links are organized in the nrp_robot is strange:
In particular, camera_links usually have x pointing forward; in this case they follow the optical convention, with z pointing forward. But even stranger, the positions of link1 and link2 are on top of each other, not side by side.
In any case this should not be a problem for ATOM. Will keep searching...
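On the frame conventions mentioned above: in ROS, body-type frames such as camera_link have x pointing forward, while optical frames have z pointing forward along the optical axis, and the fixed rotation between them is the usual rpy = (-π/2, 0, -π/2). A small sketch (my own check, not from the thread) verifying that this rotation maps the optical axis onto body x:

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# URDF applies rpy as fixed-axis rotations: R = Rz(yaw) @ Ry(pitch) @ Rx(roll)
roll, pitch, yaw = -np.pi / 2, 0.0, -np.pi / 2
R = rot_z(yaw) @ rot_x(roll)  # pitch is zero, so Ry(0) = identity

optical_z = np.array([0.0, 0.0, 1.0])  # optical axis (z forward)
print(R @ optical_z)  # optical z expressed in the body frame: [1, 0, 0]
```

The optical z lands on body x (forward), optical x on body -y (right), and optical y on body -z (down), which is the standard ROS body/optical relationship.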
Hi @masakiyamamotobb ,
At last, I found the problem! In the config.yaml, the pattern is set to fixed.
Indeed, the pattern itself is not moving. But you are moving the cameras instead of the pattern, and that is equivalent to having the sensors fixed and a moving pattern, whose pose is different for every collection. So you must set the pattern to dynamic.
We use fixed patterns when the sensor moves, e.g., in a hand-eye calibration.
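For reference, the fix amounts to flipping the pattern's fixed flag in config.yaml. A sketch of the relevant fragment, with key names from my recollection of ATOM's config layout (they may differ slightly in your version):

```yaml
calibration_pattern:
  # The pattern is static in the world, but the cameras move, so relative
  # to the sensors the pattern pose changes every collection; for this
  # setup it must therefore not be treated as fixed.
  fixed: false
```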
Now it calibrates well for several collections:
+------------+--------------+--------------+
| Collection | camera1 (px) | camera2 (px) |
+------------+--------------+--------------+
| 000 | 0.6360 | 0.5058 |
| 001 | 0.3268 | 0.6184 |
| 002 | 0.4101 | 0.6119 |
| 003 | 0.6498 | 0.5225 |
| Averages | 0.5057 | 0.5646 |
+------------+--------------+--------------+
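As a quick sanity check on the table, the Averages row is simply the per-camera mean of the per-collection errors:

```python
# per-collection reprojection errors (px) from the table above
cam1 = [0.6360, 0.3268, 0.4101, 0.6498]
cam2 = [0.5058, 0.6184, 0.6119, 0.5225]

avg1 = sum(cam1) / len(cam1)
avg2 = sum(cam2) / len(cam2)
print(f"{avg1:.4f} {avg2:.4f}")  # ≈ 0.5057 and ≈ 0.5646, matching the Averages row
```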
This was a hard one, because the inconsistency between collections first led me to think it was a synchronization problem, and then some strange bug.
Very happy to know ATOM is still up and running.
I am committing the changes to the nrp git repo.
Thanks @danifpdra .
@miguelriemoliveira Many thanks for your efforts! I have also reproduced the same result. I'll start applying ATOM to our robot system! https://www.tandfonline.com/doi/full/10.1080/01691864.2022.2109429
+------------+--------------+--------------+
| Collection | camera1 (px) | camera2 (px) |
+------------+--------------+--------------+
| 000 | 0.2262 | 0.2083 |
| 001 | 0.2349 | 0.2753 |
| 002 | 0.4296 | 0.3468 |
| 003 | 0.2597 | 0.2911 |
| 004 | 0.2441 | 0.2161 |
| Averages | 0.2789 | 0.2675 |
+------------+--------------+--------------+
Ok. I will close this issue.
If you need additional help, we have a system in place where we give two workshops on ATOM, one theoretical and the other more practical, and then provide support in calibrating your robotic system. If you are interested, let me know.
Best, Miguel
In my test to check whether I can use the ATOM framework, I've got stuck on an error. I would appreciate it very much if you could point out any of my mistakes.
This is the simplest test I can think of; its URDF and summary file are attached.
summary.pdf two_cameras.urdf.txt