Hi @masakiyamamotobb ,
thanks for trying out ATOM. I will try to help. It would be very helpful if you could share the GitHub repo with the code. If you want to keep it private, you can add me to it. That way it is much easier to test from my side. To run on my side, I would also need you to share the dataset produced by the data collector.
About the issue, it seems that there is a problem with the dataset. Luckily, ATOM produces the calibrated json before the xacro file, so if you share the calibrated json (should be called atom.json or similar), I can take a look at it.
Thanks, @miguelriemoliveira. Can you see our environment here? https://gitlab.com/naripa/nrp_calibration_atom If it's OK, I'm going to send you the data files via Google Drive.
Hi @masakiyamamotobb ,
yes I can. I see a Dockerfile, which I normally do not use. Can you please give me some hints on how to run it?
Thanks
Here is the Google Drive folder: https://drive.google.com/drive/folders/1sXJV7O6BmLuAkpvvhrtd_cNSoVVNyr5y?usp=sharing
`my_calib` is the folder of my calibration test, and `data_2023-02-10-18-39-26.bag` is the rosbag file I used.
Thanks.
Can you please give me some hints on how to run it?
As I don't have much knowledge of Docker (I'm just a user), I've asked my colleague to comment on this.
OK, thanks.
@miguelriemoliveira He's added a Quick Start. Does this work for you?
My routine to test ATOM is
Thanks. I will give it a try and get back to you.
Hi @masakiyamamotobb ,
I was testing and could not make it work. There were several problems with the setup, but I think I solved them.
Now, when running `BUILD-DOCKER-IMAGE.bash`, I get these issues:
nrp_calibration_atom git:(master) ✗ ./BUILD-DOCKER-IMAGE.bash
./BUILD-DOCKER-IMAGE.bash: DOCKER_PROJECT=mike_nrp_calibration_atom
./BUILD-DOCKER-IMAGE.bash: DOCKER_CONTAINER=mike_nrp_calibration_atom_nrp_1
permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/containers/json?all=1&filters=%7B%22name%22%3A%7B%22mike_nrp_calibration_atom_nrp_1%22%3Atrue%7D%7D": dial unix /var/run/docker.sock: connect: permission denied
Dockerfile version to be pulled or built: bc0c33f
commit bc0c33f1d963d20c5912c20dcbba4c0030ab8632
Author: Gustavo Garcia garcia-g@em.ci.ritsumei.ac.jp
Date: Sat Feb 18 09:27:11 2023 +0900
    initial commit
- Login into 'registry.gitlab.com'. Enter your GitLab.com credentials below:
Username: miguelriemoliveira
Password:
permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/auth": dial unix /var/run/docker.sock: connect: permission denied
WARNING: The DOCKER_RUNTIME variable is not set. Defaulting to a blank string.
Pulling nrp ...
ERROR: for nrp ('Connection aborted.', PermissionError(13, 'Permission denied'))
ERROR: Couldn't connect to Docker daemon at http+docker://localhost - is it running?
If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable.
WARNING: The DOCKER_RUNTIME variable is not set. Defaulting to a blank string.
Building nrp
ERROR: Couldn't connect to Docker daemon at http+docker://localhost - is it running?
If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable.
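(As a side note, this kind of permission denied on /var/run/docker.sock usually just means the user running the script is not in the docker group. On a standard Linux Docker installation the fix is typically something like the sketch below; this is a generic Docker note, not something specific to your scripts.)

```bash
# Add the current user to the docker group so the daemon socket becomes accessible
sudo usermod -aG docker $USER
# Activate the new group in the current shell (or simply log out and back in)
newgrp docker
# Quick check: this should now run without "permission denied"
docker ps
```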
Can't you just share the repository where your nrp calibration package is? That way it would be easier for me to test from my side ...
Well, give me a couple more days. I found your repo in the Drive (you shared it), so perhaps I can bypass Docker altogether.
Hi @masakiyamamotobb ,
I did not forget about this. The good news is that I am now able to replicate your bug. I will try to solve it later.
The same error occurs for the dataset.json standard dataset.
Hi @masakiyamamotobb ,
I think I found the problem. Your `two_cameras.xacro` has some joints that do not contain the origin property.
Look here:
(...)
<joint name="world_link2_joint" type="fixed">
<origin xyz="-0.1 -0.2 0.3" rpy="0 0 1.57"/>
<parent link="world"/>
<child link="link2"/>
</joint>
<joint name="link1_camera1_joint" type="fixed">
<parent link="link1"/>
<child link="camera1_link"/>
</joint>
(...)
So when the code tries to write the xacro origin property, it breaks. I am not sure if ATOM should be prepared to handle this case or if that origin property should always be there. I have opened #559 to track this and will work on it.
In any case, you can proceed right away if you fix your xacro.
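For reference, a minimal fix would be to give that joint an explicit origin as well; the pose below is only a placeholder, so put in your actual initial estimate of where camera1 is mounted relative to link1:

```xml
<joint name="link1_camera1_joint" type="fixed">
  <!-- placeholder pose: replace with your initial estimate of camera1 w.r.t. link1 -->
  <origin xyz="0 0 0" rpy="0 0 0"/>
  <parent link="link1"/>
  <child link="camera1_link"/>
</joint>
```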
Hi again @masakiyamamotobb ,
BTW, I noticed the values in your table are very high.
+------------+--------------+--------------+
| Collection | camera1 (px) | camera2 (px) |
+------------+--------------+--------------+
| 000 | 24.1501 | 34.3111 |
| 001 | --- | 69.0258 |
| 002 | 22.3095 | 89.6992 |
| 003 | 11.9355 | 29.1633 |
| 004 | 94.8835 | 119.2347 |
| 005 | 30.6875 | 54.6006 |
| 006 | 75.3939 | 65.1903 |
| Averages | 43.2267 | 65.8893 |
+------------+--------------+--------------+
We should end up with accuracies under 1 pixel. I think there are other problems with your calibration. @JorgeFernandes-Git or @Kazadhum, can you please post the link to the two-camera ATOM calibration example you developed? That should help @masakiyamamotobb.
Hi @miguelriemoliveira and @masakiyamamotobb.
At first glance, this calibration looks similar to ours. Here is the repo: https://github.com/JorgeFernandes-Git/t2rgb
If you need any help, just ask.
Thanks @JorgeFernandes-Git
@miguelriemoliveira The error disappeared. Thank you so much.
We should end up with accuracies under 1 pixel. I think there are other problems with your calibration.
I'm trying to find the reason for this.
@JorgeFernandes-Git I have checked your repo and found a difference in the config.yml file. My case: `anchored_sensor: "camera1"`. Your case: `anchored_sensor: ""`.
Does this have a big impact on the calibration results?
Hi @masakiyamamotobb
It is not mandatory to use an anchored sensor; it only makes sense when that sensor's location is known and its pose must stay constant throughout the calibration process.
As far as I know, using anchored sensors with ATOM has been observed to result in inferior calibration results.
Does this have a big impact on the calibration results?
Not really. It can produce different results, but not differences of the magnitude of your errors.
@miguelriemoliveira @JorgeFernandes-Git Thank you for your comments on anchored_sensor. This time I checked the differences between the URDF models, but I still cannot find a clear difference.
Hi @masakiyamamotobb ,
I do not think the large calibration errors you have are due to the structure of the tf tree.
I think it has to do with your dataset. Did you check the dataset manually using the dataset playback?
Are the detections of the pattern corners accurate?
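In case it helps, my usual way of checking a dataset is something along these lines (the package name and dataset path are only examples based on your setup; please check the ATOM documentation and the script's --help for the exact arguments of your version):

```bash
# Terminal 1: bring up the playback environment of your calibration package
roslaunch my_calib dataset_playback.launch

# Terminal 2: step through the collections and inspect the labeled pattern corners
rosrun atom_calibration dataset_playback -json /path/to/your/dataset/dataset.json
```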
Thanks @miguelriemoliveira,
I still have some problems with `dataset_playback`, so I cannot check the pattern detection results there yet. But I can see them in `collect_data.launch`. The detections seem reasonably stable throughout the rosbag recording.
One thing I noticed is that the camera resolution was set low without me realizing it. Could this cause any problem?
Hi @masakiyamamotobb ,
The detections seem reasonably stable throughout the rosbag recording.
Right, they seem fine.
One thing I noticed is that the camera resolution was set low without me realizing it. Could this cause any problem?
It might, if the images in your bag file have a different resolution... Can you run a `rostopic echo` and post the result for both cameras?
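Something like this should do (the topic names below are a guess based on your frame_ids, so adjust them to your setup):

```bash
# Print a single Image message per camera without dumping the pixel array
rostopic echo --noarr -n 1 /camera1/color/image_raw
rostopic echo --noarr -n 1 /camera2/color/image_raw

# The resolution reported in camera_info should match the images
rostopic echo -n 1 /camera1/color/camera_info
rostopic echo -n 1 /camera2/color/camera_info
```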
Thanks again, @miguelriemoliveira
For camera1,
seq: 321
stamp:
secs: 1676021968
nsecs: 719537258
frame_id: "camera1_color_optical_frame"
height: 480
width: 640
encoding: "rgb8"
is_bigendian: 0
step: 1920
data: "<array type: uint8, length: 921600>"
For camera2,
seq: 294
stamp:
secs: 1676021967
nsecs: 815694094
frame_id: "camera2_color_optical_frame"
height: 480
width: 640
encoding: "rgb8"
is_bigendian: 0
step: 1920
data: "<array type: uint8, length: 921600>"
The image resolution is consistent with the camera_info messages, so although the cameras could have a higher resolution, I do not think that is what is causing those very large errors.
I will try it from my side and get back to you.
Hi @masakiyamamotobb ,
So I went to have a look at the problem of the large reprojection errors. In the images below, the squares are the ground-truth detections of the pattern corners, the crosses are the initial projections of the corners, and the dots are the projections after the optimization. A good calibration will put the dots inside the squares.
My first guess is that the collections are not consistent with each other. This can occur due to a lack of time synchronization between the sensors combined with a fast-moving pattern.
So I used the `-csf` flag to calibrate using only collection 000, i.e.:
rosrun atom_calibration calibrate -json dataset.json -v -rv -si -uic -csf "lambda x: x in ['000']"
Only collection 0
-----------------------------
Optimization finished in 73.81124 secs: `xtol` termination condition is satisfied.
Errors per collection (anchored sensor, max error per sensor, not detected as "---")
+------------+--------------+--------------+
| Collection | camera1 (px) | camera2 (px) |
+------------+--------------+--------------+
| 000 | 0.7051 | 0.6362 |
| Averages | 0.7051 | 0.6362 |
+------------+--------------+--------------+
Only collection 5
Optimization finished in 69.20231 secs: `xtol` termination condition is satisfied.
Errors per collection (anchored sensor, max error per sensor, not detected as "---")
+------------+--------------+--------------+
| Collection | camera1 (px) | camera2 (px) |
+------------+--------------+--------------+
| 005 | 2.0374 | 2.0363 |
| Averages | 2.0374 | 2.0363 |
+------------+--------------+--------------+
My conclusion is that your collections are somehow inconsistent. This is normal when the pattern is moved fast and the cameras are not synced by a hardware trigger. As a result, the images from the two sensors in a collection were captured at slightly different time instants. Add a moving pattern to that, and the optimal solution (the transformation between sensors) will be different for each collection.
The errors of around 25 pixels that you got are what ATOM can achieve when trying to "find the best of both worlds".
We have a methodology to avoid this problem. You should carefully read this part of the manual:
https://lardemua.github.io/atom_documentation/procedures/#collect-data
and try to take a new bagfile and a new dataset.
Also, perhaps these issues will help you get a feeling for the common problems people face when calibrating their systems with ATOM:
https://github.com/lardemua/atom/issues/539 https://github.com/lardemua/atom/issues/498
Thanks for your efforts, @miguelriemoliveira !
While trying to reproduce your result, my PC broke. I'm now preparing another environment on another PC.
Here I found a strange phenomenon in `roslaunch my_calib collect_data.launch`: it plays back the rosbag file and I can save collections, but the images are not saved in the output folder. What kind of problem could cause this?
collect_data.launch
<?xml version="1.0"?>
<!--
█████╗ ████████╗ ██████╗ ███╗ ███╗
██╔══██╗╚══██╔══╝██╔═══██╗████╗ ████║
███████║ ██║ ██║ ██║██╔████╔██║
██╔══██║ ██║ ██║ ██║██║╚██╔╝██║
__ ██║ ██║ ██║ ╚██████╔╝██║ ╚═╝ ██║ _
/ _| ╚═╝ ╚═╝ ╚═╝ ╚═════╝ ╚═╝ ╚═╝ | |
| |_ _ __ __ _ _ __ ___ _____ _____ _ __| | __
| _| '__/ _` | '_ ` _ \ / _ \ \ /\ / / _ \| '__| |/ /
| | | | | (_| | | | | | | __/\ V V / (_) | | | <
|_| |_| \__,_|_| |_| |_|\___| \_/\_/ \___/|_| |_|\_\
https://github.com/lardemua/atom
-->
<!-- WARNING WARNING WARNING WARNING auto-generated file!! -->
<!-- Only modify this file if you know what you are doing! -->
<!--
@file collect_data.launch Runs bringup collecting data from a bag file.
@arg output_folder Directory where the data will be stored.
@arg overwrite If true, it will overwrite any existing output folder.
@arg marker_size The size of the interaction marker that is used to trigger a data save.
@arg bag_file Absolute path to the playing bag.
default: /root/nrp/data_2023-02-10-18-39-26.bag
@arg bag_start Playback starting time (in seconds). default: 0.0
@arg bag_rate Playback rate. default: 1.0
-->
<launch>
<!-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% -->
<!-- Parameters-->
<arg name="output_folder" default="$(find my_calib)/output"/>
<!-- folder of the output dataset -->
<arg name="overwrite" default="true"/>
<!-- overwrite output folder if it exists -->
<arg name="marker_size" default="0.5"/>
<arg name="config_file" default="$(find my_calib)/calibration/config.yml"/>
<arg name="rviz_file" default="$(find my_calib)/rviz/collect_data.rviz"/>
<arg name="description_file" default="$(find my_calib)/urdf/initial_estimate.urdf.xacro"/>
<!-- arguments to be passed onto playbag.launch -->
<arg name="bag_file" default="/root/nrp/data_2023-02-10-18-39-26.bag"/>
<arg name="bag_start" default="0"/>
<arg name="bag_rate" default="1"/>
<arg name="ssl" default="lambda sensor_name: False"/>
<!-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% -->
<!-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% -->
<!-- Call play bag launch file -->
<include file="$(find my_calib)/launch/playbag.launch">
<arg name="rviz_file" value="$(arg rviz_file)"/>
<arg name="bag_file" value="$(arg bag_file)"/>
<arg name="bag_start" value="$(arg bag_start)"/>
<arg name="bag_rate" value="$(arg bag_rate)"/>
</include>
<!-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% -->
<!-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% -->
<!-- Start data collector node -->
<group if="$(arg overwrite)">
<node name="collect_data" pkg="atom_calibration" type="collect_data"
args="-s $(arg marker_size) -o $(arg output_folder) -c $(arg config_file) -ssl '$(arg ssl)' --overwrite" required="true"
output="screen"/>
</group>
<group unless="$(arg overwrite)">
<node name="collect_data" pkg="atom_calibration" type="collect_data"
args="-s $(arg marker_size) -o $(arg output_folder) -c $(arg config_file) -ssl '$(arg ssl)'" required="true"
output="screen"/>
</group>
<!-- %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% -->
</launch>
It plays back the rosbag file and I can save collections, but the images are not saved in the output folder. What kind of problem could cause this?
Can you post the prints from the terminal? How do you run the launch file?
Also, I am not really sure what your code is anymore (you might have changed it).
Can I create a GitHub repo for our tests? That way we can easily synchronize.
Can I create a GitHub repo for our tests? That way we can easily synchronize.
Thank you so much.
That's a great idea.
Can you see the `my_calib_test` branch in https://gitlab.com/naripa/nrp_calibration_atom ?
Hi @masakiyamamotobb and @yamamotomas ,
I could not get the Docker setup from the GitLab repo running, so I created a GitHub repo containing the source code for the nrp_calibration package. I also included the dataset you gave me.
miguelriemoliveira/atom_calibration_nrp.git
I will use this for testing ...
Hi @masakiyamamotobb ,
so I was trying to produce a dataset but could not, because the bagfile you gave me is too short: only about 1.5 secs.
Can you give me a longer bag file, 30 secs to a minute? With that I should be able to produce a dataset.
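For reference, something along these lines should be enough (the topic names are a guess based on your frame_ids; record whichever image, camera_info and TF topics your system actually publishes):

```bash
# Record about one minute of both cameras plus the TF tree
rosbag record --duration=60 -O data_two_cameras.bag \
    /camera1/color/image_raw /camera1/color/camera_info \
    /camera2/color/image_raw /camera2/color/camera_info \
    /tf /tf_static
```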
@miguelriemoliveira Sorry for the trouble. Please give me some time until I can retrieve a longer bag file from my old PC.
Sure.
By the way, we have a protocol in place to assist companies in calibrating their robotic systems. The idea is to set up a 1- or 2-month consulting project with online sessions in which we guide you through the calibration of your system in ATOM, in both real and simulated cases. This has an accelerating effect since we have extensive know-how in the calibration of many robotic systems.
If you are interested let me know and we can discuss it in detail.
Best regards, Miguel
This has an accelerating effect since we have extensive know-how in the calibration of many robotic systems.
Thanks.
I'm currently working on a university project. If it is successful, I'm planning to introduce ATOM in company projects!
As for the rosbag file, I could not retrieve it, so here is a brand new recording: https://drive.google.com/file/d/13wo9utlJer5mWi7fqUFIzwLE0nxHrcne/view?usp=share_link
Hi @masakiyamamotobb ,
So I was trying out the data collection using your bagfile and it works fine.
First I had to reconfigure the calibration package because I changed the bagfile in the calibration/config.yaml.
rosrun nrp_calibration configure
Then I ran the collect data script and it worked fine:
roslaunch nrp_calibration collect_data.launch output_folder:=/home/mike/workspaces/catkin_ws/src/calibration/robots/atom_calibration_nrp/datasets/dataset2 overwrite:=true
I saved some collections and got back a normal dataset as expected.
I'm not sure why it is not working on your side... How do you launch the data collector? And what is the terminal output once you press the "save collection" option in RViz?
This is a "normal" terminal output in that case:
Save collection selected
Locked all labelers
reference_time: 1681461657.668572152 max_time: 1681461657.797492432
durations = [0.12273880299999995, 0.12892028099999997] max_duration = 0.12892028099999997
Times: camera1: 1681461657.545833349 camera2: 1681461657.539651871
[INFO] [1681545821.588713, 1681461657.668572]: Max duration between msgs in collection is 0.006181478
Collected transforms for time 1681461657.797492432
Collecting data from camera1: sensor_key
Collecting data from camera2: sensor_key
output_folder is: /home/mike/workspaces/catkin_ws/src/calibration/robots/atom_calibration_nrp/datasets/dataset2
Saved file /home/mike/workspaces/catkin_ws/src/calibration/robots/atom_calibration_nrp/datasets/dataset2/camera1_001.jpg.
Saved file /home/mike/workspaces/catkin_ws/src/calibration/robots/atom_calibration_nrp/datasets/dataset2/camera2_001.jpg.
Saved json output file to /home/mike/workspaces/catkin_ws/src/calibration/robots/atom_calibration_nrp/datasets/dataset2/dataset.json.
Thank you very much @miguelriemoliveira
I have cloned your repository and confirmed that data collection works all right. As you have pointed out, my first rosbag file was too short in its duration, and that seems to have caused a problem.
Next I tried to calibrate with your dataset. Again big pixel errors show up.
I'll try to calibrate my cameras after fixing my working environment.
root@nrp:~/nrp# rosrun atom_calibration calibrate -json /root/nrp/catkin_ws/src/atom_calibration_nrp/datasets/dataset2/dataset.json
Skipped loading images and point clouds for collections: [].
Deleted collections: [] because these are incomplete. If you want to use them set the use_incomplete_collections flag.
Deleted collections: []: at least one detection by a camera should be present.
After filtering, will use 3 collections: ['000', '001', '002']
Loaded dataset containing 2 sensors and 3 collections.
Selected collection key is 000
Initializing optimizer...
Anchored sensor is camera1
Creating parameters ...
Creating residuals ...
RNG Seed: 5944334777327169532
Computing sparse matrix ...
Initializing optimization ...
One optimizer iteration has 13 function calls.
Starting optimization ...
Iteration Total nfev Cost Cost reduction Step norm Optimality
0 1 8.6988e+04 4.68e+05
1 2 8.1789e+04 5.20e+03 8.81e-02 4.36e+05
2 3 8.0893e+04 8.96e+02 2.33e-02 4.29e+05
>>>>> cut >>>>>
56 57 1.0199e+04 1.21e+00 7.68e-03 7.78e+02
57 58 1.0198e+04 1.06e+00 9.96e-03 2.32e+03
58 59 1.0197e+04 1.00e+00 5.93e-03 7.71e+02
`ftol` termination condition is satisfied.
Function evaluations 59, initial cost 8.6988e+04, final cost 1.0197e+04, first-order optimality 7.71e+02.
-----------------------------
Optimization finished in 2.31235 secs: `ftol` termination condition is satisfied.
Errors per collection (anchored sensor, max error per sensor, not detected as "---")
+------------+--------------+--------------+
| Collection | camera1 (px) | camera2 (px) |
+------------+--------------+--------------+
| 000 | 29.7587 | 10.1002 |
| 001 | 87.7173 | 110.1753 |
| 002 | 60.1600 | 101.1494 |
| Averages | 59.2120 | 73.8083 |
+------------+--------------+--------------+
output_folder is: /root/nrp/catkin_ws/src/atom_calibration_nrp/datasets/dataset2
Saved json output file to /root/nrp/catkin_ws/src/atom_calibration_nrp/datasets/dataset2/atom_calibration.json.
Optimized xacro saved to /root/nrp/catkin_ws/src/atom_calibration_nrp/nrp_calibration/urdf/optimized/optimized_2023_04_18-10_07_58_AM.urdf.xacro . You can use it as a ROS robot_description.
Hi @masakiyamamotobb ,
Next I tried to calibrate with your dataset. Again big pixel errors show up.
But I was not worried about making a good dataset; I just wanted to see if creating a dataset would work. I will now try to create a proper dataset and get back to you.
I created a new dataset3, being careful about the time at which each collection was captured.
Dataset contains 2 sensors: ['camera1', 'camera2']
Complete collections (6):['000', '001', '002', '003', '004', '005']
Incomplete collections (0):[]
Sensor camera1 has 4 complete detections: ['001', '002', '003', '004']
Sensor camera1 has 2 partial detections: ['000', '005']
Sensor camera2 has 6 complete detections: ['000', '001', '002', '003', '004', '005']
Sensor camera2 has 0 partial detections: []
+------------+-------------+----------+----------+
| Collection | is complete | camera1 | camera2 |
+------------+-------------+----------+----------+
| 000 | yes | partial | detected |
| 001 | yes | detected | detected |
| 002 | yes | detected | detected |
| 003 | yes | detected | detected |
| 004 | yes | detected | detected |
| 005 | yes | partial | detected |
+------------+-------------+----------+----------+
Hi again @masakiyamamotobb ,
I ran a calibration on dataset3 and got the following results:
Optimization finished in 77.40824 secs: `ftol` termination condition is satisfied.
Errors per collection (anchored sensor, max error per sensor, not detected as "---")
+------------+--------------+--------------+
| Collection | camera1 (px) | camera2 (px) |
+------------+--------------+--------------+
| 000 | 161.7505 | 142.6608 |
| 001 | 22.6660 | 37.4999 |
| 002 | 100.1628 | 155.0932 |
| 003 | 47.7767 | 56.2899 |
| 004 | 143.3030 | 150.5997 |
| 005 | 161.7538 | 142.6939 |
| Averages | 106.2355 | 114.1396 |
+------------+--------------+--------------+
So we still have large errors. I will now check the initial guess, which seems to be very bad. I will get back to you...
I created a new first guess for the initial poses of the sensors. The first guess from the xacro was very far from the correct place.
I am assuming from the images that camera2 was on the left of camera1. Is that right, @masakiyamamotobb?
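(In case it is useful on your side as well: the first guess can be adjusted interactively in RViz. The launch file name below follows the usual naming of ATOM-generated calibration packages, so adapt it to your package if it differs.)

```bash
# Drag the sensors to roughly correct poses in RViz, then save the initial estimate
roslaunch nrp_calibration set_initial_estimate.launch
```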
Hi @masakiyamamotobb ,
I created a new dataset with this better initial estimate, but the calibration results did not improve. I'm not sure why...
I will go back to this some time this week.
Some questions:
Did the values you inserted in the calibration.yaml file make sense to you?
# It can be a scalar (same border in x and y directions), or it can be {'x': ..., 'y': ,,,}
border_size: { 'x': 0.05, 'y': 0.05 }
# The number of corners the pattern has in the X and Y dimensions.
# Note: The charuco detector uses the number of squares per dimension in its detector.
# Internally we add a +1 to Y and X dimensions to account for that.
# Therefore, the number of corners should be used even for the charuco pattern.
dimension: { "x": 4, "y": 4 }
# The length of the square edge.
size: 0.02
# The length of the charuco inner marker.
inner_size: 0.016
Did you calibrate the intrinsics of the cameras in ROS?
This is a photo of the two cameras. Camera1 is on the right in the picture.
Thanks again, @miguelriemoliveira
Did the values you inserted in the calibration.yaml file make sense to you?
`border_size` doesn't make sense, but `dimension`, `size` and `inner_size` make sense.
Did you calibrate the intrinsics of the cameras in ROS?
That's what I'm not sure about. These cameras come from other projects and I cannot trace their calibration history. I have the calib.io calibrator software and their board, so I'm planning to calibrate the cameras ASAP. (Unfortunately, I'm still fixing my PC; I hope I can do it by the end of this week.)
Thanks, so that means that camera1 is on the left of camera 2 indeed.
I am at the same point: I can calibrate with subpixel accuracy for a single collection, but when two or more collections are used the errors jump to 50 or 80 pixels.
I know you already recorded a new bagfile, and that you were careful to stop for a couple of secs every time you moved the pattern. But I am not sure about the magnitude of the possible de-synchronization between the cameras, so I would ask you to take a third bagfile where you stop for 5 or 6 secs every time you move the pattern.
Then I can test again being sure that the de-synchronization is not a problem.
That's what I'm not sure about. These cameras come from other projects and I cannot trace their calibration history. I have the calib.io calibrator software and their board, so I'm planning to calibrate the cameras ASAP. (Unfortunately, I'm still fixing my PC; I hope I can do it by the end of this week.)
I took a look at your camera_info messages and they look fine, so I do not think the problem comes from there. In any case, if you want, you could calibrate the RGB intrinsics of the cameras using
http://wiki.ros.org/camera_calibration/Tutorials/MonocularCalibration
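For camera1, for example, it would be something like the command below (the board parameters and topic names are placeholders; the tutorial uses a plain checkerboard, with --size given as inner corners and --square as the square edge length in meters):

```bash
rosrun camera_calibration cameracalibrator.py \
    --size 8x6 --square 0.025 \
    image:=/camera1/color/image_raw camera:=/camera1/color
```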
I am running a new calibration for a single collection.
Although I get small errors, the solution does not look right: the cameras are not side by side as they should be.
I will try to investigate this later.
But I am not sure about the magnitude of the possible de-synchronization between the cameras, so I would ask you to take a third bagfile where you stop for 5 or 6 secs every time you move the pattern.
I have taken another bagfile. This time I've also included a stopwatch in the scene so that the time difference is visible.
data_2023-04-21-05-51-20.bag in https://drive.google.com/file/d/13wo9utlJer5mWi7fqUFIzwLE0nxHrcne/view?usp=share_link
Thanks.
In my test to check whether I can use the ATOM framework, I've run into an error. I would appreciate it very much if you could point out any mistakes I made.
This is the simplest test I can think of; its URDF and a summary file are attached.
summary.pdf two_cameras.urdf.txt