CaoKha / Synchronization

This project was done while I was working at Trimble in Nantes as a trainee. My task was to validate the timestamps of data received from different sensors on different hardware platforms such as the Raspberry Pi, UP board, and UDOO x86, which was later used on a mobile robot for navigation. The goal is to synchronize data coming from a camera and an IMU.

Question about the time offset between Imu and Camera #2

Open lenardxu opened 2 years ago

lenardxu commented 2 years ago

Referring to your idea of a software implementation, I found that the time offset (delay) between the IMU and image message streams is significant compared to hardware synchronization. For example, when I use rosbag record to record both streams, the time offset at the beginning can approach 40 ms. That must affect downstream tasks, for example the calibration of a visual-inertial sensor. Have you also encountered this issue in your project? If so, do you have any idea how to overcome it?

CaoKha commented 2 years ago

What do you mean by "hardware synchronization"? Do you mean treating the IMU and camera as one entity and using only one processor chip to drive both the IMU and the camera synchronously?

About your problem: in my opinion, using a parallel computing unit such as a GPU or another processor might solve it. You could also check your processor's core count to make sure you have enough cores for std::thread.

Another solution is to design an "electrical control system" that captures both the camera and IMU GPIO signals, and to write your own interface to communicate with ROS. This is what I did back then. However, you also have to deal with the data transmission overhead between the main processor (which executes your "downstream task") and the "electrical control system".

lenardxu commented 2 years ago


Thanks for your prompt reply!

About "Hardware Synchronization" I mean using the Imu's time stamps as cue (GPIO signals) to hardware trigger Camera's capturing action, which is just equivalent to the "electrical control system" as you mentioned. However, in my case, that hardware trigger is not supported by my camera (OAK-D) on harware level.

As you proposed, I did use multithreading for sending the IMU messages (in an IMU worker thread) and the image messages (in a camera worker thread), as well as sending the interpolated IMU stream (in the main thread). This turns out to be satisfying in terms of the time offset between each instant an image (visual cue) is sent and the corresponding instant the interpolated IMU message is sent. However, the unsolved problem is that the instants when the first image and IMU messages are published at the beginning of the sequence still show a visible temporal offset, as I mentioned before. The reason is mostly the publishers' mechanism: messages are only published once corresponding subscribers exist, which applies to both topics.

To address that, before the formal publishing loop I tried checking in a loop, at the same frequency for both the IMU and image topics, whether the subscribers have shown up, so that publishing on both topics can be activated at the same instant up to the temporal difference between the two threads. However, the time offset at the beginning of the sequence is still not improved much. With that check deliberately running at the same frequency, I can tell that one cause of the offset is related to subscription, i.e., the ROS master registers the subscribers for the IMU and image topics with a time offset. But that offset should be very small, right?

So, I am still stuck with this problem. Do you have any idea about that?
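For illustration, a minimal sketch of the subscriber gating described above, using roscpp's getNumSubscribers(); topic names, queue sizes, and the polling rate are assumptions, not code from this repository:

```cpp
// Sketch: block until BOTH topics have a subscriber, so the first publish
// on each topic can happen in the same loop iteration.
#include <ros/ros.h>
#include <sensor_msgs/Image.h>
#include <sensor_msgs/Imu.h>

int main(int argc, char** argv) {
  ros::init(argc, argv, "sync_publisher");
  ros::NodeHandle nh;
  ros::Publisher imu_pub = nh.advertise<sensor_msgs::Imu>("imu", 100);
  ros::Publisher img_pub = nh.advertise<sensor_msgs::Image>("image", 10);

  ros::Rate poll(200);  // poll both topics at the same frequency
  while (ros::ok() && (imu_pub.getNumSubscribers() == 0 ||
                       img_pub.getNumSubscribers() == 0)) {
    poll.sleep();
  }

  // ...start the IMU / camera worker threads and begin publishing here...
  return 0;
}
```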

CaoKha commented 2 years ago

Have you tried visualizing the GPIO signals with an oscilloscope to investigate whether ROS is causing any overhead? Use an analog oscilloscope if you want more precision.

I had the same problem back then. The camera I used also did not support hardware triggering (I think there was a reason for it); there was no way we could control the camera with an electrical pulse from an outside source.

I believe the ROS timestamp is recorded at the time the signal is received, not at the time the signal is sent. The camera manufacturer usually does some extra work (for example, adding information to confirm the message was sent successfully) before sending the timestamped message. That information is usually in their technical sheet or their interface library.

The 'visible temporal offset' you mentioned is, I believe: message packaging time + data transmission time + message unpacking time.

The solution really depends on your camera and IMU manufacturers. Ask them whether they implement some kind of timestamping on their own microcontroller, so you don't have to deal with the packaging + transmission + unpacking overhead.

Otherwise, you could also subtract the 'offset' as a way to 'calibrate' the camera timestamps.
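A minimal sketch of that compensation, assuming a constant offset measured offline; the value and names are placeholders:

```cpp
#include <ros/ros.h>
#include <sensor_msgs/Image.h>

// Offset measured offline, e.g. the ~40 ms observed at the start of the bag.
static const ros::Duration kMeasuredOffset(0.040);

// Shift the camera stamp back so it lines up with the IMU clock.
void compensate(sensor_msgs::Image& img) {
  img.header.stamp -= kMeasuredOffset;
}
```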

lenardxu commented 2 years ago


Sorry for the late response. About the testing tool, i.e., the oscilloscope: I currently have no access to one, so I cannot determine the potential overhead caused by ROS. About the timestamps as seen on the ROS subscriber side, I think you're right; I did find an unfixed delay on the subscriber's side, which should further depend on how the subscribers are implemented. And thanks for your detailed hint about the composition of the 'temporal offset'. Maybe later I'll try accessing the related data and using this formula for possible optimization. But I know that this temporal offset also depends on the ROS master's management of subscribers, which gives publishers temporally unfixed feedback when subscribers show up, so a hard-coded compensation may not work. Instead, I call getNumSubscribers() on both the image and IMU publishers, followed by a 'long enough' sleep, to make sure each publisher has actually waited for its subscriber to show up.

However, the solution above is still not enough, because I found a persistent problem: there are duplicate timestamps in the IMU messages, which I confirmed by checking them with rosbag and saving them locally. I suppose you might have encountered this problem too, since I followed your basic idea. After long debugging, I found that your program suffers from a data race between the IMU worker thread, which publishes (and writes) IMU data regularly, and the main thread, which publishes (and writes) the interpolated IMU data. That caused the problem. A better solution is to create a specific data structure enclosing the IMU data to deal with the data race.
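To make the race concrete, here is a minimal mutex-guarded wrapper for the shared IMU sample; all names are hypothetical, and this is not code from the repository. The lock-free alternative lenardxu proposes appears further down the thread.

```cpp
#include <mutex>

struct ImuSample {
  double stamp = 0.0;
  double gyro[3]  = {0.0, 0.0, 0.0};
  double accel[3] = {0.0, 0.0, 0.0};
};

// The IMU worker thread calls write(); the main (interpolation) thread
// calls read(). The mutex ensures neither thread sees a half-written sample.
class SharedImu {
 public:
  void write(const ImuSample& s) {
    std::lock_guard<std::mutex> lock(m_);
    sample_ = s;
  }
  ImuSample read() const {
    std::lock_guard<std::mutex> lock(m_);
    return sample_;
  }
 private:
  mutable std::mutex m_;
  ImuSample sample_;
};
```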

CaoKha commented 2 years ago

What is the data structure you are proposing? I believe the reason ROS returns the same timestamp value twice in succession is that the buffer is read much faster than it is written. Back then, my instructor required a real-time solution, so the 'delayed interpolation' method (doing the interpolation in another thread) was off the table (I still don't know why my instructor did not accept that solution). If real-time is not your priority, I think you could also apply a Taylor series or some 'smoothing' function to the interpolation output to reduce the noise caused by the data race, or just do the interpolation in another thread.
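For reference, a minimal sketch of the per-channel linear interpolation this thread revolves around: estimate an IMU channel value at image timestamp t from the two IMU samples bracketing it, then apply it to each gyro and accelerometer axis. Names are illustrative.

```cpp
// Lerp one IMU channel: (t0, v0) and (t1, v1) are the samples surrounding
// the image timestamp t, assuming t0 <= t <= t1 and t1 > t0.
double lerp_channel(double t0, double v0, double t1, double v1, double t) {
  const double alpha = (t - t0) / (t1 - t0);  // fraction of the way from t0 to t1
  return v0 + alpha * (v1 - v0);
}
```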

lenardxu commented 2 years ago

From my observation, that phenomenon is caused by both the IMU thread and the main thread writing data to the IMU message, each without knowing whether the other's write has finished: the 'data race'. It typically happens when the IMU thread writes to the IMU message preemptively, close to the instant the interpolation finishes. The specific data structure I'm thinking of starts from relaying objects (containing the IMU data) from a single "producer" thread to a single "consumer" thread without any locks. But it's still an untested idea. Speaking of real time, it is required in my case, so your way of interpolating is preferred. And I guess the number of threads in use should be limited, since I am using an RPi.
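A minimal sketch of that single-producer/single-consumer idea: a fixed-size lock-free ring buffer where only the IMU thread pushes and only the main thread pops. This is the standard SPSC pattern under those assumptions, not tested code from this project.

```cpp
#include <array>
#include <atomic>
#include <cstddef>

template <typename T, std::size_t N>
class SpscQueue {
 public:
  bool push(const T& item) {  // called only by the producer (IMU thread)
    const std::size_t head = head_.load(std::memory_order_relaxed);
    const std::size_t next = (head + 1) % N;
    if (next == tail_.load(std::memory_order_acquire)) return false;  // full
    buf_[head] = item;
    head_.store(next, std::memory_order_release);  // publish the slot
    return true;
  }
  bool pop(T& item) {  // called only by the consumer (main thread)
    const std::size_t tail = tail_.load(std::memory_order_relaxed);
    if (tail == head_.load(std::memory_order_acquire)) return false;  // empty
    item = buf_[tail];
    tail_.store((tail + 1) % N, std::memory_order_release);  // free the slot
    return true;
  }
 private:
  std::array<T, N> buf_{};
  std::atomic<std::size_t> head_{0};
  std::atomic<std::size_t> tail_{0};
};
```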

CaoKha commented 2 years ago

In that case, I think you could also try using raw pointers instead of std::string (to avoid heap allocation). https://godbolt.org/ is a good place to test performance.
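Illustrative only: contrasting a std::string parameter, which can heap-allocate per call, with a raw const char*. Note that very short strings may avoid allocation anyway via the small-string optimization; longer tags would allocate.

```cpp
#include <cstdio>
#include <string>

// May heap-allocate: a std::string is constructed for every call.
void handle_tag_alloc(const std::string& tag) { std::printf("%s\n", tag.c_str()); }

// No allocation: a raw pointer to the caller's data is passed through.
void handle_tag_raw(const char* tag) { std::printf("%s\n", tag); }

int main() {
  for (int i = 0; i < 3; ++i) {
    handle_tag_alloc("imu");  // temporary std::string per iteration
    handle_tag_raw("imu");    // pointer to the string literal, no allocation
  }
}
```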

lenardxu commented 2 years ago

I've never thought about that. Deserves a try! Thx!

lenardxu commented 2 years ago

I'd like to ask another question not directly related to this topic. Have you ever used a visual-inertial sensor, synchronized via this linear-interpolation implementation, for VI sensor calibration? For example, with the Kalibr tool (https://github.com/ethz-asl/kalibr)?