carnegierobotics / multisense_ros

ROS Wrapper for LibMultiSense

Methods to record the data using multisense S30 #92

Closed poornimajd closed 2 months ago

poornimajd commented 3 months ago

Hi,

I'm interested in conducting a comparative analysis of different depth modes for a specific scene. Specifically, I want to record multiple depth modes simultaneously to evaluate their accuracy against one another. For instance, the ZED camera SDK allows recording SVO files, which effectively simulate the live camera feed during playback, letting me change the depth modes as needed. I'm looking for similar functionality for this camera.

Is there a way to record a general video file using ROS, so that I can adjust the disparity values during playback as needed? Currently, the only option I see is recording the data with the MultiSense ROS driver and rosbag, but that would require recording multiple bag files, and since the scene is dynamic, the same scene would not be guaranteed across all the mode comparisons. So I am open to any other suggestions or methods you might recommend for recording the data.

Could you please provide any guidance or alternatives?

Thank you!

mattalvarado commented 3 months ago

Hi @poornimajd,

Thanks for reaching out. Could you elaborate more on what you mean by multiple depth modes? How were you looking to adjust the stereo values in playback?

poornimajd commented 3 months ago

Hi, Thank you so much for the reply!

By multiple depth modes, I meant that I wanted to experiment with the different maximum disparity values, that is, 64, 128, and 256.

So given a single recording, I wanted to adjust these values while replaying the recording, so that the same recorded scene is used for all three disparity values.

mattalvarado commented 3 months ago

Hi @poornimajd,

The MultiSense computes the disparity image onboard the camera, so there is not a great way to re-run the disparity algorithm in post. We do have ways to re-run collected images through the camera’s stereo pipeline, but you would need to contact our support team at support@carnegierobotics.com to work on setting that up.

Since you are just looking at adjusting the disparity values, you could approximate the performance by selecting the max disparity setting of 256 pixels and invalidating any values above 64 or 128. Larger disparity values allow you to compute disparity for objects closer to the camera, which is generally preferred by users unless they are explicitly trying to filter on range. Additionally, there is no performance impact between any of the disparity settings, so we generally recommend sticking with the 256-pixel setting.
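To illustrate the suggestion above, here is a minimal sketch of post-processing a recorded disparity image to emulate a lower max-disparity setting. It assumes the disparity has already been converted to floating-point pixel values (the actual encoding on the ROS disparity topic may differ, e.g. fixed-point), and the function name and invalid-value convention are illustrative, not part of the MultiSense API:

```python
import numpy as np

def limit_max_disparity(disparity, max_disparity, invalid_value=0.0):
    """Invalidate disparities above max_disparity to approximate a lower
    max-disparity mode (e.g. 64 or 128) on data recorded at 256 pixels."""
    out = disparity.copy()
    out[out > max_disparity] = invalid_value  # mark as invalid
    return out

# Example: a tiny disparity image recorded with the 256-pixel setting
disp_256 = np.array([[10.0, 70.0],
                     [150.0, 300.0]], dtype=np.float32)

disp_64 = limit_max_disparity(disp_256, 64.0)    # emulates the 64-pixel mode
disp_128 = limit_max_disparity(disp_256, 128.0)  # emulates the 128-pixel mode
```

Applied to each disparity frame replayed from a single bag file, this lets the same recorded scene be compared across all three settings.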

poornimajd commented 3 months ago

Thank you for the detailed and quick reply, @mattalvarado. This helps and gives some more clarity. I will try out the max disparity setting with rosbags for now, as suggested, and will reach out to the support team if needed.

Thank you