timbroed / MUSES

[ECCV 2024] SDK for MUSES: The Multi-Sensor Semantic Perception Dataset for Driving under Uncertainty
https://muses.vision.ee.ethz.ch/

Regarding event data #2

Open · Warwick-Jocelyn opened this issue 6 days ago

Warwick-Jocelyn commented 6 days ago

Hi,

Thanks for your code for processing the data. However, I am a little confused by the event data processed by your tool: it shows only very few red and blue points compared with the visual examples provided in your paper.

For example, REC0006_frame_043790_event_camera is almost black. So: (1) Did I process it the wrong way? Is there any way to visualise it better? (I have also tried the --enlarge_event_camera_points flag, but it did not help much.) I assume the current version is of little help for sensor fusion, so any advice would be appreciated. (2) Or did you actually use the longer version of the event data provided on your website in your experiments (as in your paper)?

Thanks!

timbroed commented 5 days ago

Hi,

To reproduce the input that was used for the paper baseline, you should use the project_sensors_to_rgb.py script and add the --enlarge_event_camera_points flag, which applies a 2x2 dilation kernel.
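
For illustration, here is a minimal sketch of what that dilation does, assuming OpenCV; `event_image` is a hypothetical stand-in for the projected event channels, not the SDK's actual variable:

```python
import cv2
import numpy as np

# Hypothetical sparse projected event image (H x W x 3, mostly zeros).
event_image = np.zeros((1080, 1920, 3), dtype=np.uint8)
event_image[500, 960] = (0, 0, 255)  # a single event point (BGR red)

kernel = np.ones((2, 2), dtype=np.uint8)    # 2x2 dilation kernel
enlarged = cv2.dilate(event_image, kernel)  # each isolated point grows to a 2x2 block
```

Dilation only thickens the existing points; it does not change how many events are accumulated, which is why very sparse inputs can still look dark.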

These images are accumulated over 30 ms. If you want to change that, you can adjust the input to this function here.
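
As a rough sketch of the idea (not the SDK's actual function; the real accumulation logic and its parameters live in the linked function), accumulating events over a window could look like this, with `accumulate_events` being a hypothetical helper:

```python
import numpy as np

def accumulate_events(t_us, xs, ys, polarities, t_ref_us,
                      window_ms=30.0, shape=(480, 640)):
    # Keep only events inside the window ending at the reference timestamp.
    window_us = window_ms * 1000.0
    mask = (t_us >= t_ref_us - window_us) & (t_us <= t_ref_us)
    # Per-pixel counts, one channel per polarity (0: positive, 1: negative).
    counts = np.zeros((*shape, 2), dtype=np.uint16)
    ch = np.where(polarities[mask] > 0, 0, 1)
    np.add.at(counts, (ys[mask], xs[mask], ch), 1)
    return counts
```

A longer window accumulates more events per pixel, which is one reason the longer event streams on the website look denser than a single 30 ms slice.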

However, these inputs are not optimized for visualization. For that, we instead recommend setting all empty pixels to white and scaling each channel to the range 0-255, with 0 meaning no accumulated points and 255 the maximum number of points. You could start by setting the fill_value to 255 here, and maybe also have a look at how we visualize the event camera (from the raw data) in our visualization tool.
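
A minimal sketch of that visualization recipe, assuming the hypothetical HxWx2 `counts` array from the snippet above (the visualization tool's own code may differ):

```python
import numpy as np

def event_counts_to_vis(counts):
    h, w, _ = counts.shape
    vis = np.zeros((h, w, 3), dtype=np.uint8)
    # Scale each polarity channel independently to 0-255 (255 = max count).
    for ch, rgb in ((0, 0), (1, 2)):  # positive -> red, negative -> blue
        c = counts[..., ch].astype(np.float32)
        if c.max() > 0:
            vis[..., rgb] = (c / c.max() * 255.0).astype(np.uint8)
    # Paint all empty pixels (no accumulated events) white.
    vis[counts.sum(axis=-1) == 0] = 255
    return vis
```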

Hope this helps.