FrescoDev opened this issue 5 months ago
Generation of Event Data:
Our event data is generated using the event simulator vid2e. After generating the events, we apply a temporal reversal to create a backward version, making them suitable for our bidirectional network. These events are then converted into event voxel grids in code.
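The temporal-reversal step can be sketched as follows. This is a minimal illustration, not the authors' actual code: it assumes events are stored as an (N, 4) array of (t, x, y, p) rows sorted by timestamp, and the function name `reverse_events` is hypothetical.

```python
import numpy as np

def reverse_events(events, t_max=None):
    """Temporally reverse an event stream.

    `events` is assumed to be an (N, 4) float array of (t, x, y, p)
    rows with polarity p in {-1, +1}, sorted by time. The layout and
    name are illustrative, not the authors' actual format.
    """
    t, x, y, p = events[:, 0], events[:, 1], events[:, 2], events[:, 3]
    if t_max is None:
        t_max = t.max()
    # Mirror timestamps and flip polarity: a brightness increase
    # becomes a decrease when the stream is played backward.
    rev = np.stack([t_max - t, x, y, -p], axis=1)
    # Restore chronological order after the time mirror.
    return rev[np.argsort(rev[:, 0], kind="stable")]
```

The polarity flip matters: without it, the backward stream would describe brightness changes with the wrong sign and would not match the forward stream's physics.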
Implementation Details:
The data preparation process involves several different code libraries. For event data processing, you can refer to the event_utils library. We will be organizing and sharing the detailed data preparation steps soon.
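For reference, the event-to-voxel conversion typically used with libraries like event_utils accumulates events into a fixed number of temporal bins with bilinear interpolation along time. The sketch below is a minimal NumPy re-implementation of that idea, not event_utils's own API; the function name and event layout are assumptions.

```python
import numpy as np

def events_to_voxel(events, num_bins, height, width):
    """Accumulate (t, x, y, p) events into a (num_bins, H, W) voxel
    grid, weighting each event into its two nearest temporal bins.
    Assumes `events` is sorted by timestamp; this is an illustrative
    re-implementation, not the event_utils library's function.
    """
    t = events[:, 0]
    x = events[:, 1].astype(int)
    y = events[:, 2].astype(int)
    p = events[:, 3]
    voxel = np.zeros((num_bins, height, width), dtype=np.float32)
    # Normalize timestamps to the continuous bin axis [0, num_bins - 1].
    t_norm = (t - t[0]) / max(t[-1] - t[0], 1e-9) * (num_bins - 1)
    left = np.floor(t_norm).astype(int)
    right = np.clip(left + 1, 0, num_bins - 1)
    w_right = t_norm - left
    # Bilinear split of each event's polarity between adjacent bins.
    np.add.at(voxel, (left, y, x), p * (1.0 - w_right))
    np.add.at(voxel, (right, y, x), p * w_right)
    return voxel
```

Because each event's weight sums to its polarity, the voxel grid preserves the total signed event count, which is the usual sanity check for this representation.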
Test Data Adaptation:
The event simulator already provides a sufficiently realistic simulation of real events. You can refer to the ESIM paper for more details. In our implementation, we did not need any special adjustments to align the simulated data with the model.
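ESIM's core idea is that a pixel emits an event each time its log intensity drifts by a contrast threshold from its last reference level. The toy sketch below illustrates that principle on a single frame pair; the real simulator works on adaptively upsampled video with per-pixel reference tracking and noise models, so this is only a simplified illustration, not ESIM's implementation.

```python
import numpy as np

def simulate_events_between_frames(frame0, frame1, t0, t1, threshold=0.2):
    """Toy ESIM-style simulation: emit one event per contrast-threshold
    crossing of the log-intensity change between two frames, with
    linearly interpolated timestamps. Heavily simplified relative to
    the real ESIM (no adaptive sampling, no noise model).
    """
    eps = 1e-3  # avoid log(0) on dark pixels
    diff = np.log(frame1.astype(np.float64) + eps) \
         - np.log(frame0.astype(np.float64) + eps)
    # Signed number of threshold crossings per pixel.
    n_events = np.fix(diff / threshold).astype(int)
    events = []
    ys, xs = np.nonzero(n_events)
    for y, x in zip(ys, xs):
        n = n_events[y, x]
        polarity = 1 if n > 0 else -1
        for k in range(1, abs(n) + 1):
            # Assume linear log-intensity change between the frames.
            t = t0 + (t1 - t0) * (k * threshold) / abs(diff[y, x])
            events.append((t, int(x), int(y), polarity))
    return sorted(events)
```

This also shows why no special test-data adaptation was needed: the simulated stream has the same (t, x, y, p) structure and contrast-threshold semantics as real camera output.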
Thank you for your interest in our work!
That's all really clear and helpful, thank you for the insights!
@FrescoDev Hi, we have released our data preparation details at DataPreparation.md. Hope it helps you. Thanks!
Description: The model code assumes that event data is included as part of the inputs. However, it is unclear how this was handled for the demo, especially since the test datasets such as Vid4 and REDS appear to have been captured with traditional frame-based cameras.
Could you please provide details on the following:
Generation of Event Data: How was event data generated or simulated for the demonstration? Was there a specific method or tool used to convert traditional frame-based footage into event-based data?
Implementation Details: Any specific scripts or code examples used to achieve this conversion would be highly appreciated. Understanding the methodology would help in replicating the demo setup accurately.
Test Data Adaptation: If the event data was simulated, what adjustments or preprocessing steps were necessary to align this data with the model's requirements?