Closed by a1wj1 9 months ago
@a1wj1 The RGB and event frames are directly concatenated into one stream [rgb, event]; we then feed it into the network for feature learning.
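A minimal sketch of what "concatenated into one stream" could look like, assuming the event stream has already been rendered as a frame at the same resolution as the RGB image (the function name `concat_rgb_event` is illustrative, not taken from the repository):

```python
import numpy as np

def concat_rgb_event(rgb_frame: np.ndarray, event_frame: np.ndarray) -> np.ndarray:
    """Stack an RGB frame and an event frame along the channel axis.

    rgb_frame:   (H, W, 3) image
    event_frame: (H, W, C) event representation at the same spatial size
    """
    assert rgb_frame.shape[:2] == event_frame.shape[:2], "spatial sizes must match"
    return np.concatenate([rgb_frame, event_frame], axis=-1)

# toy example: 2x2 RGB frame + 1-channel event frame -> 4-channel network input
rgb = np.zeros((2, 2, 3), dtype=np.uint8)
evt = np.ones((2, 2, 1), dtype=np.uint8)
x = concat_rgb_event(rgb, evt)
print(x.shape)  # (2, 2, 4)
```

The combined tensor can then be passed to a backbone whose first layer accepts the extra channels.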
After decompressing the dataset, I found that it consists of aedat4 files, but I did not see the data processing and extraction part in the code. Could you please tell me where the data extraction and processing code is located? Thank you for your reply!
Thank you for your reply. Does this mean that before training we need to convert the aedat4 files into images (PNG, JPG, or BMP) and then train on those?
It contains both RGB frames and event streams. We use the event stream as an event point tensor; the raw files are in aedat4 format, so you can convert them into different event representations such as images, voxels, or point clouds.
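As a rough illustration of one such representation, here is a sketch that accumulates raw events (x, y, polarity) into a two-channel event image; the arrays would come from an aedat4 reader, and the helper `events_to_image` is hypothetical, not part of the released code:

```python
import numpy as np

def events_to_image(xs, ys, ps, height, width):
    """Accumulate events into a 2-channel image:
    channel 0 counts positive-polarity events, channel 1 negative."""
    img = np.zeros((height, width, 2), dtype=np.float32)
    for x, y, p in zip(xs, ys, ps):
        img[y, x, 0 if p > 0 else 1] += 1.0
    return img

# synthetic events (x, y, polarity) standing in for a decoded aedat4 slice
xs = np.array([0, 1, 1])
ys = np.array([0, 0, 1])
ps = np.array([1, -1, 1])
img = events_to_image(xs, ys, ps, height=2, width=2)
print(img[0, 0, 0], img[0, 1, 1], img[1, 1, 0])  # 1.0 1.0 1.0
```

The same per-event loop can instead bin events over (x, y, t) for a voxel grid, or keep them as raw (x, y, t, p) tuples for a point-cloud representation.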
Hi. The classification methods mentioned in the paper are RGB-based. How did you add the events when training them for comparison? I saw this description in the paper: "Thus, we train these compared methods using concatenated RGB frames and event images." But I don't quite understand how this statement is reflected in the code. Can you explain it in detail? Thank you!