ZZY-Zhou / RENet

[ICRA'23] Dataset of Moving Object Detection; Official Implementation of "RGB-Event Fusion for Moving Object Detection in Autonomous Driving"

Incompatibility of Dataset #2

Closed · danielgehrig18 closed this 1 year ago

danielgehrig18 commented 1 year ago

Hi, thanks for the dataset and the excellent work.

I was wondering how to run training with the current code base. After looking at the data loader, I saw that a .pkl file is missing. Could you upload it somewhere? Alternatively, is there a way to compute it from the current dataset?

Thanks for your help.

ZZY-Zhou commented 1 year ago

Hi,

You may now find the DSEC-GT.pkl we've used in the data folder.

Our Ground Truth pkl file is a dictionary with the following format:

  • labels: list of labels. For our DSEC-MOD, it is Moving.
  • gttubes: dictionary of ground-truth tubes for each sequence. Each tube has 5 columns: frame_number, x1, y1, x2, y2.
  • nframes: dictionary of the number of frames for each sequence.
  • train_videos: list of training sequences' names.
  • test_videos: list of testing sequences' names.
  • resolution: dictionary of tuples (h, w) for each sequence's resolution. For our DSEC-MOD, it is (480, 640).
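For quick inspection, here is a minimal sketch of loading and browsing the file; the path is an assumption, so adjust it to wherever DSEC-GT.pkl sits in your checkout:

```python
import pickle

# Path is an assumption; point it at the DSEC-GT.pkl from the data folder.
with open("data/DSEC-GT.pkl", "rb") as f:
    gt = pickle.load(f)

print(gt["labels"])        # ["Moving"] for DSEC-MOD
print(gt["train_videos"])  # training sequences' names
print(gt["resolution"])    # {sequence_name: (480, 640), ...}

# Each ground-truth tube row has 5 columns: frame_number, x1, y1, x2, y2.
for seq, tubes in gt["gttubes"].items():
    print(seq, "->", tubes)
    break  # just peek at the first sequence
```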

danielgehrig18 commented 1 year ago

Thanks! By the way, I also saw that you need COCO pre-trained weights here: https://github.com/ZZY-Zhou/RENet/blob/main/src/MOD_utils/model.py#L166. Where can I download them?

Zizzzzzzz commented 1 year ago

Hi, this is wonderful work. I am wondering whether there are category labels.

ZZY-Zhou commented 1 year ago

Hello,

To get the pre-trained weights, you may now check the download links here.
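For anyone unsure what to do with the downloaded file, here is a hedged sketch of loading such weights into a PyTorch model; the file name is an assumption, and this is not the repository's exact loading code:

```python
import torch

def load_coco_pretrained(model: torch.nn.Module,
                         path: str = "coco_pretrained.pth") -> torch.nn.Module:
    # File name is an assumption; use wherever you saved the download.
    checkpoint = torch.load(path, map_location="cpu")
    # Some checkpoints wrap the weights under a "state_dict" key.
    state_dict = checkpoint.get("state_dict", checkpoint)
    # strict=False tolerates head layers whose shapes differ from COCO.
    model.load_state_dict(state_dict, strict=False)
    return model
```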

ZZY-Zhou commented 1 year ago

Hi,

Thanks for your interest in our work.

The class labels are not available for the moment. For now, in our DSEC-MOD dataset, we do not distinguish semantic labels; all objects are simply annotated as "Moving".

Whether to add categorical labels or difficulty-degree labels (like KITTI) to our DSEC-MOD has not been decided yet; it also depends on our future research plans. If the labels become available one day, we will surely update our GitHub.

Zizzzzzzz commented 1 year ago

OK, thank you. I have another question. Which event representation method do you use? @ZZY-Zhou

ZZY-Zhou commented 1 year ago

@Zizzzzzzz Hello, we use a frame-like event representation; details can be found in Section III-A (E-TMA: Event-based Temporal Multi-scale Aggregation) of our paper.
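For readers unfamiliar with frame-like representations, here is a generic illustration of rendering raw events into a 3-channel image by polarity; this is a common convention, not the paper's exact E-TMA encoding:

```python
import numpy as np

def events_to_frame(x, y, p, height=480, width=640):
    """Accumulate events into a 3-channel image: positive polarity in the
    red channel, negative in the blue channel. This is a common convention
    for visualization; the paper's E-TMA encoding may differ."""
    frame = np.zeros((height, width, 3), dtype=np.uint8)
    pos = p > 0
    frame[y[pos], x[pos], 0] = 255     # positive events -> R
    frame[y[~pos], x[~pos], 2] = 255   # negative events -> B
    return frame

# x, y are integer pixel coordinates; p is polarity (1/0 or 1/-1).
frame = events_to_frame(np.array([10, 20]), np.array([30, 40]), np.array([1, 0]))
```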

Zizzzzzzz commented 1 year ago

In the code, there are images_event, images_event_30ms, and images_event_50ms, which are read from a folder and have three channels. But there is no code to generate these images. Can you tell me how I can generate them? @ZZY-Zhou

ZZY-Zhou commented 1 year ago

@Zizzzzzzz Hi, to get the same results as in our paper, the generated event frames we used can be downloaded here.
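If you would rather regenerate frames at other window lengths than download them, here is a hedged sketch of slicing a raw event stream by time window before rendering; the timestamp units and reference time are assumptions:

```python
import numpy as np

def slice_window(t, t_ref, window_us):
    """Boolean mask selecting events in the window_us microseconds
    preceding the reference timestamp t_ref."""
    return (t >= t_ref - window_us) & (t < t_ref)

# Example: masks for the 30 ms and 50 ms windows mentioned above,
# assuming timestamps in microseconds.
t = np.array([0, 10_000, 35_000, 55_000])
for window_ms in (30, 50):
    mask = slice_window(t, t_ref=60_000, window_us=window_ms * 1_000)
    print(window_ms, "ms window ->", mask.sum(), "events")
```

Each slice can then be rendered with a function like events_to_frame above to obtain the 30 ms / 50 ms variants.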

Zizzzzzzz commented 1 year ago

Thank you!

wwgjob commented 1 year ago

Hello, I am curious how to reproduce the experiments from the paper. Thanks for your help.

ZZY-Zhou commented 1 year ago

@wwgjob Hello, basically, you can download the data by following the README. Then, you can use our provided checkpoint to reproduce the reported results.