ZZY-Zhou / RENet

[ICRA'23] Dataset of Moving Object Detection; Official Implementation of "RGB-Event Fusion for Moving Object Detection in Autonomous Driving"
50 stars · 5 forks

How to Run the Object Detection Model Inference Code #4

Closed wwgjob closed 1 year ago

wwgjob commented 1 year ago

I would like to express my gratitude for providing the code for the research paper. I have encountered a small issue while running the code and would appreciate some guidance on resolving it.

Issue Description:

(RENET) chris@chris:~/Downloads/RENet/src$ python3 det.py --task stream --model ../model_best.pth --inference_dir ../src/datasets/training/zurich_city_00_a/rgb_calib/
create model
loaded ../model_best.pth, epoch 13
load model
put model to gpu
default[ WARN:0@7.990] global /croot/opencv-suite_1676452025216/work/modules/imgcodecs/src/loadsave.cpp (239) findDecoder imread_('/home/chris/Downloads/RENet/src/../data/DSEC/RGB-images/zurich_city_14_c/000001.png'): can't open/read file: check file path/integrity
Traceback (most recent call last):
  File "/home/chris/Downloads/RENet/src/det.py", line 32, in <module>
    stream_inference(opt)  # function that runs stream inference
  File "/home/chris/Downloads/RENet/src/inference/stream_inference.py", line 177, in stream_inference
    for iter, data in enumerate(data_loader):
  File "/home/chris/anaconda3/envs/RENET/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 633, in __next__
    data = self._next_data()
  File "/home/chris/anaconda3/envs/RENET/lib/python3.9/site-packages/torch/utils/data/dataloader.py", line 677, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "/home/chris/anaconda3/envs/RENET/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 51, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/chris/anaconda3/envs/RENET/lib/python3.9/site-packages/torch/utils/data/_utils/fetch.py", line 51, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/chris/Downloads/RENet/src/inference/stream_inference.py", line 87, in __getitem__
    images_img = [cv2.imread(self.imagefile(v, frame + i)).astype(np.float32) for i in range(self.opt.K)]
  File "/home/chris/Downloads/RENet/src/inference/stream_inference.py", line 87, in <listcomp>
    images_img = [cv2.imread(self.imagefile(v, frame + i)).astype(np.float32) for i in range(self.opt.K)]
AttributeError: 'NoneType' object has no attribute 'astype'
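For context, this `AttributeError` surfaces because `cv2.imread` silently returns `None` for a missing or unreadable file instead of raising. A stdlib-only pre-flight check (`find_missing_frames` is a hypothetical helper, not part of the repo) can confirm the numbered frame files exist before launching inference:

```python
import os

def find_missing_frames(image_dir, num_frames, ext=".png"):
    """Return paths of expected frames (000001.png, 000002.png, ...) that do not exist."""
    missing = []
    for i in range(1, num_frames + 1):
        # Frames are assumed to be zero-padded to six digits, as in the traceback above.
        path = os.path.join(image_dir, f"{i:06d}{ext}")
        if not os.path.isfile(path):
            missing.append(path)
    return missing
```

Running this against the directory the dataloader actually resolves (here `../data/DSEC/RGB-images/zurich_city_14_c/`) would reveal the missing files before `cv2.imread` fails.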
ZZY-Zhou commented 1 year ago

Hello,

Thank you for your interest in our work.

The DSEC-MOD format is designed for general use, including but not limited to customised event temporal scales. Details can be found in Section III-A, E-TMA: Event-based Temporal Multi-scale Aggregation, of our paper.

If you would like to reproduce the results in our paper, the generated event frames we used can be downloaded here.

Please also remember to download and rename rgb_calib in DSEC-MOD.

Hope this helps.

wwgjob commented 1 year ago

Thank you for your response. Since this is my first attempt at reproducing the code from a paper, there are many parts I'm not yet familiar with. :) (screenshot attached)

thx for helping!!

wwgjob commented 1 year ago

(renet) chris@chris:~/Downloads/RENet/src$ python3 ACT.py --task videoAP --th 0.2 --inference_dir ../data/
ERROR: Missing extracted tubes ../data/zurich_city_14_c_tubes.pkl

I apologize, but I seem to be unable to locate the .pkl file. Could you kindly provide me with information on where I might be able to find it?
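As background for the error above: the tube files are per-sequence `.pkl` files that `det.py` writes out, so ACT.py fails when they have not been generated yet. Assuming they are standard pickle dumps (an assumption; the repo may use a custom format), a quick way to check for and inspect one (`load_tubes` is a hypothetical helper):

```python
import os
import pickle

def load_tubes(pkl_path):
    """Load a tubes .pkl produced by det.py, or return None if it has not been generated."""
    if not os.path.isfile(pkl_path):
        # ACT.py reports exactly this situation as "Missing extracted tubes".
        print(f"Not found: {pkl_path} -- run det.py on this sequence first")
        return None
    with open(pkl_path, "rb") as f:
        return pickle.load(f)
```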

ZZY-Zhou commented 1 year ago

Hi,

Happy to help.

Did you run det.py before trying to get the video mAP? Sample commands are available here.

wwgjob commented 1 year ago

(screenshot attached)

Yes, I did. And I also get the result shown in the attached screenshot.

wwgjob commented 1 year ago

I'm sorry for bothering you again. After trying on my own for a week, I still haven't found a solution. I still can't locate the "tubes.pkl" file.

Thank you for your patience. I appreciate your assistance.

Zongwei97 commented 1 year ago

Hi,

It seems that you did not produce the tubes from the dataset. You should be able to do so as shown here and here.

wwgjob commented 1 year ago

Hi,

> It seems that you did not produce the tubes from the dataset. You should be able to do so as shown here and here.

Thanks for your reply; it helps me a lot. I will try it.

Brandon985 commented 7 months ago

I have the same problem as you. Would you please show me the contents of your DSEC-MOD folder? I would appreciate it!

wwgjob commented 7 months ago

I am no longer working on this project, but I remember that my DSEC_MOD folder contained the following:

```
└── DSEC_MOD
    ├── training
    │   ├── zurich_city_00_a
    │   │   ├── gt_bb
    │   │   │   ├── 000001.txt
    │   │   │   └── ...
    │   │   ├── rgb_calib
    │   │   │   ├── 000001.png
    │   │   │   └── ...
    │   │   └── events
    │   │       └── left
    │   │           ├── events.h5
    │   │           └── rectify_map.h5
    │   └── ...
    └── testing
        ├── zurich_city_13_a
        │   └── ...
        └── ...
```

Can you show me your folder structure? I will see if I can identify your problem.
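The layout above can be checked programmatically. A minimal sketch, assuming the directory names from the comment (the `check_layout` helper and the `EXPECTED` list are illustrative, not part of the repo):

```python
import os

# Expected sub-paths for one training sequence, taken from the layout described above.
EXPECTED = [
    "training/zurich_city_00_a/gt_bb",
    "training/zurich_city_00_a/rgb_calib",
    "training/zurich_city_00_a/events/left",
]

def check_layout(root, expected=EXPECTED):
    """Return the expected sub-directories missing under root (empty list = layout OK)."""
    return [rel for rel in expected
            if not os.path.isdir(os.path.join(root, *rel.split("/")))]
```

Running `check_layout("/path/to/DSEC_MOD")` and extending `EXPECTED` with your own sequences is a quick way to catch a misnamed or misplaced folder (e.g. a forgotten `rgb_calib` rename) before training or inference.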

Brandon985 commented 7 months ago

Thank you very much for your reply. I have tried to solve this problem and am currently working on visualising the images. Thank you again!


qiaobendong commented 5 months ago

Hello, could I take a look at where the dataset is stored and its format? Thank you very much.