ZwwWayne / mmMOT

[ICCV2019] Robust Multi-Modality Multi-Object Tracking

Evaluation using RRCNet det results #4

Open Kay1794 opened 4 years ago

Kay1794 commented 4 years ago

Hello! Thank you for sharing your work. I ran the evaluation using the PointPillars detection results and it matched yours. However, when I tried to use the RRC results, the evaluation did not run properly.
I was wondering if you have any idea how to modify the code for RRC det (2D det). I checked the config file for RRC, and the det type is still '3D'. I tried changing it to '2D', but the problem persists.

ZwwWayne commented 4 years ago

Hi @Kay1794 , I do not quite understand your question. Are you hitting a bug, or something else? The config should work as-is, and you need to check TensorBoard to see when the model performs best on the val set.

Kay1794 commented 4 years ago

Hi @ZwwWayne . Sorry for the confusion. The problem is that when I run the experiment "rrc_pfv_40e_subabs_dualadd_C" using the RRC detection results you provided, I get 0 for all evaluation metrics (see the screenshots below).

(screenshots: all evaluation metrics reported as 0)

I should mention that I modified the config file for the PP results with the same logic and that worked fine, so I suspect there may be a bug in the RRC evaluation part.

ZwwWayne commented 4 years ago

Hi @Kay1794 , this is because the code does not pass the results to py-motmetrics for evaluation. After the refactoring, the code uses KITTI's evaluation metrics rather than py-motmetrics. You should check the results printed below the line "Processing Results for KITTI Tracking Benchmark".
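For context, the headline metric both evaluators report is MOTA, computed from per-sequence error counts. A minimal sketch of the definition in plain Python (illustrative only, not code from this repo):

```python
def mota(num_fn, num_fp, num_idsw, num_gt):
    """CLEAR-MOT accuracy: 1 minus the ratio of misses (FN),
    false positives (FP) and identity switches (IDSW) to the
    total number of ground-truth objects (GT)."""
    return 1.0 - (num_fn + num_fp + num_idsw) / num_gt

# e.g. 10 misses, 5 false positives, 2 ID switches over 100 GT boxes:
print(mota(10, 5, 2, 100))  # -> 0.83
```

If no results reach the evaluator at all, every count is zero and the reported metrics come out as 0, which matches the screenshots above.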

Kay1794 commented 4 years ago

Hi @ZwwWayne

Here is the screenshot of the output below "Processing Results for KITTI Tracking Benchmark": (screenshot)

I checked the code and found that if we use a 2D detector, we need to generate a sampled point cloud and save it to the velodyne_reduced folder. Since I didn't see data preparation instructions for this part, I am not sure whether that is the problem. (screenshot of the relevant code)

One more question: where can I find the model the paper used as the 2D detector (the one you used for the KITTI benchmark)?
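For readers hitting the same step: in KITTI pipelines, a "reduced" point cloud usually means the LiDAR points filtered to those that project inside the camera image. A minimal NumPy sketch of that idea (the function name and argument layout are assumptions for illustration, not the repo's actual preprocessing in point_cloud/preprocess.py):

```python
import numpy as np

def reduce_point_cloud(points, velo_to_rect, P2, img_h, img_w):
    """Keep only LiDAR points that project inside the camera image.

    points:       (N, 4) array of (x, y, z, reflectance) in velodyne coords.
    velo_to_rect: (4, 4) homogeneous transform, velodyne -> rectified camera.
    P2:           (3, 4) camera projection matrix from the KITTI calib file.
    """
    n = points.shape[0]
    xyz1 = np.hstack([points[:, :3], np.ones((n, 1))])  # homogeneous (N, 4)
    rect = xyz1 @ velo_to_rect.T                        # rectified camera coords
    pix = rect @ P2.T                                   # image-plane homogeneous (N, 3)
    u = pix[:, 0] / pix[:, 2]
    v = pix[:, 1] / pix[:, 2]
    # keep points in front of the camera and inside the image bounds
    mask = (rect[:, 2] > 0) & (u >= 0) & (u < img_w) & (v >= 0) & (v < img_h)
    return points[mask]
```

The filtered array is what would typically be dumped into velodyne_reduced, one .bin file per frame.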

Thank you for your time and Happy Chinese New Year!

ZwwWayne commented 4 years ago
  1. This is weird, I need to check it further.
  2. The codebase has been modified to directly use the point cloud data and reduce it during data pre-processing [here](https://github.com/ZwwWayne/mmMOT/blob/master/point_cloud/preprocess.py#L64), so this should not be a problem since the new code has been tested.
  3. We used this repo to train a detector on the 3D detection dataset and to produce the results on the tracking data. To do that, the tracking data must first be converted to the 3D detection format; you need to modify the scripts in the second.pytorch repo a little bit.
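The format conversion in step 3 can be sketched as follows: a KITTI tracking label file holds one whole sequence, with `frame` and `track_id` prefix columns on every line, while the detection format expects one label file per frame without those columns. A plain-Python illustration (the helper name and output layout are assumptions, not the actual second.pytorch script):

```python
import os
from collections import defaultdict

def tracking_labels_to_detection(label_path, out_dir):
    """Split one KITTI tracking label file (one sequence per file,
    lines prefixed with `frame track_id`) into per-frame detection
    label files (one file per frame, detection-format columns only)."""
    per_frame = defaultdict(list)
    with open(label_path) as f:
        for line in f:
            fields = line.split()
            if not fields:
                continue
            frame = int(fields[0])
            # drop `frame` and `track_id`, keep the detection-format columns
            per_frame[frame].append(" ".join(fields[2:]))
    os.makedirs(out_dir, exist_ok=True)
    for frame, rows in per_frame.items():
        with open(os.path.join(out_dir, f"{frame:06d}.txt"), "w") as f:
            f.write("\n".join(rows) + "\n")
```

With the labels (and the matching images/point clouds) laid out this way, the detection training pipeline can consume the tracking sequences as ordinary detection frames.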
dmatos2012 commented 4 years ago

@Kay1794 were you able to solve this issue?