yifanlu0227 / HEAL

[ICLR2024] HEAL: An Extensible Framework for Open Heterogeneous Collaborative Perception ➡️ All You Need for Multi-Modality Collaborative Perception!

Is the complemented annotation used for the visual detection performance on the DAIR-V2X dataset in the paper? #1

Closed wangsh0111 closed 4 months ago

wangsh0111 commented 5 months ago

Hi~ Congratulations on this work being accepted at ICLR, and thank you for providing a complete collaborative perception framework. I noticed that the default label value read by DAIR-V2X in image mode is cav_content['params']['vehicles_front']. Is the complemented annotation used for the visual detection performance on the DAIR-V2X dataset in the paper?

yifanlu0227 commented 5 months ago

No. It is the original label, since there is only one front camera in the DAIR-V2X dataset.

yifanlu0227 commented 5 months ago

Camera-based detection should use cav_content['params']['vehicles_front'], while LiDAR-based detection should use cav_content['params']['vehicles_all'].
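To summarize the rule above, a minimal sketch of the modality-to-label-key mapping (the helper function name is hypothetical, not part of the HEAL codebase):

```python
def get_dair_label_key(modality: str) -> str:
    """Hypothetical helper: pick the DAIR-V2X label key by sensor modality.

    The front camera only observes objects in its field of view, so
    camera-based detection uses the front-view annotations; LiDAR covers
    the full surround and uses the complete annotation set.
    """
    if modality == "camera":
        return "vehicles_front"
    if modality == "lidar":
        return "vehicles_all"
    raise ValueError(f"unknown modality: {modality}")


# Usage: index into the parsed params dict with the selected key, e.g.
# labels = cav_content['params'][get_dair_label_key('camera')]
```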

wangsh0111 commented 5 months ago

okay, thanks~