DerrickXuNu / OpenCOOD

[ICRA 2022] An opensource framework for cooperative detection. Official implementation for OPV2V.
https://mobility-lab.seas.ucla.edu/opv2v/

Yaml configuration for training where2comm model on OPV2V dataset #73

Closed ry4nzhu closed 1 year ago

ry4nzhu commented 1 year ago

Hi,

I'm trying to train a Where2comm model to evaluate on the OPV2V LiDAR track (I haven't seen one in the model zoo). I used the yaml config at opencood/hypes_yaml/point_pillar_where2comm.yaml and trained for 50 epochs, but I only get 0.72 AP@0.7 during testing. I know this config is probably intended for the V2XSet LiDAR track, so I wonder if you have any insights on training a Where2comm model for OPV2V LiDAR evaluation.

Thanks.

DerrickXuNu commented 1 year ago

Hi,

I think you need to remove the shrink_header, or at least not let it downscale the feature map. Try it and then we can discuss further.

ry4nzhu commented 1 year ago

I removed shrink_header (https://github.com/DerrickXuNu/OpenCOOD/blob/236022c5e05ecdb94e6b039abdcd679021bc31c7/opencood/hypes_yaml/point_pillar_where2comm.yaml#L93) from opencood/hypes_yaml/point_pillar_where2comm.yaml and changed head_dim from 256 to 384. The trained model still only gets 0.76 AP@0.7.

Are there any other parameters I need to take care of?
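
For reference, a minimal sketch of why head_dim becomes 384 once shrink_header is removed, assuming the usual PointPillar-style setup where the BEV backbone concatenates three 128-channel scales (the anchor count and head names below are illustrative, not taken verbatim from the repo):

```python
import torch
import torch.nn as nn

# Without shrink_header, the concatenated multi-scale BEV features
# (assumed: 128 channels x 3 scales = 384) feed the detection heads directly,
# so head_dim in the yaml has to match 384 instead of the shrunk 256.
anchor_number = 2          # illustrative anchor count, not taken from the repo
head_dim = 384             # was 256 when shrink_header reduced the channels

cls_head = nn.Conv2d(head_dim, anchor_number, kernel_size=1)
reg_head = nn.Conv2d(head_dim, 7 * anchor_number, kernel_size=1)

fused_feature = torch.randn(1, head_dim, 100, 352)   # (B, C, H, W) BEV map
print(cls_head(fused_feature).shape)   # torch.Size([1, 2, 100, 352])
print(reg_head(fused_feature).shape)   # torch.Size([1, 14, 100, 352])
```

With shrink_header in place, an extra conv reduces (and spatially downscales) the 384-channel map before the heads, which appears to be why the original head_dim was 256.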

DerrickXuNu commented 1 year ago

Hmm... the original Where2comm implementation has both attention-based fusion and maxout fusion, but here I only implemented the attention-based one. Can you try maxout?
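
In case it helps, a minimal sketch of what a maxout fusion step could look like, assuming each agent's BEV feature map has already been projected into the ego frame and stacked along a leading agent dimension (shapes and names are illustrative):

```python
import torch

def maxout_fusion(agent_features: torch.Tensor) -> torch.Tensor:
    """Element-wise max over collaborating agents.

    agent_features: (N_agents, C, H, W) BEV features, already warped
    into the ego frame. Returns a single (C, H, W) fused map.
    """
    fused, _ = torch.max(agent_features, dim=0)
    return fused

# Example: three agents sharing 384-channel BEV features.
feats = torch.randn(3, 384, 100, 352)
print(maxout_fusion(feats).shape)   # torch.Size([384, 100, 352])
```

Unlike the attention-based fusion, this simply keeps the strongest response per cell across agents and has no learnable parameters.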

ultrazhl98 commented 1 year ago

I think the code may not be using pairwise_t_matrix to transform the shared features into the ego vehicle's pose.
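
If that is the case, a rough sketch of the missing step, written with plain torch.nn.functional rather than OpenCOOD's own transformation utilities (the normalized 2x3 affine matrix is assumed to be derivable from pairwise_t_matrix; this is an illustration, not the repo's actual code):

```python
import torch
import torch.nn.functional as F

def warp_to_ego(neighbor_feat: torch.Tensor, affine_2x3: torch.Tensor) -> torch.Tensor:
    """Warp one neighbor's BEV feature map into the ego frame.

    neighbor_feat: (1, C, H, W) features from a collaborating vehicle.
    affine_2x3:    (1, 2, 3) affine matrix, assumed already normalized to
                   grid_sample's [-1, 1] coordinate convention (the kind of
                   matrix one could derive from pairwise_t_matrix).
    """
    grid = F.affine_grid(affine_2x3, list(neighbor_feat.shape), align_corners=False)
    return F.grid_sample(neighbor_feat, grid, align_corners=False)

# Identity transform should leave the map (almost) unchanged.
feat = torch.randn(1, 384, 100, 352)
identity = torch.tensor([[[1.0, 0.0, 0.0],
                          [0.0, 1.0, 0.0]]])
print(torch.allclose(warp_to_ego(feat, identity), feat, atol=1e-4))  # True
```

If the shared features are fused without this warp, misaligned boxes from non-ego agents would plausibly explain a depressed AP@0.7.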