Hi,
I think you need to remove the shrink_head, or at least prevent it from downscaling the feature map. Try it and we can discuss further.
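For reference, here is a minimal sketch of the kind of downscaling that stage performs. The channel numbers and BEV grid size below are my assumptions based on the default PointPillar backbone, not values read from the yaml:

```python
import torch
import torch.nn as nn

# Minimal sketch (my own illustration, not the OpenCOOD class) of what the
# shrink header does: a strided conv that both compresses channels and halves
# the BEV resolution. The 384 -> 256 channel change and the grid size are
# assumptions, not the exact yaml values.
shrink = nn.Conv2d(in_channels=384, out_channels=256,
                   kernel_size=3, stride=2, padding=1)

bev = torch.randn(1, 384, 200, 704)   # hypothetical BEV feature map
print(shrink(bev).shape)              # torch.Size([1, 256, 100, 352]) -- downscaled
```

Removing it (or setting stride to 1) keeps the feature map at full resolution, which is why the detection head then sees the backbone's raw channel count.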
I removed shrink_header (https://github.com/DerrickXuNu/OpenCOOD/blob/236022c5e05ecdb94e6b039abdcd679021bc31c7/opencood/hypes_yaml/point_pillar_where2comm.yaml#L93) from opencood/hypes_yaml/point_pillar_where2comm.yaml
and changed head_dim from 256 to 384. The trained model still only gets 0.76 AP@0.7.
Are there any other parameters I need to take care of?
Emmm... the original Where2comm implementation has both attention-based fusion and maxout fusion, but here I only implemented the attention-based one. Can you try maxout?
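For clarity, a minimal sketch of what maxout fusion does, assuming the per-agent features are already aligned in the ego frame (the shapes are illustrative):

```python
import torch

def maxout_fuse(agent_feats: torch.Tensor) -> torch.Tensor:
    # agent_feats: (num_agents, C, H, W) features aligned in the ego frame.
    # Maxout fusion keeps the element-wise maximum across agents, in contrast
    # to the attention-weighted combination used by the current fusion module.
    return agent_feats.max(dim=0).values

feats = torch.randn(3, 256, 100, 352)   # hypothetical: 3 collaborating agents
fused = maxout_fuse(feats)
print(fused.shape)                       # torch.Size([256, 100, 352])
```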
I think the code may not be using pairwise_t_matrix to transform the shared features into the ego vehicle's pose.
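If so, a sketch of the missing step could look like the following. This is my own illustration: it assumes pairwise_t_matrix has already been normalized into one 2x3 affine matrix per agent for the BEV grid, which may not match the repo's actual tensor layout.

```python
import torch
import torch.nn.functional as F

def warp_to_ego(feats: torch.Tensor, t_matrix: torch.Tensor) -> torch.Tensor:
    # feats:    (N, C, H, W) shared BEV features from N agents.
    # t_matrix: (N, 2, 3) normalized affine transform from each agent's frame
    #           into the ego frame (the ego's own row is the identity).
    grid = F.affine_grid(t_matrix, size=feats.shape, align_corners=False)
    return F.grid_sample(feats, grid, align_corners=False)

# Hypothetical usage: identity transforms leave the features unchanged.
feats = torch.randn(2, 256, 100, 352)
eye = torch.tensor([[1., 0., 0.], [0., 1., 0.]]).repeat(2, 1, 1)
warped = warp_to_ego(feats, eye)
```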
Hi,
I'm trying to train a Where2comm model to evaluate on the OPV2V LiDAR track (I have not seen it in the model zoo). I am using the same yaml config at
opencood/hypes_yaml/point_pillar_where2comm.yaml
and trained for 50 epochs. However, I can only get 0.72 AP@0.7 during testing. Since this config is probably meant for evaluating the V2XSet LiDAR track, I wonder if you have any insights on training a Where2comm model for OPV2V LiDAR evaluation. Thanks.