DerrickXuNu / OpenCOOD

[ICRA 2022] An open-source framework for cooperative detection. Official implementation of OPV2V.
https://mobility-lab.seas.ucla.edu/opv2v/

V2XSet, V2V4Real #113

Closed: lubin2022 closed this issue 9 months ago

lubin2022 commented 10 months ago

Hello, thanks for your excellent work. If I want to use OpenCOOD to train all models on the V2XSet and V2V4Real datasets, is it enough to simply change the 'root_dir' and 'validate_dir' entries in the corresponding yaml file from the 'opv2v' paths to 'v2xset' or 'v2v4real'? Do I need to do anything else besides this? Looking forward to your reply.
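
For reference, a minimal sketch of the kind of yaml change the question describes, assuming a config that already contains these keys; the paths shown are hypothetical:

```yaml
# Hypothetical dataset paths; all other keys stay unchanged.
root_dir: '/data/v2xset/train'        # previously '/data/opv2v/train'
validate_dir: '/data/v2xset/validate' # previously '/data/opv2v/validate'
```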

DerrickXuNu commented 9 months ago

Thanks for using OpenCOOD. There is a difference between V2V4Real and the simulation datasets, namely the pose formulation. OPV2V and V2XSet use a 3-element list to represent the absolute coordinates in the CARLA world, while V2V4Real uses a 4x4 transformation matrix from the current pose to the map origin. So directly mixing them together won't work. However, I have a branch that can take these two types of datasets together: https://github.com/ucla-mobility/V2V4Real/tree/feature/mixed_training
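
To make the distinction concrete, here is a minimal sketch (not OpenCOOD's actual API; all values are hypothetical) of the two pose conventions, and of lifting the list form into the 4x4 homogeneous form V2V4Real uses:

```python
import numpy as np

# OPV2V / V2XSet style: absolute coordinates in the CARLA world,
# shown here as a translation-only list (values are made up).
pose_list = [120.5, -43.2, 1.9]  # hypothetical [x, y, z]

# Lift the list into a 4x4 homogeneous transform so both dataset
# styles can share the same downstream matrix math.
T_from_list = np.eye(4)
T_from_list[:3, 3] = pose_list

# V2V4Real style: a 4x4 transformation matrix from the current
# pose to the map origin, consumed directly.
T_v2v4real = np.array([
    [1.0, 0.0, 0.0, 120.5],
    [0.0, 1.0, 0.0, -43.2],
    [0.0, 0.0, 1.0,   1.9],
    [0.0, 0.0, 0.0,   1.0],
])

# Mapping a point from the ego frame into the world/map frame:
p_ego = np.array([5.0, 0.0, 0.0, 1.0])  # homogeneous coordinates
p_world = T_v2v4real @ p_ego
print(p_world[:3])
```

A mixed-training pipeline along the lines of the linked branch would presumably normalize both formats into this common matrix representation before computing relative transforms between agents.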