Hongbin98 / MonoTTA

Code release for the ECCV 2024 paper 'Fully Test-Time Adaptation for Monocular 3D Object Detection'
MIT License

Training problems with the code #2

Closed ZZ2490 closed 3 weeks ago

ZZ2490 commented 3 weeks ago

Is the code provided now only for evaluation and not for training?

Hongbin98 commented 3 weeks ago

Thanks for your attention to our work~ We have provided the adaptation command for MonoTTA: `CUDA_VISIBLE_DEVICES=0 python tools/tta_monotta.py --config runs/monoflex.yaml --ckpt model_moderate_best_soft.pth --eval --output kitti-c/gaussian1`. If you have any questions, please feel free to ask.

ZZ2490 commented 3 weeks ago

Running the above command only evaluates; it does not train.

Hongbin98 commented 3 weeks ago

MonoTTA follows a new adaptation paradigm termed Fully Test-Time Adaptation, which aims to adapt a well-trained model to unlabeled test data by handling potential data distribution shifts at test time. In other words, MonoTTA adapts the well-trained model during the inference stage (before producing the final predictions), so the `--eval` run is also where the adaptation happens. If you are still confused, you may check the related work on Fully Test-Time Adaptation methods in classification tasks~
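For intuition, here is a minimal sketch of what "adapting at test time" typically looks like in this paradigm (a Tent-style entropy-minimization loop over unlabeled test batches). The model interface, data loader, and entropy objective below are illustrative assumptions for a generic classifier-like output, not MonoTTA's actual loss or code:

```python
# Minimal sketch of a fully test-time adaptation loop (Tent-style entropy
# minimization). NOT the exact MonoTTA objective; the model call returning
# per-object class logits is a simplifying assumption for illustration.
import torch


def collect_bn_params(model):
    """Collect only the affine parameters of BatchNorm layers
    (a common test-time adaptation choice); everything else stays frozen."""
    params = []
    for m in model.modules():
        if isinstance(m, torch.nn.BatchNorm2d):
            for p in (m.weight, m.bias):
                if p is not None:
                    params.append(p)
    return params


def adapt_and_predict(model, test_loader, lr=1e-4):
    model.train()  # keep BN statistics adaptive; other params are not updated
    optimizer = torch.optim.Adam(collect_bn_params(model), lr=lr)
    predictions = []
    for images in test_loader:                    # unlabeled test batches
        logits = model(images)                    # placeholder: class scores
        log_probs = logits.log_softmax(dim=-1)
        entropy = -(log_probs.exp() * log_probs).sum(dim=-1).mean()
        optimizer.zero_grad()
        entropy.backward()                        # adapt on the test batch itself
        optimizer.step()
        with torch.no_grad():
            predictions.append(model(images))     # final prediction after adaptation
    return predictions
```

The key point is that the model parameters are updated with an unsupervised objective computed on the incoming test data, and the predictions are produced by the adapted model, which is why evaluation and adaptation happen in the same run.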