Closed: tkdtks123 closed this issue 2 years ago
Thank you for reporting the issues!
The memory issue is caused by a pytorch version incompatibility in CenterNet2; the most recent CenterNet2 commit should fix it.
I have pushed the move_tao_keyframes.py file. Sorry for the inconvenience!
Please let me know if you have other questions.
Best, Xingyi
Thanks for your fast reply.
I will try downgrading my pytorch version. As for the missing file, it seems it has not been pushed to the repo yet.
Thanks,
Sorry, I've added the missing file now :)
No need to downgrade your pytorch version. You can run `git pull` under `third_party/CenterNet2` and keep using the most recent pytorch version; it's just that our models were developed under torch 1.7 and CUDA 9.2.
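For reference, you can check which build you are running with the standard PyTorch version attributes (nothing GTR-specific is assumed here):

```python
# Standard PyTorch version attributes; nothing project-specific assumed.
import torch

print(torch.__version__)          # e.g. '1.7.0' matches our development setup
print(torch.version.cuda)         # CUDA toolkit this torch build was compiled against
print(torch.cuda.is_available())  # sanity check that the GPU is visible
```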
Best, Xingyi
Thanks for the clarification.
Hello, thanks for sharing the source code of this nice work!
I have tried the TAO training code (GTR_TAO_DR2101.yaml) but failed to complete full training due to an out-of-memory error. Memory usage seems to increase gradually during training until it reaches the limit. Since I am using an A6000 with 48G of GPU memory, that should be enough given your training spec (4x 32G V100 GPUs). Could you give me any ideas? My initial workaround is to reduce the video length from 8 to 2.
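For context, here is a minimal sketch of how I observe the gradual growth, logging CUDA memory every few iterations. The `model`, `data_loader`, and `optimizer` names are placeholders for the actual training objects, not names from this repo:

```python
import torch

def train_with_memory_log(model, data_loader, optimizer, log_every=100):
    """Training-loop skeleton that prints allocated/peak CUDA memory."""
    for it, batch in enumerate(data_loader):
        loss = model(batch)  # placeholder forward pass assumed to return a scalar loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if it % log_every == 0:
            alloc = torch.cuda.memory_allocated() / 2**30
            peak = torch.cuda.max_memory_allocated() / 2**30
            print(f"iter {it}: {alloc:.2f} GiB allocated, {peak:.2f} GiB peak")
```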
Moreover, I cannot find the move_tao_keyframes.py file. Could you please provide this file?
Thanks,