yihongXU / TransCenter

This is the official implementation of TransCenter (TPAMI). The code and pretrained models are now available here: https://gitlab.inria.fr/yixu/TransCenter_official.
https://team.inria.fr/robotlearn/transcenter-transformers-with-dense-queriesfor-multiple-object-tracking/

What is the GPU memory size during training, and how long did the training take? #16

Closed: Bian-666 closed this issue 1 year ago

Bian-666 commented 1 year ago

I'm interested in your work, but I only have a single RTX 3090 machine.

yihongXU commented 1 year ago

Hi, thank you for your interest in our work! With the efficient version, you can train the model on an RTX 3090 with a batch size of 2; we used an RTX Titan with the same memory size and it worked (setting torch.backends.cudnn.benchmark=True can further reduce the memory footprint on some GPUs). If you want a bigger batch size, try gradient accumulation, as sketched below.
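
A minimal sketch of the gradient-accumulation idea, not TransCenter's actual training loop: the linear model, optimizer, and random tensors below are placeholders for the real tracker, optimizer, and dataloader set up by the repo's scripts. The trick is to scale each loss by the number of accumulation steps and only step the optimizer once per virtual batch:

```python
import torch
from torch import nn

torch.backends.cudnn.benchmark = True  # let cuDNN autotune kernels for fixed input shapes

# Placeholders standing in for TransCenter's model, optimizer, and dataloader.
model = nn.Linear(10, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.MSELoss()

accum_steps = 4  # effective batch = per-step batch (2 fits an RTX 3090) * accum_steps
optimizer.zero_grad()
for step in range(100):  # stands in for iterating the dataloader
    x, y = torch.randn(2, 10), torch.randn(2, 1)
    loss = criterion(model(x), y)
    (loss / accum_steps).backward()  # scale so gradients average over the virtual batch
    if (step + 1) % accum_steps == 0:
        optimizer.step()             # one weight update per virtual batch of 8
        optimizer.zero_grad()
```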

Another solution is to use a different PVTv2 backbone. We used B2 for TransCenter(-Dual) and B0 for TransCenter-Lite; the other variants from B0 to B5 are also options.
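
For a rough sense of the size gap between the variants, here is an illustration using timm's PVTv2 implementations; this is an assumption for comparison only, since TransCenter bundles its own PVTv2 code and its configs are the authoritative way to switch backbones:

```python
import timm

# Compare parameter counts of the PVTv2 variants mentioned above
# (B0 is the lightest, B5 the heaviest).
for name in ["pvt_v2_b0", "pvt_v2_b2", "pvt_v2_b5"]:
    m = timm.create_model(name, pretrained=False)
    n_params = sum(p.numel() for p in m.parameters())
    print(f"{name}: {n_params / 1e6:.1f}M parameters")
```

A smaller backbone trades some accuracy for a lower memory footprint, which is why the Lite model uses B0.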

Have fun!

Bian-666 commented 1 year ago

Thank you for your prompt reply, and thanks for your great work!