XiYe20 / VPTR

The repository for the paper "VPTR: Efficient Transformers for Video Prediction"
MIT License

Graphics card #14

Open wenyufeng936 opened 1 day ago

wenyufeng936 commented 1 day ago

[image] Hello, the paper mentions that an RTX 3090 was used. Why does running on a V100-32G GPU report insufficient GPU memory? Were the experiments in this paper run on a 3090?

XiYe20 commented 1 day ago

Hi, thank you very much for your interest in our work. To clarify: the specific experiment for measuring inference/training time was conducted on an RTX 3090 GPU with a small batch size. If you are trying to train the model, the autoencoder can be trained on a single RTX 3090 in a reasonable amount of time. For the Transformer predictor, we recommend using the DDP training script and setting the batch size according to the number of GPUs and the memory you have; four V100-32G GPUs should be sufficient. A minimal sketch of this kind of DDP setup is shown below.
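
As an illustration only, here is a minimal PyTorch DDP sketch showing the pattern the reply describes: each process trains on one GPU with a per-GPU batch size, so the effective batch size scales with the number of GPUs and can be reduced when each GPU has less memory (e.g. V100-32G instead of RTX 3090). The model, dataset, and hyperparameters below are placeholders, not the actual VPTR classes or the repo's training script.

```python
# Hypothetical DDP training sketch, launched with torchrun, which sets
# RANK / LOCAL_RANK / WORLD_SIZE in the environment. The Linear model and
# random dataset are stand-ins for the VPTR Transformer predictor and data.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, TensorDataset
from torch.utils.data.distributed import DistributedSampler


def main():
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model wrapped in DDP; gradients are synced across GPUs.
    model = torch.nn.Linear(512, 512).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])

    # Per-GPU batch size: the effective batch is batch_per_gpu * WORLD_SIZE,
    # so shrink this value on GPUs with less memory to avoid OOM errors.
    batch_per_gpu = 4
    dataset = TensorDataset(torch.randn(1024, 512))
    sampler = DistributedSampler(dataset)  # shards the data across ranks
    loader = DataLoader(dataset, batch_size=batch_per_gpu, sampler=sampler)

    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    for (x,) in loader:
        x = x.cuda(local_rank, non_blocking=True)
        loss = model(x).pow(2).mean()  # dummy loss for the sketch
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Launched across four GPUs with, for example, `torchrun --nproc_per_node=4 train_ddp_sketch.py` (the script name is hypothetical), this gives an effective batch size of 16 while keeping only 4 samples per GPU in memory.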