Closed GeoffreyWu8 closed 2 years ago
Hi!
We used one V100 32 GB GPU for testing, but the model indeed takes about 13 GB of GPU memory, and most of that is just the checkpoint loaded onto the GPU (model weights, optimizer state, etc.). You can change the line
checkpoint = torch.load(config.resume)
to
checkpoint = torch.load(config.resume, map_location='cpu')
and GPU utilization drops to about 5 GB. I guess it should then work on a single 2080Ti.
Hi Shvedova, congratulations on your team's paper being accepted at CVPR. I have a small question: what hardware resources are needed to run your model and data? I have three 2080Ti GPUs and it still doesn't work. The error message is below: