Closed zhibotian closed 11 months ago
One possibility would be to lower h=4096 to a smaller value to reduce the total parameter count and get things to fit on your GPU. Another would be to train only the retrieval pipeline rather than the full retrieval + diffusion prior. We haven't tried adapting the code to run on anything below a single 40 GB A100 GPU.
Running the Reconstruction notebook needs around 28 GB of GPU RAM.
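To illustrate why lowering h helps: the parameter count of the h-by-h linear layers scales roughly with h², so halving h shrinks those layers to about a quarter of their size. A minimal PyTorch sketch (the `make_backbone` helper and block count here are hypothetical stand-ins, not the repo's actual architecture):

```python
import torch.nn as nn

def count_params(module: nn.Module) -> int:
    """Total number of trainable parameters in a module."""
    return sum(p.numel() for p in module.parameters())

def make_backbone(h: int, n_blocks: int = 4) -> nn.Module:
    """Hypothetical MLP backbone built from h-by-h linear layers."""
    return nn.Sequential(*[nn.Linear(h, h) for _ in range(n_blocks)])

big = make_backbone(h=4096)    # each layer: 4096*4096 weights + 4096 biases
small = make_backbone(h=2048)  # each layer is ~4x smaller

print(count_params(big))    # 67,125,248
print(count_params(small))  # 16,785,408
```

Weights and optimizer state both shrink with the parameter count, so this can be enough to fit under 24 GB, at some cost to reconstruction quality.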
Thanks.
Hello, I ran into the following error:
`torch.cuda.OutOfMemoryError: CUDA out of memory.`
My GPU is an RTX 3090 with 24 GB of memory, but the error still occurs even after reducing the batch size to 2. How can I resolve this and run the code normally?