Closed awp4211 closed 4 years ago
When I run exec.py on the toy dataset generated by ./experiments/toy_exp, an error occurs: RuntimeError: CUDA error: out of memory

My machine has an NVIDIA TITAN V with 12 GB of memory. What are the hardware requirements for this code?

Hi awp4211, if you create and run the toy experiment with the given default settings, 12 GB of VRAM should be sufficient (in that case it runs on our machines with less than 12 GB). Have you tried enabling torch.backends.cudnn.benchmark = True at the beginning of training to optimize GPU-resource utilization?
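As a minimal sketch of the suggestion above, the cuDNN autotuner flag can be set once before training starts; the memory check shown here is an illustrative addition (not part of the repository's code) for confirming how much VRAM is actually free before the run:

```python
import torch

# Suggestion from the reply above: let cuDNN benchmark convolution
# algorithms and pick the fastest one for the given input shapes.
# Most beneficial when input sizes are fixed across iterations.
torch.backends.cudnn.benchmark = True

# Illustrative check (assumed helper usage, not from the repo):
# report free vs. total device memory before training begins.
if torch.cuda.is_available():
    free_bytes, total_bytes = torch.cuda.mem_get_info()
    print(f"free: {free_bytes / 1e9:.1f} GB / total: {total_bytes / 1e9:.1f} GB")
else:
    print("CUDA not available; running on CPU.")
```

Note that `benchmark = True` trades a short warm-up (algorithm search) for faster steady-state training; it does not by itself reduce peak memory, so an out-of-memory error may also require a smaller batch size or patch/crop size in the experiment config.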