EvelynFan / FaceFormer

[CVPR 2022] FaceFormer: Speech-Driven 3D Facial Animation with Transformers
MIT License
783 stars · 133 forks

How to solve the problem of GPU memory overflow? #2

Closed Owen1234560 closed 2 years ago

Owen1234560 commented 2 years ago

When driving the animation with a long audio clip, I found that GPU memory keeps rising. My GPU has a maximum of 8 GB, the audio is 23 s long, and the GPU memory is not enough.

EvelynFan commented 2 years ago

> When driving the animation with a long audio clip, I found that GPU memory keeps rising. My GPU has a maximum of 8 GB, the audio is 23 s long, and the GPU memory is not enough.

Self-attention uses memory that grows quadratically with the sequence length. When testing a sequence longer than 20 seconds, you can change the argument "--device" to "cpu" if your system RAM limit is higher than your GPU memory.
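The quadratic growth mentioned above can be made concrete with a rough back-of-the-envelope estimate. This is only a sketch: the frame rate, head count, and single-layer scope below are illustrative assumptions, not FaceFormer's exact configuration.

```python
def attention_matrix_bytes(seconds, fps=30, num_heads=4, bytes_per_el=4):
    """Rough size of one layer's self-attention score matrix.

    The scores form a T x T matrix per head (T = number of frames),
    so memory scales with the square of the clip duration.
    fps and num_heads are illustrative assumptions, not the model's
    exact settings.
    """
    t = seconds * fps
    return num_heads * t * t * bytes_per_el

# Going from 5 s to 23 s multiplies the score-matrix memory
# by (23 / 5)^2, i.e. roughly 21x.
print(attention_matrix_bytes(23) / attention_matrix_bytes(5))
```

The estimate ignores activations outside the attention scores, but it shows why a clip a few times longer can exhaust an 8 GB card even though shorter clips fit comfortably.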

Owen1234560 commented 2 years ago

Thanks. I added `torch.no_grad()` during inference, and the problem disappeared. You can also try this:

```
with torch.no_grad():
    test_model(args)
    render_sequence(args)
```

I did not find `torch.no_grad` in demo.py, but I found it in main.py.
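The fix works because `torch.no_grad()` stops autograd from building a graph and retaining intermediate activations, which is pure overhead at inference time. A minimal, self-contained sketch (the `Linear` model here is a stand-in, not FaceFormer's architecture):

```python
import torch

# Stand-in model for illustration; any nn.Module behaves the same way.
model = torch.nn.Linear(4, 4)
x = torch.randn(2, 4)

with torch.no_grad():
    y = model(x)

# No graph was built, so activations are freed immediately instead of
# being kept around for a backward pass.
print(y.requires_grad)  # False
```

On recent PyTorch versions, `torch.inference_mode()` is a slightly stricter alternative with the same memory benefit.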