OS : Ubuntu 18.04 LTS (WSL2)
CUDA : 11.3
PyTorch : 1.12.1
GPU : RTX 3090 Ti
I've had no issues training other models like neus-facto, but when I try bakedsdf or bakedsdf-mlp, PyTorch reserves almost every byte of my GPU memory and then runs out. Is this an actual bug, or do these models require more than 24 GB of VRAM?