NEONFIVE closed this issue 2 years ago
Thanks!
Unfortunately, the released version of the code is very memory-hungry. We run most of our tests on RTX A6000 cards (48 GB) or V100 cards on the servers (32 GB). All configs in the codebase run without issues on those cards. On mid-range cards with 12-24 GB of memory, I would recommend reducing the batch size and/or the rendering resolution.
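To make the batch-size advice concrete, here is a rough back-of-envelope sketch, assuming (purely as an illustration, the real code's memory profile is not documented here) that memory use grows roughly linearly with batch size on top of a fixed cost for weights and the CUDA context. `base_mem_gb` and `per_sample_gb` are made-up numbers you would calibrate for your own setup:

```python
# Hypothetical helper: pick the largest batch size whose estimated
# footprint fits the GPU's memory. The linear model and the default
# constants are assumptions for illustration, not measured values.

def fit_batch_size(gpu_mem_gb, base_mem_gb=6.0, per_sample_gb=2.6, max_batch=16):
    """Return the largest batch size estimated to fit in gpu_mem_gb.

    base_mem_gb   -- assumed fixed cost (model weights, CUDA context)
    per_sample_gb -- assumed memory cost per batch element
    """
    budget = gpu_mem_gb - base_mem_gb
    batch = int(budget // per_sample_gb)
    return max(1, min(batch, max_batch))

# Under these assumed constants, a 48 GB A6000 fits the full batch,
# while a 10 GB 3080 is pushed down to a batch of 1:
print(fit_batch_size(48))  # -> 16
print(fit_batch_size(10))  # -> 1
```

Lowering the rendering resolution shrinks `per_sample_gb` in this model, which is why it is the other main knob on mid-range cards.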
Thanks for the answers.
First, fantastic work.
Compute time on my 3080 is slow, and I get many CUDA out-of-memory errors.
Do you have any GPU recommendations here? Would an RTX A6000 help with the out-of-memory issues and speed up the process?