Open aptrn opened 1 year ago
I ran out of memory running it on Google Colab NVIDIA Tesla T4s after about 7 epochs -- so I'm switching to A100s with high memory to see if that works.
It seems that in WSL the error message is not always explicit (most probably an OOM in your case). But you can check the GPU memory usage in Task Manager.
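You can also watch GPU memory from the WSL side with `nvidia-smi`'s CSV query mode and confirm whether the GPU is nearly full when the crash happens. A minimal sketch, assuming the CSV output of `nvidia-smi --query-gpu=name,memory.used,memory.total --format=csv,noheader,nounits`; the `parse_gpu_memory` helper, the 95% threshold, and the sample line are my own illustrations, not from the reporter's machine:

```python
# Sketch: parse nvidia-smi CSV query output to spot a near-OOM GPU.
# Capture the input in WSL with:
#   nvidia-smi --query-gpu=name,memory.used,memory.total --format=csv,noheader,nounits

def parse_gpu_memory(csv_output: str) -> list[dict]:
    """Parse lines like 'NVIDIA GeForce RTX 3070 Ti, 7856, 8192' (MiB values)."""
    gpus = []
    for line in csv_output.strip().splitlines():
        name, used, total = [field.strip() for field in line.split(",")]
        used_mib, total_mib = int(used), int(total)
        gpus.append({
            "name": name,
            "used_mib": used_mib,
            "total_mib": total_mib,
            # Heuristic: above ~95% usage an "unknown error" is very likely an OOM.
            "near_oom": used_mib / total_mib > 0.95,
        })
    return gpus

# Made-up sample line for illustration:
sample = "NVIDIA GeForce RTX 3070 Ti, 7856, 8192"
print(parse_gpu_memory(sample))
```

Running this in a loop (or `nvidia-smi -l 1` directly) while training starts should show whether memory climbs to the limit right before the error.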
Hello,
I'm trying to run stable-dreamfusion on Windows via Docker using WSL2, running on an RTX 3070 Ti (8 GB). The issue is that running any script without the "--test" flag, I always get a "RuntimeError: CUDA error: unknown error".
Here's the output of `nvidia-smi`:

This is the command I used to test:
CUDA_LAUNCH_BLOCKING=1 python3 main.py --text "a hamburger" --workspace trial --fp16 --save_mesh
And here's the output:
I've read that I need 12 GB of VRAM to use this, but I wanted to try it anyway. The error doesn't seem to explicitly say that I need more VRAM, so I'm asking here for a double check by more expert eyes than mine.