Closed davidvfx07 closed 1 year ago
I think I have solved the issue! The problem was indeed VRAM limitations: with a flag combination that kept usage below 12GB of VRAM, the error no longer occurred. This is still strange, because my 3080 mobile has 16GB of GDDR6, so I'm not sure why I had to cap VRAM usage at only 12GB.
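For anyone hitting the same limit, a sketch of the kind of memory-reducing flag combination meant here (the flags are from the diffusers `train_dreambooth.py` example script; the model name, paths, and prompt below are placeholders, and the exact combination that worked for me may differ):

```shell
# Hypothetical invocation — data/output paths and prompt are placeholders.
# --train_batch_size=1, --gradient_checkpointing, --use_8bit_adam, and
# --mixed_precision=fp16 each trade some speed for lower VRAM usage.
accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" \
  --instance_data_dir="./instance_images" \
  --output_dir="./dreambooth_out" \
  --instance_prompt="a photo of sks dog" \
  --resolution=512 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=1 \
  --gradient_checkpointing \
  --use_8bit_adam \
  --mixed_precision="fp16" \
  --max_train_steps=400
```

Note that `--use_8bit_adam` requires the `bitsandbytes` package to be installed.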
Describe the bug
When I run `accelerate launch train_dreambooth.py`, I get past the "Caching latents" step, but immediately after, when steps are at 0%, I get this CUDA error and I don't know why. My GPU doesn't seem to be out of memory (as can be seen when running `nvidia-smi`), my torch and cudatoolkit are up to date, and my xformers is up to date as well. Here is the error:
Reproduction
Running `accelerate launch train_dreambooth.py`
Logs
No response
System Info
Windows 11 Pro
RTX 3080 mobile 16GB
Python 3.8.13
`conda env export`
`diffusers-cli env`
`nvidia-smi`
NVIDIA-SMI 522.30 Driver Version: 522.30 CUDA Version: 11.8