Closed wuutiing closed 6 months ago
Dear wuutiing,
The fp32 optimization requires at least a V100 GPU with 32GB of memory.
To run inference on a 3090, you can use fp16 by setting state.mprec='fp16':
python svgdreamer.py x=iconography skip_sive=True "prompt='Sydney opera house. oil painting. by Van Gogh'" result_path='./logs/SydneyOperaHouse' state.mprec='fp16' multirun=True
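The memory saving from mprec='fp16' comes from half-precision storage: each fp16 value takes 2 bytes instead of fp32's 4, roughly halving the model's parameter footprint. A minimal PyTorch sketch of this (illustrative only, not SVGDreamer's own code; the layer size is arbitrary):

```python
import torch

# A stand-in module; any set of parameters shows the same ratio.
model = torch.nn.Linear(4096, 4096)

# Total bytes occupied by the parameters in fp32.
fp32_bytes = sum(p.numel() * p.element_size() for p in model.parameters())

# Cast the same parameters to fp16 and measure again.
model = model.half()
fp16_bytes = sum(p.numel() * p.element_size() for p in model.parameters())

ratio = fp32_bytes / fp16_bytes  # → 2.0: fp16 halves parameter memory
```

This is why a pipeline that OOMs in fp32 on a 24GB card can often fit once mixed/half precision is enabled.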
Best Regards, Ximing
I tried running on a 3090 with 24GB of VRAM but got a CUDA OOM error. I'd like to know the minimum hardware (GPU) requirements.