24GB VRAM was suggested.
Edit: Just finished a run of inference.py, which peaked at 20.6GB VRAM.
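If you want to reproduce this number, PyTorch's allocator counters report peak usage. A minimal sketch of the measurement, where the allocation line is just a stand-in for the actual sampling call:

```python
import torch

torch.cuda.reset_peak_memory_stats()

# ... run inference here; this allocation is only a stand-in workload ...
x = torch.empty(1024, 1024, 256, device="cuda")  # ~1 GB in FP32

peak_gb = torch.cuda.max_memory_allocated() / 1024**3
print(f"peak VRAM: {peak_gb:.2f} GB")
```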
So is there no difference between inference.py and demo.py in GPU memory requirements?
How can I reduce the batch_size so it can run on a smaller amount of GPU memory, with the tradeoff of slightly longer processing time? (I cannot find a batch_size configuration in anytext_sd15.yaml.)
Hi,
in inference.py, you can reduce image_count in params:

```python
params = {
    "show_debug": True,
    "image_count": 2,  # <---- batch size
    "ddim_steps": 20,
}
```
In demo.py, the Image Count is under Parameters.
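To see why image_count acts as the batch size: in an SD1.5-style pipeline, the latents are allocated with image_count as the leading dimension, so activation memory scales roughly linearly with it. A minimal illustrative sketch (the shapes are typical for SD1.5, not taken from AnyText's code):

```python
import torch

image_count = 2  # the "batch size" knob from params
# Typical SD1.5 latent: 4 channels, 64x64 for a 512px image (illustrative).
latents = torch.randn(image_count, 4, 64, 64, device="cuda")

# Every intermediate activation in the UNet carries this leading dim,
# so halving image_count roughly halves activation memory.
print(latents.shape)  # torch.Size([2, 4, 64, 64])
```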
BTW, I am trying FP16 inference, and it seems to make little difference in performance. The GPU memory requirement could possibly be reduced to below 10GB. I will push the updated code later.
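For context, the two common ways to do this in PyTorch are casting the weights with .half() or running under torch.autocast. A minimal sketch with a placeholder model, not the actual change in the commit:

```python
import torch
import torch.nn as nn

# Placeholder model standing in for the diffusion UNet (hypothetical).
model = nn.Sequential(nn.Linear(64, 64), nn.GELU(), nn.Linear(64, 64)).cuda()
x = torch.randn(2, 64, device="cuda")  # batch of 2, like image_count=2

# Option 1: cast weights to FP16, halving weight memory.
model = model.half()
with torch.no_grad():
    y = model(x.half())

# Option 2: keep FP32 weights but run the math in FP16 via autocast.
model = model.float()
with torch.no_grad(), torch.autocast(device_type="cuda", dtype=torch.float16):
    y = model(x)

print(y.dtype)  # torch.float16 under autocast
```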
Cool, I cannot wait to see the update 😉
Thank you @tyxsspa, I fetched the FP16 commit, and now I can successfully run demo.py on my 8GB GPU. 🥰🙏
I tried to run inference.py but failed with 'CUDA out of memory'. I'm using an 8GB RTX 3080 Ti; does it work?