tyxsspa / AnyText

Official implementation of the paper "AnyText: Multilingual Visual Text Generation and Editing"
Apache License 2.0

How much GPU memory is needed to run inference.py? #17

Closed: boringtaskai closed this issue 10 months ago

boringtaskai commented 10 months ago

I tried to run inference, but it failed with 'CUDA out of memory'. I'm using an 8GB RTX 3080 Ti; will it work?

yhyu13 commented 10 months ago

24GB of VRAM was suggested.

Edit: just finished a run of inference.py, which peaked at 20.6GB of VRAM.
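
For anyone who wants to reproduce that kind of measurement, here is a minimal generic sketch (not code from this repo) using PyTorch's built-in peak-memory counters:

import torch

# Minimal sketch, not AnyText code: measure peak VRAM around any workload.
# max_memory_allocated() tracks PyTorch's caching allocator, so it can read
# slightly below what nvidia-smi shows for the whole process.
torch.cuda.reset_peak_memory_stats()

x = torch.randn(8, 4, 512, 512, device="cuda")  # stand-in for a real sampling call

peak_gb = torch.cuda.max_memory_allocated() / 1024**3
print(f"peak VRAM: {peak_gb:.2f} GB")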

boringtaskai commented 10 months ago

So there is no difference between inference.py and demo.py in GPU memory requirements?

How can I reduce the batch_size so it runs in a smaller amount of GPU memory, trading off a slightly longer processing time? (I cannot find a batch_size setting in anytext_sd15.yaml.)

tyxsspa commented 10 months ago

> So there is no difference between inference.py and demo.py in GPU memory requirements?
>
> How can I reduce the batch_size so it runs in a smaller amount of GPU memory, trading off a slightly longer processing time? (I cannot find a batch_size setting in anytext_sd15.yaml.)

Hi, in inference.py you can reduce image_count in params:

params = {
    "show_debug": True,
    "image_count": 2,    # <---- batch size
    "ddim_steps": 20,
}

In demo.py, the Image Count setting is under Parameters.
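
If you want to automate that choice, here is a hypothetical sketch: pick_image_count is not part of inference.py, and the 10GB-per-image default is only a placeholder you would calibrate for your own GPU and settings.

import torch

def pick_image_count(per_image_gb: float = 10.0, max_count: int = 4) -> int:
    # Hypothetical helper, not part of inference.py. per_image_gb is a rough
    # per-image VRAM estimate; calibrate it on your own hardware.
    free_bytes, _total = torch.cuda.mem_get_info()
    free_gb = free_bytes / 1024**3
    return max(1, min(max_count, int(free_gb // per_image_gb)))

params = {
    "show_debug": True,
    "image_count": pick_image_count(),  # batch size scaled to free VRAM
    "ddim_steps": 20,
}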

BTW, I am trying FP16 inference and it seems to make little difference to the results. It should be possible to reduce the GPU memory requirement to below 10GB. I will push the updated code later.
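
For intuition about why FP16 helps, here is a generic PyTorch sketch (not the actual AnyText patch): float32 weights take 4 bytes each and float16 weights take 2, so casting halves the memory held by model parameters.

import torch.nn as nn

# Generic sketch, not the AnyText FP16 code: compare parameter memory
# before and after casting a layer to half precision.
model = nn.Linear(4096, 4096)
fp32_mb = sum(p.numel() * p.element_size() for p in model.parameters()) / 1e6
model = model.half()
fp16_mb = sum(p.numel() * p.element_size() for p in model.parameters()) / 1e6
print(f"fp32: {fp32_mb:.1f} MB -> fp16: {fp16_mb:.1f} MB")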

boringtaskai commented 10 months ago

> BTW, I am trying FP16 inference and it seems to make little difference to the results. It should be possible to reduce the GPU memory requirement to below 10GB. I will push the updated code later.

Cool, I cannot wait to see the update 😉

boringtaskai commented 10 months ago

Thank you @tyxsspa, I fetched the FP16 commit, and now demo.py runs successfully on my 8GB GPU. 🥰🙏