Fanghua-Yu / SUPIR

SUPIR aims at developing Practical Algorithms for Photo-Realistic Image Restoration In the Wild. Our new online demo is also released at suppixel.ai.
http://supir.xpixel.group/

Possibility of decreasing VRAM usage? #28

Open FluffyDiscord opened 8 months ago

FluffyDiscord commented 8 months ago

Would it be possible to decrease the VRAM usage using chunking or other split-batch processing methods? It would be nice to be able to run these models on consumer-grade graphics cards with 16-24 GB of VRAM.

FurkanGozukara commented 8 months ago

100%. Currently it requires 32 GB of VRAM. I made a tutorial editing it, but on RunPod with an A6000 GPU.

Fanghua-Yu commented 8 months ago

Try these options to reduce VRAM cost:

--loading_half_params --use_tile_vae --load_8bit_llava

This reduces the VRAM cost to ~12 GB for diffusion and ~16 GB for LLaVA. It seems to work, although I haven't tested it systematically yet :)
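
For anyone wondering why those flags help, here is a rough, self-contained sketch (not SUPIR's actual code; TinyDecoder, decode_tiled, and all shapes are made up) of how half-precision weights and tiled VAE decoding cut peak VRAM: the weights take half the bytes, and only one tile's activations are alive at a time.

```python
# Illustrative sketch only, not SUPIR's code: TinyDecoder and decode_tiled are
# made-up stand-ins showing why half precision and tiled VAE decoding save VRAM.
import torch
import torch.nn as nn

class TinyDecoder(nn.Module):
    """Stand-in for a VAE decoder: upsamples a 4-channel latent to a 3-channel image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 64, 3, padding=1), nn.SiLU(),
            nn.Upsample(scale_factor=8, mode="nearest"),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, z):
        return self.net(z)

def decode_tiled(decoder, latent, tile=64):
    """Decode the latent in spatial tiles so only one tile's activations exist at a
    time; real tiled-VAE code also overlaps and blends tiles to hide seams."""
    _, _, h, w = latent.shape
    rows = []
    for y in range(0, h, tile):
        cols = []
        with torch.no_grad():
            for x in range(0, w, tile):
                cols.append(decoder(latent[:, :, y:y + tile, x:x + tile]))
        rows.append(torch.cat(cols, dim=3))
    return torch.cat(rows, dim=2)

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32  # fp16 weights ~ --loading_half_params
decoder = TinyDecoder().to(device=device, dtype=dtype)
latent = torch.randn(1, 4, 128, 128, device=device, dtype=dtype)
image = decode_tiled(decoder, latent)                          # tiled decode ~ --use_tile_vae
print(image.shape)  # torch.Size([1, 3, 1024, 1024])
```

Presumably --load_8bit_llava does the analogous thing for the LLaVA captioner by loading its weights in 8-bit, which would match the ~16 GB figure above.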

FluffyDiscord commented 8 months ago

I have tried the parameters above and my PC froze. I'm not sure whether I don't have enough RAM (32 GB) or VRAM (24 GB). What are the RAM requirements?

Fanghua-Yu commented 8 months ago

Loading with --loading_half_params takes about 3-5 minutes on my server, so I'm not sure if you are stuck there. FYI, these settings cost about 40 GB of RAM; maybe you can try --no_llava first.
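
A freeze with 32 GB of RAM may just be swapping rather than a crash. One generic way to tell swapping apart from the slow (3-5 minute) model load is to log RAM and peak VRAM at a few checkpoints; the helper below is a sketch using only psutil and torch, not something from this repo.

```python
# Generic memory-logging helper (not part of SUPIR): call it at a few points in
# the script to see whether RAM is exhausted (swapping/freeze) or loading is just slow.
import psutil  # third-party: pip install psutil
import torch

def report_memory(tag: str) -> None:
    ram = psutil.virtual_memory()
    line = f"[{tag}] RAM used: {ram.used / 2**30:.1f} / {ram.total / 2**30:.1f} GiB"
    if torch.cuda.is_available():
        vram = torch.cuda.max_memory_allocated() / 2**30
        line += f", peak VRAM allocated: {vram:.1f} GiB"
    print(line)

report_memory("after model load")  # example checkpoint
```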

FluffyDiscord commented 8 months ago

Oh, so I just need to get more RAM. That's trivial compared to adding more VRAM :)

FurkanGozukara commented 8 months ago

> Loading with --loading_half_params takes about 3-5 minutes on my server, so I'm not sure if you are stuck there. FYI, these settings cost about 40 GB of RAM; maybe you can try --no_llava first.

I already removed LLaVA :D

Also, the Gradio demo I prepared already uses half-precision weights.

I also just published the tutorial video:

63.) Free - Local - PC - RunPod

SUPIR: New SOTA Open Source Image Upscaler & Enhancer Model Better Than Magnific & Topaz AI Tutorial

[screenshots: the Gradio demo running locally at 127.0.0.1:7860]

zelenooki87 commented 8 months ago

Do not share your Patreon stuff. I do not want to subscribe. Please distribute it for free, like the authors of this project do!

FurkanGozukara commented 8 months ago

The new update is just mind-blowing.

It now works even at 12 GB; I tested on my local RTX 3060.

Thank you so much, authors, for this model. It is many times better than the very expensive Magnific AI.

I updated my scripts to V7 for the new VRAM optimizations.

1-click install for Windows and RunPod, and thus Linux.

I fixed all the dependency issues; it works on Python 3.10 with a venv.

https://www.patreon.com/posts/supir-1-click-99176057


zelenooki87 commented 8 months ago

> Loading with --loading_half_params takes about 3-5 minutes on my server, so I'm not sure if you are stuck there. FYI, these settings cost about 40 GB of RAM; maybe you can try --no_llava first.

I am able to run it with 32 GB of RAM and 24 GB of VRAM on Windows. Could you update the test.py file for batch processing with --loading_half_params --use_tile_vae --no_llava? Thanks
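
For illustration, a minimal sketch of what such a batch driver could look like; --img_dir, --save_dir, and restore_image() are hypothetical placeholders, not the repo's actual test.py interface.

```python
# Hypothetical batch-processing skeleton; the real test.py may use different
# argument names and model setup. restore_image() is a placeholder for the
# actual SUPIR restoration call.
import argparse
from pathlib import Path

def restore_image(src: Path, out_dir: Path) -> None:
    # Placeholder: load the image, run the SUPIR pipeline, save the result.
    print(f"would restore {src.name} -> {out_dir / src.name}")

def main() -> None:
    parser = argparse.ArgumentParser()
    parser.add_argument("--img_dir", type=Path, required=True)    # assumed input flag
    parser.add_argument("--save_dir", type=Path, required=True)   # assumed output flag
    parser.add_argument("--loading_half_params", action="store_true")
    parser.add_argument("--use_tile_vae", action="store_true")
    parser.add_argument("--no_llava", action="store_true")
    args = parser.parse_args()

    args.save_dir.mkdir(parents=True, exist_ok=True)
    exts = {".png", ".jpg", ".jpeg", ".webp"}
    for path in sorted(p for p in args.img_dir.iterdir() if p.suffix.lower() in exts):
        restore_image(path, args.save_dir)

if __name__ == "__main__":
    main()
```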

nehemiahgo commented 8 months ago

> > Loading with --loading_half_params takes about 3-5 minutes on my server, so I'm not sure if you are stuck there. FYI, these settings cost about 40 GB of RAM; maybe you can try --no_llava first.
>
> I am able to run it with 32 GB of RAM and 24 GB of VRAM on Windows. Could you update the test.py file for batch processing with --loading_half_params --use_tile_vae --no_llava? Thanks

So @zelenooki87, you are currently running it successfully with your hardware specs and those two added parameters only?