FluffyDiscord opened 8 months ago
100%. Currently it requires 32 GB of VRAM. I made a tutorial editing it, but on RunPod with an A6000 GPU.
Try these options to reduce the VRAM cost:
--loading_half_params --use_tile_vae --load_8bit_llava
This reduces VRAM usage to ~12 GB for diffusion and ~16 GB for LLaVA. It seems to work, although I haven't tested it systematically yet :)
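For anyone unsure where those flags go, a minimal launch sketch; the gradio_demo.py entry-point name is my assumption, the flags are the ones quoted above:

```
python gradio_demo.py --loading_half_params --use_tile_vae --load_8bit_llava
```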
I have tried the parameters above and my PC froze. Not sure if I don't have enough RAM (32 GB) or VRAM (24 GB). What are the RAM requirements?
--loading_half_params takes about 3-5 min on my server, so you may just be stuck there. FYI, these settings cost about 40 GB of RAM; maybe try --no_llava first.
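The corresponding lower-RAM launch, under the same assumed gradio_demo.py entry point; --no_llava skips loading the caption model entirely, so its ~16 GB cost disappears:

```
python gradio_demo.py --loading_half_params --use_tile_vae --no_llava
```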
Oh, so I just need to get more RAM. That's trivial compared to adding more VRAM :)
I already removed the LLaVA :D
Also, the Gradio demo is already using half weights; here is the Gradio demo I prepared.
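For anyone wondering what "half weights" means in practice: it is just casting the model to fp16, which roughly halves the weight memory. A minimal PyTorch illustration of the concept, using a stand-in model rather than SUPIR's actual loading code:

```python
import torch
from torchvision.models import resnet50  # stand-in model for illustration

model = resnet50().half().cuda()  # fp16 weights: roughly half the VRAM of fp32

# Inputs must match the weight dtype.
x = torch.randn(1, 3, 224, 224, dtype=torch.float16, device="cuda")
with torch.no_grad():
    y = model(x)
print(y.dtype, y.shape)  # torch.float16 torch.Size([1, 1000])
```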
Also, I just published the tutorial video too:
63.) Free - Local - PC - RunPod
SUPIR: New SOTA Open Source Image Upscaler & Enhancer Model Better Than Magnific & Topaz AI Tutorial
Please do not share your Patreon stuff; I do not want to subscribe. Please distribute it for free, as the authors of this project do!
The new update is just mind-blowing.
It now works even at 12 GB; I tested it on my local RTX 3060.
Thank you so much, authors, for this model. It is many times better than that very expensive Magnific AI.
I updated my scripts to V7 for the new VRAM optimizations.
1-click install for Windows and RunPod, and thus Linux.
I fixed all the dependency issues; it works on Python 3.10 with a venv.
I am able to run it with 32 GB RAM and 24 GB VRAM on Windows. Could you update the test.py file for batch processing with --loading_half_params --use_tile_vae --no_llava? Thanks.
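In the meantime, a rough sketch of what such a batch loop over a folder could look like. The process_image() helper and the flag names mirroring the CLI options are my assumptions for illustration, not the repo's actual test.py API; the placeholder just returns the input so the script runs end-to-end:

```python
import argparse
from pathlib import Path

from PIL import Image


def process_image(img: Image.Image, half_params: bool, tile_vae: bool) -> Image.Image:
    # Hypothetical stand-in for the SUPIR restoration call; identity placeholder.
    return img


def main() -> None:
    parser = argparse.ArgumentParser()
    parser.add_argument("--img_dir", required=True)
    parser.add_argument("--save_dir", required=True)
    parser.add_argument("--loading_half_params", action="store_true")
    parser.add_argument("--use_tile_vae", action="store_true")
    parser.add_argument("--no_llava", action="store_true")  # skip loading the caption model
    args = parser.parse_args()

    save_dir = Path(args.save_dir)
    save_dir.mkdir(parents=True, exist_ok=True)

    # Process one image at a time so peak VRAM stays at the single-image cost.
    for path in sorted(Path(args.img_dir).iterdir()):
        if path.suffix.lower() not in {".png", ".jpg", ".jpeg", ".webp"}:
            continue
        img = Image.open(path).convert("RGB")
        out = process_image(img, args.loading_half_params, args.use_tile_vae)
        out.save(save_dir / f"{path.stem}_upscaled.png")


if __name__ == "__main__":
    main()
```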
So @zelenooki87, you are currently running it successfully with your hardware specs and just those added parameters?
Would it be possible to decrease the VRAM usage using chunking or other split-batch processing methods? It would be nice to be able to run these models on consumer-grade graphics cards with 16-24 GB of VRAM.
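The --use_tile_vae flag mentioned above is exactly this idea applied to the VAE stage. As a generic illustration of chunking (not SUPIR's implementation), here is a sketch that splits an image into overlapping tiles, processes each independently, and stitches the results, so peak memory scales with the tile size rather than the full image; model_fn is a placeholder for the real per-tile pass:

```python
from PIL import Image


def process_tiled(img, model_fn, tile=512, overlap=32):
    """Process an image in overlapping tiles so peak memory scales with tile size.

    model_fn: any function mapping a PIL tile -> PIL tile of the same size.
    Overlap regions are simply overwritten here; real implementations blend them
    to hide seams.
    """
    w, h = img.size
    out = Image.new("RGB", (w, h))
    step = tile - overlap
    for top in range(0, h, step):
        for left in range(0, w, step):
            box = (left, top, min(left + tile, w), min(top + tile, h))
            out.paste(model_fn(img.crop(box)), box[:2])
    return out


# Example: identity "model" as a stand-in for the real upscaler pass.
result = process_tiled(Image.new("RGB", (2048, 1536)), lambda t: t)
```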