lllyasviel / Omost

Your image is almost there!
Apache License 2.0

FP32 causes 8GB vram gpu out of memory error, RunDiffusion/Juggernaut-X-v10 FP16 is not supported #63

Open xhoxye opened 3 weeks ago

xhoxye commented 3 weeks ago

```python
# SDXL
sdxl_name = 'RunDiffusion/Juggernaut-X-v10'  # FP16 is not supported
# sdxl_name = 'SG161222/RealVisXL_V4.0'
# sdxl_name = 'stabilityai/stable-diffusion-xl-base-1.0'
```


The FP32 weights can be downloaded; however, they cause an out-of-memory error on a GPU with 8 GB of VRAM.
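For rough intuition on why the FP32 weights overflow an 8 GB card, here is a back-of-the-envelope sketch (the ~3.5B combined parameter count for the SDXL UNet, text encoders, and VAE is an approximation, not an exact figure): halving the precision roughly halves the weight footprint, which is why an FP16 variant matters on this hardware.

```python
# Back-of-the-envelope VRAM estimate for the SDXL weights alone; activations
# and CUDA runtime overhead come on top of this.
params = 3.5e9                    # approximate combined SDXL parameter count

fp32_gb = params * 4 / 1024**3    # 4 bytes/param -> ~13 GB, over an 8 GB budget
fp16_gb = params * 2 / 1024**3    # 2 bytes/param -> ~6.5 GB, fits (barely)

print(f"FP32: {fp32_gb:.1f} GB, FP16: {fp16_gb:.1f} GB")
```

Note that even when a repository ships only FP32 files, diffusers can cast the weights after loading by passing `torch_dtype=torch.float16` to `from_pretrained`, so the download being FP32 does not by itself force FP32 inference.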

Duemellon commented 3 weeks ago

I see the syntax you used to change models, but I'm not sure how to apply it to my own. All of my models are located elsewhere, in the A1111 folder. What is the syntax? Is there a concern with the file extension?

xhoxye commented 3 weeks ago

@Duemellon It's not a .safetensors file name; it's the Hugging Face repository name: https://huggingface.co/SG161222/RealVisXL_V4.0
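To make the distinction concrete, here is a small sketch (the helper name `resolve_model_source` is made up for illustration and is not part of Omost or diffusers): a bare name like `SG161222/RealVisXL_V4.0` is a Hub repo id that diffusers resolves with `from_pretrained()`, while an existing local `.safetensors` checkpoint would instead go through `from_single_file()`.

```python
import os

def resolve_model_source(name: str) -> str:
    """Illustrative helper: classify how a model reference would be loaded."""
    if name.endswith('.safetensors') and os.path.isfile(name):
        return 'from_single_file'   # a single local checkpoint file
    return 'from_pretrained'        # treated as a Hugging Face Hub repo id

print(resolve_model_source('SG161222/RealVisXL_V4.0'))  # from_pretrained
```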

Duemellon commented 3 weeks ago

So you can't point this to a local download of a file at this time? It can only download the model after you point it to the HF link?

xhoxye commented 3 weeks ago

If you want to load a local .safetensors file, you will need to modify more of the code.

Manul07 commented 3 weeks ago

I don't understand.

Duemellon commented 3 weeks ago

> If you want to load the local .safetensors file, you will need to modify more code

Yes, that's what would be needed. I have a terabyte or more of locally installed safetensors files that I could use instead of pointing to an HF directory. I'm not familiar with Python at all, so I can't make these changes myself, but that is a different topic than this thread at this point.

xhoxye commented 3 weeks ago
```python
import os

from diffusers import StableDiffusionXLImg2ImgPipeline
from transformers import CLIPTextModel

models_dir = os.path.join(os.getcwd(), 'models/checkpoints')
sdxl_name = 'RealVisXL_V4.0'
model_path = os.path.join(models_dir, sdxl_name + '.safetensors')

if os.path.isfile(model_path):
    temp_pipeline = StableDiffusionXLImg2ImgPipeline.from_single_file(model_path)

    tokenizer = temp_pipeline.tokenizer
    tokenizer_2 = temp_pipeline.tokenizer_2
    text_encoder = temp_pipeline.text_encoder
    vae = temp_pipeline.vae
    unet = temp_pipeline.unet

    # Re-wrap the second text encoder as a plain CLIPTextModel, copying the
    # trained weights over; constructing it from the config alone would leave
    # it randomly initialized.
    text_encoder_2 = CLIPTextModel(config=temp_pipeline.text_encoder_2.config)
    text_encoder_2.load_state_dict(temp_pipeline.text_encoder_2.state_dict(), strict=False)
else:
    raise FileNotFoundError(f"Model file {model_path} not found.")
```
xhoxye commented 3 weeks ago


xhoxye commented 3 weeks ago

I'll submit a PR later

xhoxye commented 3 weeks ago

"torch_dtype=torch.float32" It run incorrectly

xhoxye commented 3 weeks ago

https://github.com/lllyasviel/Omost/pull/81