lllyasviel / Omost

Your image is almost there!
Apache License 2.0
6.69k stars 401 forks

option to use local SDXL model file #80

Open dfl opened 3 weeks ago

dfl commented 3 weeks ago

Uses the `SDXL_MODELS_DIR` shell environment variable.
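For reference, reading that variable might look like the sketch below. `SDXL_MODELS_DIR` is the variable named in this PR; everything else (the fallback behavior, the print) is illustrative, not necessarily what the patch does.

```python
import os

# Sketch: read the models directory from the SDXL_MODELS_DIR shell
# environment variable; fall back to the Hugging Face download when
# it is not set (fallback behavior is an assumption, not the patch).
base_model_dir = os.environ.get("SDXL_MODELS_DIR")
use_local_model = base_model_dir is not None

if use_local_model:
    print(f"using local model dir: {base_model_dir}")
```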

shellddd commented 1 week ago

When I replace the model path with `base_model_dir = os.environ["D:\AGI\Stablle Diffusion\models\Stable-diffusion"]`, the terminal reports an error:

```
using local model file: D:\AGI\Stablle Diffusion\models\Stable-diffusionJuggernaut-X-RunDiffusion-NSFW.safetensors
Traceback (most recent call last):
  File "E:\AI\Omost\gradio_app.py", line 56, in <module>
    pipe = StableDiffusionXLPipeline.from_single_file(
  File "E:\AI\Omost\venv\lib\site-packages\huggingface_hub\utils\_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "E:\AI\Omost\venv\lib\site-packages\diffusers\loaders\single_file.py", line 371, in from_single_file
    checkpoint = load_single_file_checkpoint(
  File "E:\AI\Omost\venv\lib\site-packages\diffusers\loaders\single_file_utils.py", line 311, in load_single_file_checkpoint
    repo_id, weights_name = _extract_repo_id_and_weights_name(pretrained_model_link_or_path)
  File "E:\AI\Omost\venv\lib\site-packages\diffusers\loaders\single_file_utils.py", line 268, in _extract_repo_id_and_weights_name
    raise ValueError("Invalid 'pretrained_model_name_or_path' provided. Please set it to a valid URL.")
ValueError: Invalid 'pretrained_model_name_or_path' provided. Please set it to a valid URL.
```

When I replace the model path with:

```python
use_local_model = True
sdxl_name = "Juggernaut-X-RunDiffusion-NSFW"

if use_local_model:
    try:
        base_model_dir = "D:\AGI\Stablle Diffusion\models\Stable-diffusion"
```

starting up, the terminal reports the same error:

```
using local model file: D:\AGI\Stablle Diffusion\models\Stable-diffusionJuggernaut-X-RunDiffusion-NSFW.safetensors
Traceback (most recent call last):
  ...
ValueError: Invalid 'pretrained_model_name_or_path' provided. Please set it to a valid URL.
```

Could you tell me how to replace this path? Thank you

dfl commented 1 week ago

@shellddd does this file actually exist or is there a typo “Stablle Diffusion” (extra ‘l’) ?

shellddd commented 1 week ago

I tested it again. Yes, the model does exist; the path is what I copied directly, with no typos.

When I replace the path with `base_model_dir = os.environ["D:\AGI\Stablle Diffusion\models\Stable-diffusion"]`, it tells me that this path is not defined. I am not sure if I am doing it right?

dfl commented 1 week ago

I have a Mac, not Windows, but check here: https://stackoverflow.com/questions/2953834/how-should-i-write-a-windows-path-in-a-python-string-literal
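For what it's worth, the usual fixes from that thread are raw strings, doubled backslashes, or forward slashes. A quick sketch (the path is the one from this thread; whether it is the right path is a separate question):

```python
# Windows paths in Python string literals: a backslash starts an escape
# sequence ("\t" is a tab, "\n" a newline), so a plain copied path can
# silently change. These three forms all spell the same path safely.
p1 = r"D:\AGI\Stablle Diffusion\models\Stable-diffusion"     # raw string
p2 = "D:\\AGI\\Stablle Diffusion\\models\\Stable-diffusion"  # escaped backslashes
p3 = "D:/AGI/Stablle Diffusion/models/Stable-diffusion"      # forward slashes work on Windows too

assert p1 == p2
```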

shellddd commented 1 week ago

I replaced the path according to the Python usage:

```python
base_model_dir = os.environ
new_path = 'D:\AGI\Stablle Diffusion\models\Stable-diffusion'
base_model_dir['PATH'] = new_path
os.environ = base_model_dir
```

The terminal shows that the model has been correctly identified, but a new error is prompted:

```
using local model file: D:\AGI\Stablle Diffusion\models\Stable-diffusionJuggernaut-X-RunDiffusion-NSFW.safetensors
Traceback (most recent call last):
  ...
ValueError: Invalid 'pretrained_model_name_or_path' provided. Please set it to a valid URL.
```

homoluden commented 4 days ago

I was able to load local SDXL / Pony models. For those who get the "ValueError: Invalid 'pretrained_model_name_or_path' provided. Please set it to a valid URL." error, check `sdxl_name`: it should not contain the file extension (`.safetensors` etc.). BUT... with this patch I'm getting another compatibility error. The default model works OK; SDXL models in safetensors format don't (at least the two I tried).
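A sketch of the path handling this implies, using the names from this thread. Note that the error messages above print `Stable-diffusionJuggernaut-...` with the directory and model name run together, which suggests the path and filename are concatenated without a separator; `os.path.join` avoids that.

```python
import os

# Names from the thread; the join itself is an illustrative sketch,
# not necessarily how gradio_app.py builds the path.
base_model_dir = r"D:\AGI\Stablle Diffusion\models\Stable-diffusion"
sdxl_name = "Juggernaut-X-RunDiffusion-NSFW"  # no .safetensors extension here

# os.path.join supplies the separator that plain string
# concatenation omits.
model_path = os.path.join(base_model_dir, sdxl_name + ".safetensors")
```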

```
You shouldn't move a model that is dispatched using accelerate hooks.
Load to GPU: LlamaForCausalLM
The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
Setting `pad_token_id` to `eos_token_id`:128001 for open-end generation.
User stopped generation
Last assistant response is not valid canvas: Response does not contain codes!
The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
Setting `pad_token_id` to `eos_token_id`:128001 for open-end generation.
Automatically corrected [lightgoldenroyalloverde] -> [lightgoldenrodyellow].
Automatically corrected [papaywhrop] -> [papayawhip].
You shouldn't move a model that is dispatched using accelerate hooks.
Unload to CPU: LlamaForCausalLM
Load to GPU: CLIPTextModel
Load to GPU: CLIPTextModelWithProjection
Traceback (most recent call last):
  File "/media/dev/c8d75de9-08fe-4c66-9d34-357d8b7b4cd1/Omost/venv/lib/python3.10/site-packages/gradio/queueing.py", line 528, in process_events
    response = await route_utils.call_process_api(
  File "/media/dev/c8d75de9-08fe-4c66-9d34-357d8b7b4cd1/Omost/venv/lib/python3.10/site-packages/gradio/route_utils.py", line 270, in call_process_api
    output = await app.get_blocks().process_api(
  File "/media/dev/c8d75de9-08fe-4c66-9d34-357d8b7b4cd1/Omost/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1908, in process_api
    result = await self.call_function(
  File "/media/dev/c8d75de9-08fe-4c66-9d34-357d8b7b4cd1/Omost/venv/lib/python3.10/site-packages/gradio/blocks.py", line 1485, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/media/dev/c8d75de9-08fe-4c66-9d34-357d8b7b4cd1/Omost/venv/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/media/dev/c8d75de9-08fe-4c66-9d34-357d8b7b4cd1/Omost/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2177, in run_sync_in_worker_thread
    return await future
  File "/media/dev/c8d75de9-08fe-4c66-9d34-357d8b7b4cd1/Omost/venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 859, in run
    result = context.run(func, *args)
  File "/media/dev/c8d75de9-08fe-4c66-9d34-357d8b7b4cd1/Omost/venv/lib/python3.10/site-packages/gradio/utils.py", line 808, in wrapper
    response = f(*args, **kwargs)
  File "/media/dev/c8d75de9-08fe-4c66-9d34-357d8b7b4cd1/Omost/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/media/dev/c8d75de9-08fe-4c66-9d34-357d8b7b4cd1/Omost/gradio_app.py", line 226, in diffusion_fn
    positive_cond, positive_pooler, negative_cond, negative_pooler = pipeline.all_conds_from_canvas(canvas_outputs, negative_prompt)
  File "/media/dev/c8d75de9-08fe-4c66-9d34-357d8b7b4cd1/Omost/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/media/dev/c8d75de9-08fe-4c66-9d34-357d8b7b4cd1/Omost/lib_omost/pipeline.py", line 313, in all_conds_from_canvas
    negative_cond, negative_pooler = self.encode_cropped_prompt_77tokens(negative_prompt)
  File "/media/dev/c8d75de9-08fe-4c66-9d34-357d8b7b4cd1/Omost/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/media/dev/c8d75de9-08fe-4c66-9d34-357d8b7b4cd1/Omost/lib_omost/pipeline.py", line 354, in encode_cropped_prompt_77tokens
    pooled_prompt_embeds = prompt_embeds.pooler_output
AttributeError: 'CLIPTextModelOutput' object has no attribute 'pooler_output'
```

altoiddealer commented 7 hours ago

> I was able to load local SDXL / Pony models. For those who gets the "ValueError: Invalid 'pretrained_model_name_or_path' provided. Please set it to a valid URL." error, check the sdxl_name. It should not contain the file extension (.safetensors etc). BUT... With this patch I'm getting another compatibility error. Default model is working OK. SDXL models in safetensors format doesn't (at least two used by me).
>
> ...
>
> ```
>     pooled_prompt_embeds = prompt_embeds.pooler_output
> AttributeError: 'CLIPTextModelOutput' object has no attribute 'pooler_output'
> ```

Reporting the same error here.

In `pipeline.py`, in `encode_cropped_prompt_77tokens()` I added an attribute check...

```python
# Check for pooler_output attribute
if hasattr(prompt_embeds, 'pooler_output'):
    pooled_prompt_embeds = prompt_embeds.pooler_output
else:
    print("pooler_output attribute not found in prompt_embeds")
```
I also printed the structure of the prompt embeds.

With the two local models that I've tried (epicrealismXL_v7FinalDestination and leosamsHelloworldXL_helloworldXL70), I can see that many of the prompt embeds being iterated over do not have the pooler_output attribute.

This seems to be the cause of the subsequent error that occurs just before it actually returns the image:

```
  File "C:\Users\Office\miniconda3\lib\site-packages\torch\nn\modules\linear.py", line 116, in forward
    return F.linear(input, self.weight, self.bias)
RuntimeError: mat1 and mat2 shapes cannot be multiplied (1x2304 and 2816x1280)
```
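The numbers in that error are suggestive. SDXL's UNet concatenates a 1280-dim pooled text embedding (from the second text encoder) with 6 × 256 = 1536 features for the added time ids, giving the 2816 input features the linear layer expects; 2304 = 768 + 1536 would instead fit a 768-dim pooled vector from the first (ViT-L) text encoder, which is consistent with the missing `pooler_output` above. This is an inference from the shapes, not a confirmed diagnosis. A minimal illustration of the matmul rule itself (dimensions only; numpy, not the actual model):

```python
import numpy as np

# F.linear(input, weight) computes input @ weight.T, so the inner
# dimensions must agree: (1, 2816) @ (2816, 1280) works, (1, 2304) does not.
w = np.zeros((1280, 2816))   # linear weight: (out_features, in_features)
x_ok = np.zeros((1, 2816))   # pooled(1280) + time ids(1536)
x_bad = np.zeros((1, 2304))  # pooled(768) + time ids(1536): wrong pooled size

y = x_ok @ w.T               # -> shape (1, 1280)
assert y.shape == (1, 1280)

try:
    x_bad @ w.T              # inner dims 2304 vs 2816 do not match
except ValueError:
    print("shape mismatch, as in the RuntimeError above")
```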