openvinotoolkit / stable-diffusion-webui

Stable Diffusion web UI

[Bug]: Lora not working #24

Closed: KiwiState closed this issue 10 months ago

KiwiState commented 10 months ago

Is there an existing issue for this?

What happened?

When using a LoRA for base SD 1.5 applied to this model, the program simply crashes.

Steps to reproduce the problem

Use the model with the OpenVINO script on GPU, then apply any LoRA.
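
For reference, a minimal standalone sketch of roughly this setup, assuming diffusers plus OpenVINO's torch.compile backend rather than this fork's actual custom script; the checkpoint path is taken from the log below, while the LoRA directory, LoRA file name, and the GPU device option are placeholders:

```python
# Hedged sketch only; not the repo's actual OpenVINO script.
import torch
import openvino.torch  # noqa: F401  # registers the "openvino" torch.compile backend
from diffusers import StableDiffusionPipeline

# SD 1.5-based checkpoint from this report; --precision full / --no-half => fp32.
pipe = StableDiffusionPipeline.from_single_file(
    r"G:\AI\stable-diffusion-webui\models\Stable-diffusion\abyssorangemix2_Hard.safetensors",
    torch_dtype=torch.float32,
)

# Any SD 1.5 LoRA; directory and file name here are placeholders.
pipe.load_lora_weights(
    r"G:\AI\stable-diffusion-webui\models\Lora",
    weight_name="example_lora.safetensors",
)

# Compile the UNet through torch.compile; the "device" option follows
# OpenVINO's torch.compile documentation.
pipe.unet = torch.compile(pipe.unet, backend="openvino", options={"device": "GPU"})

# The first call triggers compilation, which can take several minutes.
image = pipe("test prompt", num_inference_steps=20).images[0]
image.save("lora_test.png")
```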

What should have happened?

It should have generated the images with the LoRA applied.

Version or Commit where the problem happens

1.5.1

What Python version are you running on ?

Python 3.10.x

What platforms do you use to access the UI ?

Windows

What device are you running WebUI on?

Other GPUs

Cross attention optimization

Automatic

What browsers do you use to access the UI ?

Mozilla Firefox

Command Line Arguments

set COMMANDLINE_ARGS=--skip-torch-cuda-test --precision full --no-half

List of extensions

No

Console logs

venv "G:\AI\stable-diffusion-webui\venv\Scripts\Python.exe"
fatal: No names found, cannot describe anything.
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: 1.5.1
Commit hash: 434282272d43591e17f157954efe5869c7004c05
Launching Web UI with arguments: --skip-torch-cuda-test --precision full --no-half
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
Loading weights [e714ee20aa] from G:\AI\stable-diffusion-webui\models\Stable-diffusion\abyssorangemix2_Hard.safetensors
loading settings: JSONDecodeError
Traceback (most recent call last):
  File "G:\AI\stable-diffusion-webui\modules\ui_loadsave.py", line 26, in __init__
    self.ui_settings = self.read_from_file()
  File "G:\AI\stable-diffusion-webui\modules\ui_loadsave.py", line 117, in read_from_file
    return json.load(file)
  File "C:\Users\Vicente Gandolfo\AppData\Local\Programs\Python\Python310\lib\json\__init__.py", line 293, in load
    return loads(fp.read(),
  File "C:\Users\Vicente Gandolfo\AppData\Local\Programs\Python\Python310\lib\json\__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "C:\Users\Vicente Gandolfo\AppData\Local\Programs\Python\Python310\lib\json\decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "C:\Users\Vicente Gandolfo\AppData\Local\Programs\Python\Python310\lib\json\decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 139 column 74 (char 5979)

Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 14.4s (launcher: 0.7s, import torch: 5.9s, import gradio: 1.4s, setup paths: 1.3s, other imports: 1.3s, load scripts: 2.2s, create ui: 1.0s, gradio launch: 0.5s).
Creating model from config: G:\AI\stable-diffusion-webui\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Applying attention optimization: InvokeAI... done.
Model loaded in 12.5s (load weights from disk: 1.8s, create model: 0.6s, apply weights to model: 9.6s, calculate empty prompt: 0.4s).
Loading weights [e714ee20aa] from G:\AI\stable-diffusion-webui\models\Stable-diffusion\abyssorangemix2_Hard.safetensors
OpenVINO Script:  created model from config : G:\AI\stable-diffusion-webui\configs\v1-inference.yaml
In this conversion only the non-EMA weights are extracted. If you want to instead extract the EMA weights (usually better for inference), please make sure to add the `--extract_ema` flag.
You have disabled the safety checker for <class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline'> by passing `safety_checker=None`. Ensure that you abide to the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend to keep the safety filter enabled in all public facing circumstances, disabling it only for use-cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
  0%|                                                                                           | 0/20 [00:00<?, ?it/s]

Then it never responds again, or it crashes.
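
As a side note, the `loading settings: JSONDecodeError` in the log above comes from modules/ui_loadsave.py and points at a malformed saved-settings file (by default ui-config.json in the webui folder); it is separate from the LoRA hang. A small standalone check, assuming that default file name and the install path from the log:

```python
# Hedged helper, not part of the webui: report where the JSON parse fails.
import json

path = r"G:\AI\stable-diffusion-webui\ui-config.json"  # assumed default location
try:
    with open(path, encoding="utf-8") as f:
        json.load(f)
    print("settings file parses cleanly")
except json.JSONDecodeError as err:
    print(f"malformed JSON at line {err.lineno}, column {err.colno}: {err.msg}")
```

Renaming the broken file and restarting the UI should regenerate it with defaults.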

Additional information

No response

xuexue49 commented 10 months ago

Maybe it's just too slow to compile models containing a LoRA. After I left it in the background for a while, the model suddenly worked; it took 6 minutes to compile. (Screenshots of this run and a second test image were attached.)
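
That matches how torch.compile works: compilation happens lazily on the first call, so the first generation after adding a LoRA pays the full compile cost and can look like a hang. A tiny illustration, unrelated to the webui itself; the Linear model is just a stand-in:

```python
# Illustration only: the first call through a torch.compile'd module triggers
# compilation; later calls reuse the compiled graph and are much faster.
import time
import torch
import openvino.torch  # noqa: F401  # registers the "openvino" backend

model = torch.nn.Linear(512, 512)              # stand-in for the SD UNet
compiled = torch.compile(model, backend="openvino")

x = torch.randn(1, 512)
with torch.no_grad():
    for i in range(3):
        start = time.perf_counter()
        compiled(x)
        print(f"call {i}: {time.perf_counter() - start:.3f}s")
# For a full SD 1.5 UNet with a fused LoRA the first call can take minutes,
# which is why the progress bar sits at 0% for so long.
```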

devangaggarwal commented 10 months ago

We are currently working on enabling LoRA support on GPU and will update this issue once it is available.

qiacheng commented 10 months ago

LoRA support is now merged into the main branch: https://github.com/openvinotoolkit/stable-diffusion-webui/pull/36