0xbitches / sd-webui-lcm

Latent Consistency Model for AUTOMATIC1111 Stable Diffusion WebUI
MIT License

LCM not in Auto1111 tabs #8

Open JelloWizard opened 9 months ago

JelloWizard commented 9 months ago

I installed the extension and restarted the UI, and I've tried everything I can think of, but there's no LCM tab.

DreamLoveBetty commented 9 months ago

I ran into the same problem: the installation looked normal, but the "LCM" tab never appeared. I went to the extension directory and ran "python install.py"; looking inside install.py, it only needs to install the "diffusers" module, so I ran "pip install diffusers" manually and it installed without errors. After restarting SD-WebUI, I still don't see "LCM".

cyco-creates commented 9 months ago

Same here! Can't see LCM anywhere.

nodegraphics commented 9 months ago

I have the same problem.

0xbitches commented 9 months ago

@JelloWizard @DreamLoveBetty @cyco-creates @gnlkrmz Please provide the console error log. This is not enough information for me to debug.

nodegraphics commented 9 months ago

@0xbitches Very true. Including it below:

```
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: 1.6.0
Commit hash:
Launching Web UI with arguments: --medvram --medvram-sdxl --xformers --api --skip-python-version-check
Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
[-] ADetailer initialized. version: 23.10.1, num models: 9
[AddNet] Updating model hashes...
100%|##########| 66/66 [00:00<00:00, 7333.28it/s]
[AddNet] Updating model hashes...
100%|##########| 66/66 [00:00<00:00, 5999.13it/s]
2023-10-23 22:20:30,380 - ControlNet - INFO - ControlNet v1.1.411
ControlNet preprocessor location: A:\Program Files\StabilityMatrix\Data\Packages\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\downloads
2023-10-23 22:20:30,496 - ControlNet - INFO - ControlNet v1.1.411
Loading pipeline components...: 33%|###3 | 2/6 [00:00<00:00, 20.63it/s]
* Error loading script: main.py
Traceback (most recent call last):
  File "A:\Program Files\StabilityMatrix\Data\Packages\stable-diffusion-webui\modules\scripts.py", line 382, in load_scripts
    script_module = script_loading.load_module(scriptfile.path)
  File "A:\Program Files\StabilityMatrix\Data\Packages\stable-diffusion-webui\modules\script_loading.py", line 10, in load_module
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "A:\Program Files\StabilityMatrix\Data\Packages\stable-diffusion-webui\extensions\sd-webui-lcm\scripts\main.py", line 71, in <module>
    pipe = LatentConsistencyModelPipeline.from_pretrained(
  File "A:\Program Files\StabilityMatrix\Data\Packages\stable-diffusion-webui\venv\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 1105, in from_pretrained
    loaded_sub_model = load_sub_model(
  File "A:\Program Files\StabilityMatrix\Data\Packages\stable-diffusion-webui\venv\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 472, in load_sub_model
    loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwargs)
  File "A:\Program Files\StabilityMatrix\Data\Packages\stable-diffusion-webui\venv\lib\site-packages\transformers\tokenization_utils_base.py", line 1825, in from_pretrained
    return cls._from_pretrained(
  File "A:\Program Files\StabilityMatrix\Data\Packages\stable-diffusion-webui\venv\lib\site-packages\transformers\tokenization_utils_base.py", line 2044, in _from_pretrained
    raise ValueError(
ValueError: Non-consecutive added token '<|startoftext|>' found. Should have index 49408 but has index 49406 in saved vocabulary.

Loading weights [1a189f0be6] from A:\Program Files\StabilityMatrix\Data\Packages\stable-diffusion-webui\models\Stable-diffusion\v1-5-pruned.safetensors
A:\Program Files\StabilityMatrix\Data\Packages\stable-diffusion-webui\extensions--sd-webui-ar-plus\scripts\sd-webui-ar.py:448: GradioDeprecationWarning: The style method is deprecated. Please set these arguments in the constructor instead.
  arc_calc_height = gr.Button(value="Calculate Height").style(
A:\Program Files\StabilityMatrix\Data\Packages\stable-diffusion-webui\extensions--sd-webui-ar-plus\scripts\sd-webui-ar.py:448: GradioDeprecationWarning: Use scale in place of full_width in the constructor. scale=1 will make the button expand, whereas 0 will not.
  arc_calc_height = gr.Button(value="Calculate Height").style(
A:\Program Files\StabilityMatrix\Data\Packages\stable-diffusion-webui\extensions--sd-webui-ar-plus\scripts\sd-webui-ar.py:456: GradioDeprecationWarning: The style method is deprecated. Please set these arguments in the constructor instead.
  arc_calc_width = gr.Button(value="Calculate Width").style(
A:\Program Files\StabilityMatrix\Data\Packages\stable-diffusion-webui\extensions--sd-webui-ar-plus\scripts\sd-webui-ar.py:456: GradioDeprecationWarning: Use scale in place of full_width in the constructor. scale=1 will make the button expand, whereas 0 will not.
  arc_calc_width = gr.Button(value="Calculate Width").style(
Creating model from config: A:\Program Files\StabilityMatrix\Data\Packages\stable-diffusion-webui\configs\v1-inference.yaml
A:\Program Files\StabilityMatrix\Data\Packages\stable-diffusion-webui\extensions\sd-fast-pnginfo\scripts\fast-pnginfo.py:40: GradioDeprecationWarning: The style method is deprecated. Please set these arguments in the constructor instead.
  with gr.Row().style(equal_height=False):
A:\Program Files\StabilityMatrix\Data\Packages\stable-diffusion-webui\extensions\sd-webui-additional-networks\scripts\metadata_editor.py:399: GradioDeprecationWarning: The style method is deprecated. Please set these arguments in the constructor instead.
  with gr.Row().style(equal_height=False):
A:\Program Files\StabilityMatrix\Data\Packages\stable-diffusion-webui\extensions\sd-webui-additional-networks\scripts\metadata_editor.py:521: GradioDeprecationWarning: The style method is deprecated. Please set these arguments in the constructor instead.
  cover_image = gr.Image(
Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
[Lobe]: Initializing Lobe
Startup time: 14.5s (prepare environment: 5.2s, import torch: 2.7s, import gradio: 0.7s, setup paths: 0.6s, initialize shared: 0.2s, other imports: 0.5s, setup codeformer: 0.1s, load scripts: 3.1s, create ui: 0.9s, gradio launch: 0.3s).
Applying attention optimization: xformers... done.
No Image data blocks found.
No Image data blocks found.
Model loaded in 7.1s (load weights from disk: 0.4s, create model: 0.8s, apply weights to model: 2.5s, apply half(): 0.5s, load textual inversion embeddings: 0.7s, calculate empty prompt: 2.2s).
No Image data blocks found.
No Image data blocks found.
No Image data blocks found.
No Image data blocks found.
No Image data blocks found.
No Image data blocks found.
No Image data blocks found.
No Image data blocks found.
```

AugmentedRealityCat commented 9 months ago

ValueError: Non-consecutive added token '<|startoftext|>' found. Should have index 49408 but has index 49406 in saved vocabulary.

See this: https://github.com/0xbitches/sd-webui-lcm#known-issues

ValueError: Non-consecutive added token '<|startoftext|>' found. Should have index 49408 but has index 49406 in saved vocabulary.

To resolve this, locate your huggingface hub cache directory.

It will be something like ~/.cache/huggingface/hub/path_to_lcm_dreamshaper_v7/tokenizer/. On Windows, it will roughly be C:\Users\YourUserName\.cache\huggingface\hub\models--SimianLuo--LCM_Dreamshaper_v7\snapshots\c7f9b672c65a664af57d1de926819fd79cb26eb8\tokenizer.

Find the file added_tokens.json and change the contents to:

{
"<|endoftext|>": 49409,
"<|startoftext|>": 49408
}

or simply remove it.
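
If you'd rather not edit the file by hand, a minimal Python sketch like the following applies the same workaround; it assumes the default Hugging Face cache location and the snapshot layout shown above, so adjust the paths if your cache lives elsewhere:

```python
import json
from pathlib import Path

# Default Hugging Face hub cache; adjust if you set HF_HOME / HUGGINGFACE_HUB_CACHE.
cache_root = Path.home() / ".cache" / "huggingface" / "hub"
model_dir = cache_root / "models--SimianLuo--LCM_Dreamshaper_v7"

# There is normally a single snapshot folder; patch every added_tokens.json found.
for tokens_file in model_dir.glob("snapshots/*/tokenizer/added_tokens.json"):
    # Either rewrite the indices as described above...
    tokens_file.write_text(json.dumps(
        {"<|endoftext|>": 49409, "<|startoftext|>": 49408}, indent=2))
    # ...or simply delete the file instead:
    # tokens_file.unlink()
    print("patched", tokens_file)
```

Either branch has the same effect as the manual edit described above.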

nodegraphics commented 9 months ago

Thank you... worked 🙏

DreamLoveBetty commented 9 months ago

ValueError: Non-consecutive added token '<|startoftext|>' found. Should have index 49408 but has index 49406 in saved vocabulary.

See this: https://github.com/0xbitches/sd-webui-lcm#known-issues

ValueError: Non-consecutive added token '<|startoftext|>' found. Should have index 49408 but has index 49406 in saved vocabulary. To resolve this, locate your huggingface hub cache directory. It will be something like ~/.cache/huggingface/hub/path_to_lcm_dreamshaper_v7/tokenizer/. On Windows, it will roughly be C:\Users\YourUserName\.cache\huggingface\hub\models--SimianLuo--LCM_Dreamshaper_v7\snapshots\c7f9b672c65a664af57d1de926819fd79cb26eb8\tokenizer. Find the file added_tokens.json and change the contents to:

{
  "<|endoftext|>": 49409,
  "<|startoftext|>": 49408
}

or simply remove it.

Mine also showed this error, but the file was not under my user path, and I could not get it to work either by manually creating the file and copying in those contents or by deleting it.

DreamLoveBetty commented 9 months ago

The following is the error message in the background:

```
The config attributes {'force_upcast': True} were passed to AutoencoderKL, but are not expected and will be ignored. Please verify your config.json configuration file.
* Error loading script: main.py
Traceback (most recent call last):
  File "F:\sd-webui-aki-v4.4\modules\scripts.py", line 382, in load_scripts
    script_module = script_loading.load_module(scriptfile.path)
  File "F:\sd-webui-aki-v4.4\modules\script_loading.py", line 10, in load_module
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "F:\sd-webui-aki-v4.4\extensions\sd-webui-lcm\scripts\main.py", line 71, in <module>
    pipe = LatentConsistencyModelPipeline.from_pretrained(
  File "F:\sd-webui-aki-v4.4\python\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 1037, in from_pretrained
    loaded_sub_model = load_sub_model(
  File "F:\sd-webui-aki-v4.4\python\lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 450, in load_sub_model
    loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwargs)
  File "F:\sd-webui-aki-v4.4\python\lib\site-packages\transformers\tokenization_utils_base.py", line 1825, in from_pretrained
    return cls._from_pretrained(
  File "F:\sd-webui-aki-v4.4\python\lib\site-packages\transformers\tokenization_utils_base.py", line 2044, in _from_pretrained
    raise ValueError(
ValueError: Non-consecutive added token '<|startoftext|>' found. Should have index 49408 but has index 49406 in saved vocabulary.
```

JelloWizard commented 9 months ago

ValueError: Non-consecutive added token '<|startoftext|>' found. Should have index 49408 but has index 49406 in saved vocabulary.

See this: https://github.com/0xbitches/sd-webui-lcm#known-issues

ValueError: Non-consecutive added token '<|startoftext|>' found. Should have index 49408 but has index 49406 in saved vocabulary. To resolve this, locate your huggingface hub cache directory. It will be something like ~/.cache/huggingface/hub/path_to_lcm_dreamshaper_v7/tokenizer/. On Windows, it will roughly be C:\Users\YourUserName\.cache\huggingface\hub\models--SimianLuo--LCM_Dreamshaper_v7\snapshots\c7f9b672c65a664af57d1de926819fd79cb26eb8\tokenizer. Find the file added_tokens.json and change the contents to:

{
  "<|endoftext|>": 49409,
  "<|startoftext|>": 49408
}

or simply remove it.

i tried both and neither worked, there is still no LCM in my tabs

DreamLoveBetty commented 9 months ago

The problem has been solved with the help of friends. Most of these problems happen to users of packaged SD-WebUI distributions; in that case, the file to modify is at: X:\XXXXXXX\.cache\huggingface\hub\models--SimianLuo--LCM_Dreamshaper_v7\snapshots\c7f9b672c65a664af57d1de926819fd79cb26eb8\tokenizer

where X:\XXXXXXX represents the drive letter and directory where you placed SD-WebUI, not the user directory on the C drive. Hope this helps. Good luck.
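
If you are not sure which of the two locations applies to your install, a small sketch like this can search the usual candidates; the webui_root path below is hypothetical, so point it at your own install before running it:

```python
import os
from pathlib import Path

# Candidate cache roots: packaged SD-WebUI builds sometimes keep the Hugging Face
# cache under the webui folder itself rather than under the user profile, and the
# HF_HOME / HUGGINGFACE_HUB_CACHE environment variables override both.
webui_root = Path(r"X:\path\to\sd-webui")  # hypothetical; set to your install

candidates = [
    os.environ.get("HUGGINGFACE_HUB_CACHE"),
    os.environ.get("HF_HOME"),
    Path.home() / ".cache" / "huggingface",
    webui_root / ".cache" / "huggingface",
]

pattern = "**/models--SimianLuo--LCM_Dreamshaper_v7/snapshots/*/tokenizer/added_tokens.json"
for root in candidates:
    if not root:
        continue
    for hit in Path(root).glob(pattern):
        print("found:", hit)
```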

misakitchi commented 9 months ago

The following is the error message in the background: ... ValueError: Non-consecutive added token '<|startoftext|>' found. Should have index 49408 but has index 49406 in saved vocabulary.

I don't understand what you mean... My Stable Diffusion is in "C:\AI\sd-webui" and I have this: "C:\Users\Paulo\.cache\huggingface\hub\models--SimianLuo--LCM_Dreamshaper_v7". Must I move the ".cache" folder into the root directory of sd-webui, "C:\AI\sd-webui", so that I end up with "C:\AI\sd-webui\.cache\huggingface\hub\models--SimianLuo--LCM_Dreamshaper_v7"?

cyco-creates commented 9 months ago

I tried the cache solution but it doesn't work. There are no error messages and the install goes smoothly; there is just no LCM tab in my Automatic1111 interface. I can see it in the installed extensions, but there is no LCM tab anywhere in my UI. None.

DreamLoveBetty commented 9 months ago

I tried the cache solution but it doesn't work. There are no error messages and the install goes smoothly; there is just no LCM tab in my Automatic1111 interface. I can see it in the installed extensions, but there is no LCM tab anywhere in my UI. None.

Hi, I got your email, but I am not used to replying by email, sorry.

You can refer to my second reply above. After installing the "LCM" extension normally into \extensions\sd-webui-lcm, run "python install.py" in that directory; it installs the modules the extension depends on. If that installs cleanly, you can then modify the file in the .cache cache directory. However, there is a high probability that you will run into version conflicts or a missing "launch" module. If you cannot install the modules that way, you can open "install.py", see that all it really does is install diffusers, run "pip install diffusers" directly, and then modify the file in the ".cache" cache directory. That was my installation process; I hope it helps.
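
For reference, an A1111 extension's install.py usually follows a pattern like the sketch below (illustrative only, not the exact file shipped with sd-webui-lcm), which is why running it outside the webui fails on import launch and why a plain "pip install diffusers" is the fallback:

```python
# Rough sketch of the usual AUTOMATIC1111 extension install.py pattern.
try:
    import launch  # provided by the webui when it runs an extension's install.py

    if not launch.is_installed("diffusers"):
        launch.run_pip("install diffusers", "diffusers for sd-webui-lcm")
except ImportError:
    # Running the script by hand, outside the webui: "launch" does not exist,
    # so fall back to plain pip, equivalent to "pip install diffusers".
    import subprocess
    import sys

    subprocess.check_call([sys.executable, "-m", "pip", "install", "diffusers"])
```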

greenhouse95 commented 9 months ago

I've done every single thing recommended, and it still won't show up. According to the new readme, it could be another extension force-installing an old diffusers. Yet I've removed every single extension and reinstalled the newest diffusers, and it still doesn't show. The console gives this error:

```
Error loading script: main.py
Traceback (most recent call last):
  File "C:\AI Stable Diffusion\stable-diffusion-webui\modules\scripts.py", line 382, in load_scripts
    script_module = script_loading.load_module(scriptfile.path)
  File "C:\AI Stable Diffusion\stable-diffusion-webui\modules\script_loading.py", line 10, in load_module
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "C:\AI Stable Diffusion\stable-diffusion-webui\extensions\sd-webui-lcm\scripts\main.py", line 7, in <module>
    from lcm.lcm_i2i_pipeline import LatentConsistencyModelImg2ImgPipeline
  File "C:\AI Stable Diffusion\stable-diffusion-webui\extensions\sd-webui-lcm\lcm\lcm_i2i_pipeline.py", line 28, in <module>
    from diffusers.image_processor import VaeImageProcessor, PipelineImageInput
ImportError: cannot import name 'PipelineImageInput' from 'diffusers.image_processor' (C:\AI Stable Diffusion\stable-diffusion-webui\venv\lib\site-packages\diffusers\image_processor.py)
```
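
That ImportError is what an outdated diffusers looks like from the extension's side: PipelineImageInput only exists in newer diffusers releases. A quick way to confirm which version the webui's venv actually has is a check like this (a sketch; run it with the venv's own interpreter, e.g. venv\Scripts\python.exe):

```python
# check_diffusers.py - run with the webui venv's Python interpreter.
import diffusers

print("diffusers version:", diffusers.__version__)

try:
    # This is the import that sd-webui-lcm's lcm_i2i_pipeline.py performs.
    from diffusers.image_processor import PipelineImageInput  # noqa: F401
    print("PipelineImageInput is importable - diffusers is new enough")
except ImportError:
    print("PipelineImageInput is missing - upgrade diffusers inside this venv")
```
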
greenhouse95 commented 9 months ago

Ok, I finally got it working on my end. For me, running the "Activate.ps1" file would instantly close the window, so I assumed it had run and that upgrading diffusers afterwards was enough. Now that I stopped it from closing, I was able to properly upgrade diffusers, and the LCM tab shows up.

For those that had the same problem:

  • Go to the "...\stable-diffusion-webui\venv\Scripts" folder. Shift+Right click on the background and open Powershell.
  • Type this: "PowerShell -NoExit .\Activate.ps1" without the "".
  • Then you can upgrade the diffusers properly with: "pip3 install --upgrade diffusers"
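
If Activate.ps1 keeps closing on you, the same upgrade can also be driven through the venv's interpreter directly, without activating anything; a minimal sketch, assuming the default venv location under the webui folder (the path below is hypothetical, adjust it to your install):

```python
# upgrade_diffusers.py - drive the webui venv's own pip without activating it.
import subprocess
from pathlib import Path

# Hypothetical install location; point this at wherever your webui lives.
webui = Path(r"C:\path\to\stable-diffusion-webui")
venv_python = webui / "venv" / "Scripts" / "python.exe"

# Equivalent to activating the venv and running "pip3 install --upgrade diffusers".
subprocess.check_call([str(venv_python), "-m", "pip", "install", "--upgrade", "diffusers"])
```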

hyh813 commented 8 months ago

Ok, I finally got it working on my end. For me, running the "Activate.ps1" file would instantly close the window, so I assumed it had run and that upgrading diffusers afterwards was enough. Now that I stopped it from closing, I was able to properly upgrade diffusers, and the LCM tab shows up.

For those that had the same problem:

  • Go to the "...\stable-diffusion-webui\venv\Scripts" folder. Shift+Right click on the background and open Powershell.
  • Type this: "PowerShell -NoExit .\Activate.ps1" without the "".
  • Then you can upgrade the diffusers properly with: "pip3 install --upgrade diffusers"

Thanks bro, it works

thatjimupnorth commented 8 months ago

  • pip3 install --upgrade diffusers

This works exactly right. The "official" fix does nothing. This needs to be brought to the developer's attention.

zenphyl commented 8 months ago

Ok, I finally got it working on my end. For me, running the "Activate.ps1" file would instantly close the window, so I assumed it had run and that upgrading diffusers afterwards was enough. Now that I stopped it from closing, I was able to properly upgrade diffusers, and the LCM tab shows up.

For those that had the same problem:

  • Go to the "...\stable-diffusion-webui\venv\Scripts" folder. Shift+Right click on the background and open Powershell.
  • Type this: "PowerShell -NoExit .\Activate.ps1" without the "".
  • Then you can upgrade the diffusers properly with: "pip3 install --upgrade diffusers"

Worked amazingly for me, thank you very much. This should be in the main instructions; just add that "pip3 install --upgrade diffusers" should be typed in the same place as "PowerShell -NoExit .\Activate.ps1", to make it totally noob-proof. It even optimized my entire Auto1111: I had a conflict with lama-cleaner 1.2.5 that kept diffusers and transformers from updating, and now I can render higher-resolution SDXL images that failed immediately yesterday.