0xbitches / sd-webui-lcm

Latent Consistency Model for AUTOMATIC1111 Stable Diffusion WebUI
MIT License
614 stars 43 forks

TypeError: slice indices must be integers or None or have an __index__ method #30

Closed AngelTs closed 10 months ago

AngelTs commented 11 months ago

diffusers 0.23.0, transformers 4.30.2, accelerate 0.21.0 peft 0.6.1, xformers 0.0.22.post7, torch 2.1.0+cu121, torchvision 0.16.0+cu121, torchaudio 2.1.0+cu121

Startup time: 106.8s (prepare environment: 84.6s, import torch: 9.8s, import gradio: 2.1s, setup paths: 2.2s, initialize shared: 0.5s, other imports: 1.4s, setup codeformer: 0.3s, load scripts: 3.4s, create ui: 0.9s, gradio launch: 1.2s, add APIs: 0.3s).
Applying attention optimization: xformers... done.
Model loaded in 6.9s (load weights from disk: 1.5s, create model: 3.4s, apply weights to model: 0.4s, apply half(): 0.3s, move model to device: 0.1s, load textual inversion embeddings: 0.5s, calculate empty prompt: 0.6s).
C:\AUTOMATIC1111\venv\lib\site-packages\diffusers\pipelines\pipeline_utils.py:749: FutureWarning: torch_dtype is deprecated and will be removed in version 0.25.0. deprecate("torch_dtype", "0.25.0", "")
C:\AUTOMATIC1111\venv\lib\site-packages\diffusers\pipelines\pipeline_utils.py:752: FutureWarning: torch_device is deprecated and will be removed in version 0.25.0. deprecate("torch_device", "0.25.0", "")
Traceback (most recent call last):
  File "C:\AUTOMATIC1111\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "C:\AUTOMATIC1111\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "C:\AUTOMATIC1111\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\AUTOMATIC1111\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\AUTOMATIC1111\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "C:\AUTOMATIC1111\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "C:\AUTOMATIC1111\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "C:\AUTOMATIC1111\extensions\sd-webui-lcm\scripts\main.py", line 172, in generate_i2i
    result = pipe(
  File "C:\AUTOMATIC1111\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\AUTOMATIC1111\extensions\sd-webui-lcm\lcm\lcm_i2i_pipeline.py", line 305, in __call__
    self.scheduler.set_timesteps(strength, num_inference_steps, original_inference_steps)
  File "C:\AUTOMATIC1111\venv\lib\site-packages\diffusers\schedulers\scheduling_lcm.py", line 382, in set_timesteps
    timesteps = lcm_origin_timesteps[::-skipping_step][:num_inference_steps]
TypeError: slice indices must be integers or None or have an __index__ method
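The root cause is visible in the last two frames: the pipeline passes the float `strength` as the first positional argument to `set_timesteps`, where the scheduler expects the integer `num_inference_steps`, so the computed `skipping_step` ends up a float and Python rejects it as a slice step. A standalone sketch of that failure mode (illustrative only, not the actual scheduler code; the names mirror the traceback):

```python
# Standalone repro of the failure mode: using a non-integer as a slice
# step raises TypeError, which is what happens when a float `strength`
# is consumed as the integer step count.
lcm_origin_timesteps = list(range(1000))

strength = 0.5  # a float, mistakenly occupying the num_inference_steps slot
skipping_step = len(lcm_origin_timesteps) // strength  # 2000.0 -- a float

try:
    timesteps = lcm_origin_timesteps[::-skipping_step]
except TypeError as e:
    # slice indices must be integers or None or have an __index__ method
    print(e)
```

Note that `//` on a float operand still returns a float in Python, so floor division alone does not save the slice.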

AngelTs commented 11 months ago

The same error also occurs in the standalone sd-webui-lcm Google Colab, for txt2img, img2img, and vid2vid.

Websteria commented 10 months ago

This is happening to me too. Img2Img works but that's all.

zono50 commented 10 months ago

Same here. I get the same issue with vid2vid and img2img.

x-ili-x commented 10 months ago

same...

Loading pipeline components...: 100%|█████████████████████████████████████████████████| 6/6 [00:00<00:00, 11.03steps/s]
Traceback (most recent call last):
  File "E:\ai\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "E:\ai\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "E:\ai\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "E:\ai\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "E:\ai\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "E:\ai\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "E:\ai\stable-diffusion-webui\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "E:\ai\stable-diffusion-webui\extensions\sd-webui-lcm\scripts\main.py", line 291, in generate_v2v
    result = pipe(
  File "E:\ai\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\ai\stable-diffusion-webui\extensions\sd-webui-lcm\lcm\lcm_i2i_pipeline.py", line 305, in __call__
    self.scheduler.set_timesteps(strength, num_inference_steps, original_inference_steps)
  File "E:\ai\stable-diffusion-webui\venv\lib\site-packages\diffusers\schedulers\scheduling_lcm.py", line 382, in set_timesteps
    timesteps = lcm_origin_timesteps[::-skipping_step][:num_inference_steps]
TypeError: slice indices must be integers or None or have an __index__ method

Gonoshift commented 10 months ago

Same here

camenduru commented 10 months ago

thanks for the info ❤ please try again (colab)

https://github.com/camenduru/latent-consistency-model-colab/issues/2

AngelTs commented 10 months ago

thanks for the info ❤ please try again (colab)

camenduru/latent-consistency-model-colab#2

Can you say what you did? Did you change line 382 in "E:\ai\stable-diffusion-webui\venv\lib\site-packages\diffusers\schedulers\scheduling_lcm.py"? Thanks

camenduru commented 10 months ago

https://github.com/0xbitches/sd-webui-lcm/commit/b00dd7203dfbdaf87d24b29ded95f0f6c51388ff
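The linked commit resolves the issue on the extension side. Purely as a hedged illustration (not the commit's actual diff), the kind of guard that avoids the crash is making sure the value used as a slice step is an integer before slicing:

```python
# Illustrative sketch only -- not the literal fix from the commit.
# The crash disappears once the slice step is an int, not a float.
lcm_origin_timesteps = list(range(1000))

strength = 0.5          # denoising strength, a float
num_inference_steps = 4  # the actual integer step count

# Cast to int so float arithmetic with `strength` cannot poison the slice.
skipping_step = int(len(lcm_origin_timesteps) * strength) // num_inference_steps
timesteps = lcm_origin_timesteps[::-skipping_step][:num_inference_steps]
print(timesteps)  # [999, 874, 749, 624]
```

Keeping the float `strength` and the integer `num_inference_steps` in separate, clearly named parameters (or passing them by keyword) prevents the positional mix-up seen in the traceback.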

AngelTs commented 10 months ago

b00dd72

Works perfectly. Thanks, man!

906051999 commented 10 months ago

b00dd72

I applied this fix and successfully ran LCM img2img.