Closed AngelTs closed 10 months ago
The same error occurs in the standalone sd-webui-lcm on Google Colab as well (txt2img, img2img, vid2vid).
This is happening to me too. Img2Img works but that's all.
Same. Having the same issue when I do vid2vid or img2img.
same...
Loading pipeline components...: 100%|█████████████████████████████████████████████████| 6/6 [00:00<00:00, 11.03steps/s]
Traceback (most recent call last):
  File "E:\ai\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "E:\ai\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "E:\ai\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "E:\ai\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "E:\ai\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "E:\ai\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "E:\ai\stable-diffusion-webui\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "E:\ai\stable-diffusion-webui\extensions\sd-webui-lcm\scripts\main.py", line 291, in generate_v2v
    result = pipe(
  File "E:\ai\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\ai\stable-diffusion-webui\extensions\sd-webui-lcm\lcm\lcm_i2i_pipeline.py", line 305, in __call__
    self.scheduler.set_timesteps(strength, num_inference_steps, original_inference_steps)
  File "E:\ai\stable-diffusion-webui\venv\lib\site-packages\diffusers\schedulers\scheduling_lcm.py", line 382, in set_timesteps
    timesteps = lcm_origin_timesteps[::-skipping_step][:num_inference_steps]
TypeError: slice indices must be integers or None or have an index method
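For context, the final frame of the traceback fails because a Python slice step must be an integer (or have an `__index__` method). A minimal sketch of the failure mode, assuming `skipping_step` ends up as a float (the values below are illustrative, not taken from the extension's code):

```python
# Illustration of the TypeError above: slicing with a float step fails.
lcm_origin_timesteps = list(range(1000))

skipping_step = 1000 / 50  # true division yields 20.0 (a float), not 20
num_inference_steps = 4

try:
    lcm_origin_timesteps[::-skipping_step]
except TypeError as e:
    print(e)  # slice indices must be integers or None or have an index method

# Casting the step to int restores valid slicing:
timesteps = lcm_origin_timesteps[::-int(skipping_step)][:num_inference_steps]
print(timesteps)  # [999, 979, 959, 939]
```

The call site `set_timesteps(strength, num_inference_steps, original_inference_steps)` passes `strength` (a float) as the first positional argument, so a mismatch between the extension's call and the installed diffusers signature is one plausible way a float reaches that slice.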
Same here
thanks for the info ❤ please try again (colab)
https://github.com/camenduru/latent-consistency-model-colab/issues/2
thanks for the info ❤ please try again (colab)
Can you say what you did? Did you change line 382 in "E:\ai\stable-diffusion-webui\venv\lib\site-packages\diffusers\schedulers\scheduling_lcm.py"? Thanks
diffusers 0.23.0, transformers 4.30.2, accelerate 0.21.0 peft 0.6.1, xformers 0.0.22.post7, torch 2.1.0+cu121, torchvision 0.16.0+cu121, torchaudio 2.1.0+cu121
Startup time: 106.8s (prepare environment: 84.6s, import torch: 9.8s, import gradio: 2.1s, setup paths: 2.2s, initialize shared: 0.5s, other imports: 1.4s, setup codeformer: 0.3s, load scripts: 3.4s, create ui: 0.9s, gradio launch: 1.2s, add APIs: 0.3s).
Applying attention optimization: xformers... done.
Model loaded in 6.9s (load weights from disk: 1.5s, create model: 3.4s, apply weights to model: 0.4s, apply half(): 0.3s, move model to device: 0.1s, load textual inversion embeddings: 0.5s, calculate empty prompt: 0.6s).
C:\AUTOMATIC1111\venv\lib\site-packages\diffusers\pipelines\pipeline_utils.py:749: FutureWarning: `torch_dtype` is deprecated and will be removed in version 0.25.0.
  deprecate("torch_dtype", "0.25.0", "")
C:\AUTOMATIC1111\venv\lib\site-packages\diffusers\pipelines\pipeline_utils.py:752: FutureWarning: `torch_device` is deprecated and will be removed in version 0.25.0.
  deprecate("torch_device", "0.25.0", "")
Traceback (most recent call last):
  File "C:\AUTOMATIC1111\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "C:\AUTOMATIC1111\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "C:\AUTOMATIC1111\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\AUTOMATIC1111\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\AUTOMATIC1111\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "C:\AUTOMATIC1111\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "C:\AUTOMATIC1111\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "C:\AUTOMATIC1111\extensions\sd-webui-lcm\scripts\main.py", line 172, in generate_i2i
    result = pipe(
  File "C:\AUTOMATIC1111\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\AUTOMATIC1111\extensions\sd-webui-lcm\lcm\lcm_i2i_pipeline.py", line 305, in __call__
    self.scheduler.set_timesteps(strength, num_inference_steps, original_inference_steps)
  File "C:\AUTOMATIC1111\venv\lib\site-packages\diffusers\schedulers\scheduling_lcm.py", line 382, in set_timesteps
    timesteps = lcm_origin_timesteps[::-skipping_step][:num_inference_steps]
TypeError: slice indices must be integers or None or have an index method