Closed: SoftologyPro closed this 10 months ago
which gpus are you using?
Single 4090
maybe try this link: AUTOMATIC1111/stable-diffusion-webui#8965 (comment)
I want to run this "stand alone" outside Web UI. Just from the command line.
This error seems to have started once LCM support was added to the diffusers repo. Prior to this, when diffusers was a subfolder, it all worked fine.
Also, this is not just me. Another user reported the problem to me and I verified the same error and then raised this issue.
Yeah, hi. As for me (the mentioned user), I have a 4080 plus the integrated graphics on my AMD Ryzen 9 7950X CPU. Running torch.cuda.device_count() returns only 1 though, and the GPU does work with other projects (e.g. Illusion Diffusion).
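For anyone else checking this, a quick sketch of the device-count probe mentioned above; note that integrated GPUs without CUDA support are not counted, so 1 is expected on a 4080 + AMD iGPU box. The helper name is mine, and it degrades gracefully when torch is not installed:

```python
import importlib.util

def cuda_device_count():
    """Number of CUDA devices PyTorch can see, or 0 if torch is
    missing or was built without CUDA support."""
    if importlib.util.find_spec("torch") is None:
        return 0
    import torch
    if not torch.cuda.is_available():
        return 0
    # Only CUDA-capable devices are counted; iGPUs are invisible here.
    return torch.cuda.device_count()

print(cuda_device_count())  # e.g. 1 on a single-CUDA-GPU machine
```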
same issue
I think I've solved this issue, although I still have a question to ask @luosiallen.
Here is my solution below, with my question at the end.
I tried to deploy app.py on Linux, Windows, and macOS; every environment reported this issue.
But I can run inference using your sample code, like this:
from diffusers import DiffusionPipeline
import torch
pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7")
# To save GPU memory, torch.float16 can be used, but it may compromise image quality.
pipe.to(torch_device="cuda", torch_dtype=torch.float32)
prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"
# Can be set to 1~50 steps. LCM supports fast inference, even at <= 4 steps. Recommended: 1~8 steps.
num_inference_steps = 4
images = pipe(prompt=prompt, num_inference_steps=num_inference_steps, guidance_scale=8.0, lcm_origin_steps=50, output_type="pil").images
To save the images I generate, I modified the sample code like this:
from diffusers import DiffusionPipeline
import torch
# Create the DiffusionPipeline instance
pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7")
# Set the pipeline's device and dtype
pipe.to(torch_device="cuda", torch_dtype=torch.float32)
# Input prompt
prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
# Can be set to 1~50 steps. LCM supports fast inference, even at <= 4 steps. Recommended: 1~8 steps.
num_inference_steps = 4
# Generate the images
images = pipe(prompt=prompt, num_inference_steps=num_inference_steps, guidance_scale=8.0, lcm_origin_steps=50, output_type="pil").images
# Save each generated image
for i, img_pil in enumerate(images):
    img_pil.save(f"generated_image_{i+1}.png")
print("Images saved successfully.")
It also works and the images save successfully, so I think my Python env is OK.
I went to look at the Hugging Face demo and tried to find any differences.
I noticed lines 38 and 39, where @luosiallen had commented out line 39:
pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7")
# pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7", custom_pipeline="latent_consistency_txt2img", custom_revision="main")
I think this is the key point, so I tried using
pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7")
It works perfectly..... macOS, Windows, and Linux all work.
I just wonder... is there something wrong with custom_pipeline?
Thanks, I forgot to update the pipeline. The previous custom_pipeline is deprecated.
I am very new to Stable Diffusion, so can someone help me? I have an issue where something like 1024x512 is OK, but if I use hires fix, or a width or height above 1024, I get "TORCH_USE_CUDA_DSA". I don't know how to post the log without leaving a bad format, so I put it in a txt at the end of my comment, sorry for the trouble. I have already read some discussions here and tried some fixes, like reinstalling torch, but nothing worked for me.
Traceback (most recent call last):
File "C:\Users\Denis\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
Error completing request
Arguments: ('task(bs8ydtxhkdvyoag)', 'curvy Tear Grants\nnude, smiling, blushing, detailed eyes, chubby\nwith hairy armpit, large breasts, smelly pussy, smell, sweat, dripping wet
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
self.run()
free, total = self.cuda_mem_get_info()
File "C:\Stable\stable-diffusion-webui\modules\memmon.py", line 34, in cuda_mem_get_info
return torch.cuda.mem_get_info(index)
File "C:\Stable\stable-diffusion-webui\venv\lib\site-packages\torch\cuda\memory.py", line 618, in mem_get_info
return torch.cuda.cudart().cudaMemGetInfo(device)
Traceback (most recent call last):
RuntimeError: CUDA error: the launch timed out and was terminated
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
File "C:\Stable\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
output = await app.get_blocks().process_api(
File "C:\Stable\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
result = await self.call_function(
File "C:\Stable\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
prediction = await anyio.to_thread.run_sync(
File "C:\Stable\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "C:\Stable\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "C:\Stable\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
result = context.run(func, *args)
File "C:\Stable\stable-diffusion-webui\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
response = f(*args, **kwargs)
File "C:\Stable\stable-diffusion-webui\modules\call_queue.py", line 77, in f
devices.torch_gc()
File "C:\Stable\stable-diffusion-webui\modules\devices.py", line 61, in torch_gc
torch.cuda.empty_cache()
File "C:\Stable\stable-diffusion-webui\venv\lib\site-packages\torch\cuda\memory.py", line 133, in empty_cache
torch._C._cuda_emptyCache()
RuntimeError: CUDA error: the launch timed out and was terminated
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
Logcudaerror.txt
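One thing worth trying before anything else: the log itself says the stack trace may point at the wrong call, because CUDA kernel launches are asynchronous. Setting CUDA_LAUNCH_BLOCKING=1 before launching forces synchronous launches, so the traceback lands on the call that actually failed. A minimal sketch (the webui path is an assumption; adjust it to your install):

```python
import os
import subprocess

# Force synchronous CUDA kernel launches so errors surface at the
# call that actually failed, instead of at a later API call.
env = dict(os.environ, CUDA_LAUNCH_BLOCKING="1")

# Hypothetical launch command -- replace with your own install path.
cmd = ["python", r"C:\Stable\stable-diffusion-webui\launch.py"]

# subprocess.run(cmd, env=env)  # uncomment to actually launch
print(env["CUDA_LAUNCH_BLOCKING"])
```

The same effect can be had from a plain `set CUDA_LAUNCH_BLOCKING=1` in the console before starting webui-user.bat.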
Running app.py locally (Windows). UI opens but when one of the sample prompts is clicked it errors out with this message
Any ideas on what needs to be done to fix this and get it working again?
To set up a local environment I use these packages/versions.
Here is the pip list output, just in case that helps.