another-ai / stable_cascade_easy

Text-to-image with Stable Cascade (Gradio interface); requires less VRAM than the original example on the official Hugging Face page
https://ko-fi.com/shiroppo
MIT License

1080ti error, won't make image #1

Open LockMan007 opened 9 months ago

LockMan007 commented 9 months ago

D:\ai\StableCascade-Easy>call .\venv\Scripts\activate
Running on local URL: http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Loading pipeline components...:   0%|          | 0/6 [00:00<?, ?it/s]
D:\ai\StableCascade-Easy\venv\lib\site-packages\diffusers\models\lora.py:384: FutureWarning: `LoRACompatibleLinear` is deprecated and will be removed in version 1.0.0. Use of `LoRACompatibleLinear` is deprecated. Please switch to PEFT backend by installing PEFT: `pip install peft`.
  deprecate("LoRACompatibleLinear", "1.0.0", deprecation_message)
Loading pipeline components...: 100%|██████████| 6/6 [00:01<00:00,  5.63it/s]
  0%|          | 0/20 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "D:\ai\StableCascade-Easy\venv\lib\site-packages\gradio\queueing.py", line 489, in call_prediction
    output = await route_utils.call_process_api(
  File "D:\ai\StableCascade-Easy\venv\lib\site-packages\gradio\route_utils.py", line 232, in call_process_api
    output = await app.get_blocks().process_api(
  File "D:\ai\StableCascade-Easy\venv\lib\site-packages\gradio\blocks.py", line 1561, in process_api
    result = await self.call_function(
  File "D:\ai\StableCascade-Easy\venv\lib\site-packages\gradio\blocks.py", line 1179, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "D:\ai\StableCascade-Easy\venv\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "D:\ai\StableCascade-Easy\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 2134, in run_sync_in_worker_thread
    return await future
  File "D:\ai\StableCascade-Easy\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 851, in run
    result = context.run(func, *args)
  File "D:\ai\StableCascade-Easy\venv\lib\site-packages\gradio\utils.py", line 678, in wrapper
    response = f(*args, **kwargs)
  File "D:\ai\StableCascade-Easy\app.py", line 27, in image_print_create
    prior_output = prior(
  File "D:\ai\StableCascade-Easy\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\ai\StableCascade-Easy\venv\lib\site-packages\diffusers\pipelines\stable_cascade\pipeline_stable_cascade_prior.py", line 579, in __call__
    predicted_image_embedding = self.prior(
  File "D:\ai\StableCascade-Easy\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\ai\StableCascade-Easy\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\ai\StableCascade-Easy\venv\lib\site-packages\diffusers\pipelines\stable_cascade\modeling_stable_cascade_common.py", line 311, in forward
    level_outputs = self._down_encode(x, r_embed, clip)
  File "D:\ai\StableCascade-Easy\venv\lib\site-packages\diffusers\pipelines\stable_cascade\modeling_stable_cascade_common.py", line 256, in _down_encode
    x = block(x, clip)
  File "D:\ai\StableCascade-Easy\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\ai\StableCascade-Easy\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\ai\StableCascade-Easy\venv\lib\site-packages\diffusers\pipelines\wuerstchen\modeling_wuerstchen_common.py", line 108, in forward
    x = x + self.attention(norm_x, encoder_hidden_states=kv)
  File "D:\ai\StableCascade-Easy\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "D:\ai\StableCascade-Easy\venv\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\ai\StableCascade-Easy\venv\lib\site-packages\diffusers\models\attention_processor.py", line 522, in forward
    return self.processor(
  File "D:\ai\StableCascade-Easy\venv\lib\site-packages\diffusers\models\attention_processor.py", line 1254, in __call__
    hidden_states = F.scaled_dot_product_attention(
RuntimeError: cutlassF: no kernel found to launch!

another-ai commented 9 months ago

RuntimeError: cutlassF: no kernel found to launch!

I googled it, and it says that torch==2.1.2+cu118 is incompatible with the 10xx series. You could try installing an earlier version of torch, but I'm not sure whether the rest of the libraries will still work...
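As a quick sanity check, the cards reported failing in this thread (GTX 1080 Ti, Tesla P40) are both Pascal parts with CUDA compute capability 6.1, which predates the architectures the fused `scaled_dot_product_attention` kernels target. A small heuristic sketch (the cutoff at Volta/7.0 is my assumption based on the reports here, not an official compatibility table); on a machine with torch installed you can read the capability with `torch.cuda.get_device_capability(0)`:

```python
def is_pascal_or_older(capability: tuple[int, int]) -> bool:
    """True for CUDA compute capability below 7.0 (pre-Volta),
    e.g. GTX 10xx and Tesla P40, which are both sm_61."""
    return capability < (7, 0)

# GTX 1080 Ti / Tesla P40 report (6, 1); an RTX 30xx card reports (8, 6).
print(is_pascal_or_older((6, 1)))  # True  -> likely hits "cutlassF: no kernel found to launch!"
print(is_pascal_or_older((8, 6)))  # False
```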

aaronnewsome commented 9 months ago

I get the same error on Tesla P40 GPU.

(.venv) root@stable-cascade-flash:~/stable_cascade_easy-main# python app.py 
Running on local URL:  http://0.0.0.0:7860

To create a public link, set `share=True` in `launch()`.
Loading pipeline components...:   0%|                                                                                                                                           | 0/6 [00:00<?, ?it/s]The config attributes {'c_in': 16} were passed to StableCascadeUnet, but are not expected and will be ignored. Please verify your config.json configuration file.
Loading pipeline components...: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 6/6 [00:01<00:00,  3.12it/s]
  0%|                                                                                                                                                                          | 0/20 [00:00<?, ?it/s][W NNPACK.cpp:64] Could not initialize NNPACK! Reason: Unsupported hardware.
  0%|                                                                                                                                                                          | 0/20 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/root/stable_cascade_easy-main/.venv/lib/python3.10/site-packages/gradio/queueing.py", line 489, in call_prediction
    output = await route_utils.call_process_api(
  File "/root/stable_cascade_easy-main/.venv/lib/python3.10/site-packages/gradio/route_utils.py", line 232, in call_process_api
    output = await app.get_blocks().process_api(
  File "/root/stable_cascade_easy-main/.venv/lib/python3.10/site-packages/gradio/blocks.py", line 1561, in process_api
    result = await self.call_function(
  File "/root/stable_cascade_easy-main/.venv/lib/python3.10/site-packages/gradio/blocks.py", line 1179, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/root/stable_cascade_easy-main/.venv/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/root/stable_cascade_easy-main/.venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2134, in run_sync_in_worker_thread
    return await future
  File "/root/stable_cascade_easy-main/.venv/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 851, in run
    result = context.run(func, *args)
  File "/root/stable_cascade_easy-main/.venv/lib/python3.10/site-packages/gradio/utils.py", line 678, in wrapper
    response = f(*args, **kwargs)
  File "/root/stable_cascade_easy-main/app.py", line 37, in image_print_create
    prior_output = prior(
  File "/root/stable_cascade_easy-main/.venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/root/stable_cascade_easy-main/.venv/lib/python3.10/site-packages/diffusers/pipelines/stable_cascade/pipeline_stable_cascade_prior.py", line 556, in __call__
    predicted_image_embedding = self.prior(
  File "/root/stable_cascade_easy-main/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/root/stable_cascade_easy-main/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/stable_cascade_easy-main/.venv/lib/python3.10/site-packages/diffusers/pipelines/stable_cascade/modeling_stable_cascade_common.py", line 316, in forward
    level_outputs = self._down_encode(x, r_embed, clip)
  File "/root/stable_cascade_easy-main/.venv/lib/python3.10/site-packages/diffusers/pipelines/stable_cascade/modeling_stable_cascade_common.py", line 255, in _down_encode
    x = block(x, clip)
  File "/root/stable_cascade_easy-main/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/root/stable_cascade_easy-main/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/stable_cascade_easy-main/.venv/lib/python3.10/site-packages/diffusers/pipelines/wuerstchen/modeling_wuerstchen_common.py", line 108, in forward
    x = x + self.attention(norm_x, encoder_hidden_states=kv)
  File "/root/stable_cascade_easy-main/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/root/stable_cascade_easy-main/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/stable_cascade_easy-main/.venv/lib/python3.10/site-packages/diffusers/models/attention_processor.py", line 522, in forward
    return self.processor(
  File "/root/stable_cascade_easy-main/.venv/lib/python3.10/site-packages/diffusers/models/attention_processor.py", line 1254, in __call__
    hidden_states = F.scaled_dot_product_attention(
RuntimeError: cutlassF: no kernel found to launch!

launch8484 commented 9 months ago

Add these two lines to app.py:

torch.backends.cuda.enable_mem_efficient_sdp(False)
torch.backends.cuda.enable_flash_sdp(False)
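For context, a sketch of how that fix sits near the top of app.py (the surrounding structure is my assumption, not the script's exact contents): disabling the mem-efficient and flash SDPA backends makes PyTorch fall back to the plain math implementation of `scaled_dot_product_attention`, which is slower but has kernels for every GPU, including Pascal.

```python
import torch

# Disable the fused SDPA backends that have no kernel for Pascal
# (compute capability 6.1) GPUs such as the GTX 1080 Ti and Tesla P40.
# PyTorch then falls back to the math backend, which works everywhere.
torch.backends.cuda.enable_mem_efficient_sdp(False)
torch.backends.cuda.enable_flash_sdp(False)

# ... rest of app.py (pipeline setup, Gradio interface) unchanged ...
```

These calls must run before the first attention forward pass; placing them right after `import torch` is the safest spot.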