not-mc-ride opened this issue 1 year ago
Are you trying to use the web UI or the CLI? Just in case you have the wrong .exe: for the app (web UI) you want the .exe that doesn't have "cli" in the filename.
F:\ai>shark_sd_20230423_700.exe
shark_tank local cache is located at C:\Users\battl.local/shark_tank/ . You may change this by setting the --local_tank_cache= flag
vulkan devices are available.
cuda devices are not available.
diffusers\models\cross_attention.py:30: FutureWarning: Importing from cross_attention is deprecated. Please import from diffusers.models.attention_processor instead.
Running on local URL: http://0.0.0.0:8080
To create a public link, set share=True in launch()
Found device Radeon RX 580 Series. Using target triple rdna2-unknown-windows.
Using tuned models for stabilityai/stable-diffusion-2-1/fp16/vulkan://00000000-0100-0000-0000-000000000000.
Downloading (…)cheduler_config.json: 100%|████████████████████████████████████████████████████| 345/345 [00:00<?, ?B/s]
huggingface_hub\file_download.py:133: UserWarning: huggingface_hub cache-system uses symlinks by default to efficiently store duplicated files but your machine does not support them in C:\Users\battl.cache\huggingface\hub. Caching files will still work but in a degraded version that might require more space on your disk. This warning can be disabled by setting the HF_HUB_DISABLE_SYMLINKS_WARNING environment variable. For more details, see https://huggingface.co/docs/huggingface_hub/how-to-cache#limitations.
To support symlinks on Windows, you either need to activate Developer Mode or to run Python as an administrator. In order to see activate developer mode, see this article: https://docs.microsoft.com/en-us/windows/apps/get-started/enable-your-device-for-development
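As an aside on the symlink warning above: the warning text itself names the HF_HUB_DISABLE_SYMLINKS_WARNING environment variable, so it can be silenced without changing behavior (caching still works, just less space-efficiently). A minimal sketch, set before huggingface_hub is imported:

```python
import os

# Silence the symlink warning quoted above. The variable name comes from
# the warning text itself; set it before huggingface_hub is imported.
os.environ["HF_HUB_DISABLE_SYMLINKS_WARNING"] = "1"
```

Alternatively, enabling Windows Developer Mode (as the warning suggests) makes symlinks work and removes the warning at the source.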
loading existing vmfb from: F:\ai\euler_scale_model_input_1_512_512_vulkan_fp16.vmfb
loading existing vmfb from: F:\ai\euler_step_1_512_512_vulkan_fp16.vmfb
use_tuned? sharkify: True
_1_64_512_512_fp16_tuned_stable-diffusion-2-1-base
Downloading (…)tokenizer/vocab.json: 100%|████████████████████████████████████████| 1.06M/1.06M [00:00<00:00, 3.06MB/s]
Downloading (…)tokenizer/merges.txt: 100%|██████████████████████████████████████████| 525k/525k [00:00<00:00, 3.57MB/s]
Downloading (…)cial_tokens_map.json: 100%|████████████████████████████████████████████| 460/460 [00:00<00:00, 38.6kB/s]
Downloading (…)okenizer_config.json: 100%|█████████████████████████████████████████████| 824/824 [00:00<00:00, 732kB/s]
transformers\modeling_utils.py:429: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
with safe_open(checkpoint_file, framework="pt") as f:
Traceback (most recent call last):
File "gradio\routes.py", line 401, in run_predict
File "gradio\blocks.py", line 1302, in process_api
File "gradio\blocks.py", line 1039, in call_function
File "anyio\to_thread.py", line 31, in run_sync
File "anyio_backends_asyncio.py", line 937, in run_sync_in_worker_thread
File "anyio_backends_asyncio.py", line 867, in run
File "gradio\utils.py", line 491, in async_iteration
File "ui\txt2img_ui.py", line 177, in txt2img_inf
File "apps\stable_diffusion\src\pipelines\pipeline_shark_stable_diffusion_txt2img.py", line 114, in generate_images
File "apps\stable_diffusion\src\pipelines\pipeline_shark_stable_diffusion_utils.py", line 376, in encode_prompts_weight
File "apps\stable_diffusion\src\pipelines\pipeline_shark_stable_diffusion_utils.py", line 86, in load_clip
File "apps\stable_diffusion\src\models\model_wrappers.py", line 601, in clip
File "apps\stable_diffusion\src\models\model_wrappers.py", line 594, in clip
File "apps\stable_diffusion\src\models\model_wrappers.py", line 531, in get_clip
File "apps\stable_diffusion\src\utils\utils.py", line 120, in compile_through_fx
File "apps\stable_diffusion\src\utils\utils.py", line 47, in _load_vmfb
File "shark\shark_inference.py", line 207, in load_module
File "shark\iree_utils\compile_utils.py", line 331, in load_flatbuffer
SystemExit: [Errno 13] Permission denied
Found device Radeon RX 580 Series. Using target triple rdna2-unknown-windows.
Using tuned models for CompVis/stable-diffusion-v1-4/fp16/vulkan://00000000-0100-0000-0000-000000000000.
Downloading (…)cheduler_config.json: 100%|████████████████████████████████████████████████████| 313/313 [00:00<?, ?B/s]
loading existing vmfb from: F:\ai\euler_scale_model_input_1_512_512_vulkan_fp16.vmfb
loading existing vmfb from: F:\ai\euler_step_1_512_512_vulkan_fp16.vmfb
use_tuned? sharkify: True
_1_64_512_512_fp16_tuned_stable-diffusion-v1-4
Downloading (…)tokenizer/vocab.json: 100%|████████████████████████████████████████| 1.06M/1.06M [00:00<00:00, 3.36MB/s]
Downloading (…)tokenizer/merges.txt: 100%|██████████████████████████████████████████| 525k/525k [00:00<00:00, 3.18MB/s]
Downloading (…)cial_tokens_map.json: 100%|█████████████████████████████████████████████| 472/472 [00:00<00:00, 157kB/s]
Downloading (…)okenizer_config.json: 100%|█████████████████████████████████████████████| 806/806 [00:00<00:00, 403kB/s]
Downloading (…)_encoder/config.json: 100%|█████████████████████████████████████████████| 592/592 [00:00<00:00, 296kB/s]
Downloading model.safetensors: 100%|████████████████████████████████████████████████| 492M/492M [02:56<00:00, 2.79MB/s]
No vmfb found. Compiling and saving to F:\ai\clip_1_64_512_512_fp16_tuned_stable-diffusion-v1-4_vulkan.vmfb
Using target triple -iree-vulkan-target-triple=rdna2-unknown-windows from command line args
Saved vmfb in F:\ai\clip_1_64_512_512_fp16_tuned_stable-diffusion-v1-4_vulkan.vmfb.
Downloading (…)ain/unet/config.json: 100%|████████████████████████████████████████████████████| 743/743 [00:00<?, ?B/s]
Downloading (…)ch_model.safetensors: 100%|████████████████████████████████████████| 3.44G/3.44G [16:02<00:00, 3.57MB/s]
mat1 and mat2 shapes cannot be multiplied (128x1024 and 768x320)
Retrying with a different base model configuration
[enforce fail at ..\c10\core\impl\alloc_cpu.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 1073741824 bytes.
Retrying with a different base model configuration
Retrying with a different base model configuration
Retrying with a different base model configuration
Retrying with a different base model configuration
Traceback (most recent call last):
File "gradio\routes.py", line 401, in run_predict
File "gradio\blocks.py", line 1302, in process_api
File "gradio\blocks.py", line 1039, in call_function
File "anyio\to_thread.py", line 31, in run_sync
File "anyio_backends_asyncio.py", line 937, in run_sync_in_worker_thread
File "anyio_backends_asyncio.py", line 867, in run
File "gradio\utils.py", line 491, in async_iteration
File "ui\txt2img_ui.py", line 177, in txt2img_inf
File "apps\stable_diffusion\src\pipelines\pipeline_shark_stable_diffusion_txt2img.py", line 122, in generate_images
File "apps\stable_diffusion\src\pipelines\pipeline_shark_stable_diffusion_utils.py", line 203, in produce_img_latents
File "apps\stable_diffusion\src\pipelines\pipeline_shark_stable_diffusion_utils.py", line 103, in load_unet
File "apps\stable_diffusion\src\models\model_wrappers.py", line 640, in unet
File "apps\stable_diffusion\src\models\model_wrappers.py", line 635, in unet
File "apps\stable_diffusion\src\models\model_wrappers.py", line 59, in check_compilation
SystemExit: Could not compile Unet. Please create an issue with the detailed log at https://github.com/nod-ai/SHARK/issues
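For what it's worth, the "mat1 and mat2 shapes cannot be multiplied (128x1024 and 768x320)" line higher up is a plain inner-dimension mismatch: a (128, 1024) tensor can only be multiplied by a (1024, n) matrix. A 1024-wide text embedding hitting 768-wide weights would be consistent with an SD 2.x text encoder being probed against SD 1.x-style UNet weights during the "Retrying with a different base model configuration" loop, though that interpretation is my assumption. A minimal sketch of the rule:

```python
def can_matmul(a_shape, b_shape):
    # (m, k) @ (k2, n) is only defined when k == k2 (inner dimensions match).
    return a_shape[1] == b_shape[0]

print(can_matmul((128, 1024), (768, 320)))   # the shapes from the log: False
print(can_matmul((128, 1024), (1024, 320)))  # matching inner dims: True
```

The later `[enforce fail ... not enough memory: you tried to allocate 1073741824 bytes]` line is a separate problem: a 1 GiB (1024**3 bytes) CPU allocation failing while the retry loop churns through candidate configurations.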
I got this error:

F:\ai>shark_sd_cli_20230423_700.exe
shark_tank local cache is located at C:\Users\battl.local/shark_tank/ . You may change this by setting the --local_tank_cache= flag
diffusers\models\cross_attention.py:30: FutureWarning: Importing from cross_attention is deprecated. Please import from diffusers.models.attention_processor instead.
Found device Radeon RX 580 Series. Using target triple rdna2-unknown-windows.
Using tuned models for stabilityai/stable-diffusion-2-1-base/fp16/vulkan://00000000-0100-0000-0000-000000000000.
Downloading (…)cheduler_config.json: 100%|█████████████████| 346/346 [00:00<00:00, 178kB/s]
huggingface_hub\file_download.py:133: UserWarning: huggingface_hub cache-system uses symlinks by default to efficiently store duplicated files but your machine does not support them in C:\Users\battl.cache\huggingface\hub. Caching files will still work but in a degraded version that might require more space on your disk. This warning can be disabled by setting the HF_HUB_DISABLE_SYMLINKS_WARNING environment variable. For more details, see https://huggingface.co/docs/huggingface_hub/how-to-cache#limitations.
To support symlinks on Windows, you either need to activate Developer Mode or to run Python as an administrator. In order to see activate developer mode, see this article: https://docs.microsoft.com/en-us/windows/apps/get-started/enable-your-device-for-development
No vmfb found. Compiling and saving to F:\ai\euler_scale_model_input_1_512_512_vulkan_fp16.vmfb
Using target triple -iree-vulkan-target-triple=rdna2-unknown-windows from command line args
Saved vmfb in F:\ai\euler_scale_model_input_1_512_512_vulkan_fp16.vmfb.
No vmfb found. Compiling and saving to F:\ai\euler_step_1_512_512_vulkan_fp16.vmfb
Using target triple -iree-vulkan-target-triple=rdna2-unknown-windows from command line args
Saved vmfb in F:\ai\euler_step_1_512_512_vulkan_fp16.vmfb.
use_tuned? sharkify: True
_1_64_512_512_fp16_tuned_stable-diffusion-2-1-base
Downloading (…)tokenizer/vocab.json: 100%|█████████████████| 1.06M/1.06M [00:00<00:00, 11.0MB/s]
Downloading (…)tokenizer/merges.txt: 100%|█████████████████| 525k/525k [00:00<00:00, 7.13MB/s]
Downloading (…)cial_tokens_map.json: 100%|█████████████████| 460/460 [00:00<00:00, 151kB/s]
Downloading (…)okenizer_config.json: 100%|█████████████████| 807/807 [00:00<00:00, 240kB/s]
Downloading (…)_encoder/config.json: 100%|█████████████████| 613/613 [00:00<00:00, 203kB/s]
Downloading model.safetensors: 100%|█████████████████| 1.36G/1.36G [00:47<00:00, 28.9MB/s]
transformers\modeling_utils.py:429: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
with safe_open(checkpoint_file, framework="pt") as f:
No vmfb found. Compiling and saving to F:\ai\clip_1_64_512_512_fp16_tuned_stable-diffusion-2-1-base_vulkan.vmfb
Using target triple -iree-vulkan-target-triple=rdna2-unknown-windows from command line args
Saved vmfb in F:\ai\clip_1_64_512_512_fp16_tuned_stable-diffusion-2-1-base_vulkan.vmfb.
Downloading (…)ain/unet/config.json: 100%|█████████████████| 911/911 [00:00<00:00, 130kB/s]
Downloading (…)ch_model.safetensors: 38%|███████ | 1.30G/3.46G [00:16<00:26, 80.8MB/s]
Downloading (…)on_pytorch_model.bin: 0%| | 10.5M/3.46G [00:00<01:07, 51.4MB/s]
Can't load the model for 'stabilityai/stable-diffusion-2-1-base'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'stabilityai/stable-diffusion-2-1-base' is the correct path to a directory containing a file named diffusion_pytorch_model.bin
Retrying with a different base model configuration
Downloading (…)on_pytorch_model.bin: 0%| | 10.5M/3.46G [00:00<01:10, 49.1MB/s]
Downloading (…)ch_model.safetensors: 0%| | 10.5M/3.46G [00:00<01:05, 52.6MB/s]
Downloading (…)on_pytorch_model.bin: 0%| | 10.5M/3.46G [00:00<01:08, 50.7MB/s]
Can't load the model for 'stabilityai/stable-diffusion-2-1-base'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'stabilityai/stable-diffusion-2-1-base' is the correct path to a directory containing a file named diffusion_pytorch_model.bin
Retrying with a different base model configuration
Downloading (…)on_pytorch_model.bin: 0%| | 10.5M/3.46G [00:00<01:10, 49.0MB/s]
Downloading (…)ch_model.safetensors: 0%| | 10.5M/3.46G [00:00<01:06, 52.1MB/s]
Downloading (…)on_pytorch_model.bin: 0%| | 10.5M/3.46G [00:00<01:04, 53.3MB/s]
Can't load the model for 'stabilityai/stable-diffusion-2-1-base'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'stabilityai/stable-diffusion-2-1-base' is the correct path to a directory containing a file named diffusion_pytorch_model.bin
Retrying with a different base model configuration
Downloading (…)on_pytorch_model.bin: 0%| | 10.5M/3.46G [00:00<01:07, 51.4MB/s]
Downloading (…)ch_model.safetensors: 0%| | 10.5M/3.46G [00:00<01:10, 49.3MB/s]
Downloading (…)on_pytorch_model.bin: 0%| | 10.5M/3.46G [00:00<01:07, 51.4MB/s]
Can't load the model for 'stabilityai/stable-diffusion-2-1-base'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'stabilityai/stable-diffusion-2-1-base' is the correct path to a directory containing a file named diffusion_pytorch_model.bin
Retrying with a different base model configuration
Downloading (…)on_pytorch_model.bin: 0%| | 10.5M/3.46G [00:00<01:09, 49.8MB/s]
Downloading (…)ch_model.safetensors: 0%| | 10.5M/3.46G [00:00<01:05, 53.1MB/s]
Downloading (…)on_pytorch_model.bin: 0%| | 10.5M/3.46G [00:00<01:05, 52.9MB/s]
Can't load the model for 'stabilityai/stable-diffusion-2-1-base'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'stabilityai/stable-diffusion-2-1-base' is the correct path to a directory containing a file named diffusion_pytorch_model.bin
Retrying with a different base model configuration
Downloading (…)on_pytorch_model.bin: 0%| | 10.5M/3.46G [00:00<01:07, 51.4MB/s]
Could not compile Unet. Please create an issue with the detailed log at https://github.com/nod-ai/SHARK/issues

And I have no clue how to fix it.