nod-ai / SHARK

SHARK - High Performance Machine Learning Distribution
Apache License 2.0

Cannot compile the model #1099


lomovoyPlayer commented 1 year ago

```
shark_tank local cache is located at C:\Users\User.local/shark_tank/ . You may change this by setting the --local_tank_cache= flag
vulkan devices are available.
cuda devices are not available.
Running on local URL: http://0.0.0.0:8080

To create a public link, set `share=True` in `launch()`.
Found device AMD Radeon(TM) RX 6500 XT. Using target triple rdna2-unknown-windows.
Using tuned models for stabilityai/stable-diffusion-2-1-base/fp16/vulkan://00000000-0300-0000-0000-000000000000.
huggingface_hub\utils\_hf_folder.py:92: UserWarning: A token has been found in C:\Users\User\.huggingface\token. This is the old path where tokens were stored. The new location is C:\Users\User\.cache\huggingface\token which is configurable using HF_HOME environment variable. Your token has been copied to this new location. You can now safely delete the old token file manually or use `huggingface-cli logout`.
Downloading (…)cheduler_config.json: 100%|██████████| 346/346 [00:00<00:00, 347kB/s]
huggingface_hub\file_download.py:129: UserWarning: `huggingface_hub` cache-system uses symlinks by default to efficiently store duplicated files but your machine does not support them in C:\Users\User.cache\huggingface\diffusers. Caching files will still work but in a degraded version that might require more space on your disk. This warning can be disabled by setting the HF_HUB_DISABLE_SYMLINKS_WARNING environment variable. For more details, see https://huggingface.co/docs/huggingface_hub/how-to-cache#limitations. To support symlinks on Windows, you either need to activate Developer Mode or to run Python as an administrator. In order to see activate developer mode, see this article: https://docs.microsoft.com/en-us/windows/apps/get-started/enable-your-device-for-development
torch\jit\_check.py:172: UserWarning: The TorchScript type system doesn't support instance-level annotations on empty non-base types in `__init__`. Instead, either 1) use a type annotation in the class body, or 2) wrap the type in `torch.jit.Attribute`.
  warnings.warn("The TorchScript type system doesn't support "
No vmfb found. Compiling and saving to D:\sd shark\euler_scale_model_input_1_512_512fp16.vmfb
Using target triple -iree-vulkan-target-triple=rdna2-unknown-windows from command line args
Saved vmfb in D:\sd shark\euler_scale_model_input_1_512_512fp16.vmfb.
ERROR: [Loader Message] Code 0 : loader_get_json: Failed to open JSON file D:\Overwolf\0.181.0.11\ow-vulkan-overlay64.json
ERROR: [Loader Message] Code 0 : loader_get_json: Failed to open JSON file D:\Overwolf\0.181.0.11\obs\data\obs-plugins\win-capture\ow-graphics-vulkan64.json
No vmfb found. Compiling and saving to D:\sd shark\euler_step_1_512_512fp16.vmfb
Using target triple -iree-vulkan-target-triple=rdna2-unknown-windows from command line args
Saved vmfb in D:\sd shark\euler_step_1_512_512fp16.vmfb.
ERROR: [Loader Message] Code 0 : loader_get_json: Failed to open JSON file D:\Overwolf\0.181.0.11\ow-vulkan-overlay64.json
ERROR: [Loader Message] Code 0 : loader_get_json: Failed to open JSON file D:\Overwolf\0.181.0.11\obs\data\obs-plugins\win-capture\ow-graphics-vulkan64.json
Inferring base model configuration.
Downloading (…)_model.safetensors: 100%|██████████| 3.46G/3.46G [1:08:35<00:00, 842kB/s]
Downloading (…)ain/unet/config.json: 100%|██████████| 911/911 [00:00<00:00, 914kB/s]
safetensors\torch.py:98: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
torch\_utils.py:776: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
  return self.fget.__get__(instance, owner)()
torch\storage.py:899: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
  storage = cls(wrap_storage=untyped_storage)
torch\fx\node.py:250: UserWarning: Trying to prepend a node to itself. This behavior has no effect on the graph.
  warnings.warn("Trying to prepend a node to itself. This behavior has no effect on the graph.")
Loading Winograd config file from C:\Users\User.local/shark_tank/configs\unet_winograd_vulkan.json
100%|██████████| 107/107 [00:00<00:00, 461B/s]
100%|██████████| 107/107 [00:00<00:00, 1.79kB/s]
Loading lowering config file from C:\Users\User.local/shark_tank/configs\unet_v2_1base_fp16_vulkan_rdna2.json
100%|██████████| 24.2k/24.2k [00:00<00:00, 116kB/s]
100%|██████████| 24.2k/24.2k [00:00<00:00, 320kB/s]
Applying tuned configs on unet1_64_512_512_fp16_tuned_stable-diffusion-2-1-base_vulkan-00000000-0300-0000-0000-000000000000
Retrying with a different base model configuration
Retrying with a different base model configuration
Retrying with a different base model configuration
Retrying with a different base model configuration
Traceback (most recent call last):
  File "gradio\routes.py", line 384, in run_predict
  File "gradio\blocks.py", line 1024, in process_api
  File "gradio\blocks.py", line 836, in call_function
  File "anyio\to_thread.py", line 31, in run_sync
  File "anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
  File "anyio\_backends\_asyncio.py", line 867, in run
  File "apps\stable_diffusion\scripts\txt2img.py", line 117, in txt2img_inf
  File "apps\stable_diffusion\src\pipelines\pipeline_shark_stable_diffusion_utils.py", line 232, in from_pretrained
  File "apps\stable_diffusion\src\models\model_wrappers.py", line 398, in __call__
SystemExit: Cannot compile the model. Please create an issue with the detailed log at https://github.com/nod-ai/SHARK/issues
```

yzhang93 commented 1 year ago

Can you try the --clear_all flag, and check whether your local tank directory (C:\Users\User.local/shark_tank/ or C:\Users.local/shark_tank/) is created? If not, you can redirect the local tank with the --local_tank_cache= flag.
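For reference, the two suggested flags can be combined in a single invocation. A minimal sketch, assuming the txt2img script that appears in the traceback is your entry point and using a placeholder cache directory (adjust both to your setup):

```shell
# Hypothetical invocation; substitute your actual SHARK launcher and a writable path.
# --clear_all discards previously generated artifacts so compilation starts clean;
# --local_tank_cache redirects the shark_tank cache to an explicit directory.
python apps\stable_diffusion\scripts\txt2img.py ^
    --clear_all ^
    --local_tank_cache="D:\shark_tank_cache"
```

Pointing the cache at a short path without spaces also sidesteps the mixed `C:\Users\User.local/shark_tank/` separators shown in the log.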