stonedDiscord opened this issue 1 year ago
Thank you for the report. We tried to roll out OTF (on the fly) tuning for custom models for 7900xtx but had to revert because of this issue. We should reland soon.
I tried to get andite/anything-v4.0 running on my 7900 XTX, sadly without luck.
shark_tank local cache is located at X.local/shark_tank/ . You may change this by setting the --local_tank_cache= flag
vulkan devices are available.
cuda devices are not available.
Running on local URL: http://0.0.0.0:8080
To create a public link, set `share=True` in `launch()`.
Found device AMD Radeon RX 7900 XTX. Using target triple rdna3-7900-windows.
Using tuned models for andite/anything-v4.0/fp16/vulkan://00000000-0300-0000-0000-000000000000.
torch\jit\_check.py:172: UserWarning: The TorchScript type system doesn't support instance-level annotations on empty non-base types in `__init__`. Instead, either 1) use a type annotation in the class body, or 2) wrap the type in `torch.jit.Attribute`.
warnings.warn("The TorchScript type system doesn't support "
loading existing vmfb from: XDownloads\Neuer Ordner\euler_scale_model_input_1_512_512fp16.vmfb
WARNING: [Loader Message] Code 0 : windows_read_data_files_in_registry: Registry lookup failed to get layer manifest files.
loading existing vmfb from: XDownloads\Neuer Ordner\euler_step_1_512_512fp16.vmfb
WARNING: [Loader Message] Code 0 : windows_read_data_files_in_registry: Registry lookup failed to get layer manifest files.
Inferring base model configuration.
Cannot initialize model with low cpu memory usage because `accelerate` was not found in the environment. Defaulting to `low_cpu_mem_usage=False`. It is strongly recommended to install `accelerate` for faster and less memory-intense model loading. You can do so with:
pip install accelerate
.
Retrying with a different base model configuration
Cannot initialize model with low cpu memory usage because `accelerate` was not found in the environment. Defaulting to `low_cpu_mem_usage=False`. It is strongly recommended to install `accelerate` for faster and less memory-intense model loading. You can do so with:
pip install accelerate
.
torch\fx\node.py:250: UserWarning: Trying to prepend a node to itself. This behavior has no effect on the graph.
warnings.warn("Trying to prepend a node to itself. This behavior has no effect on the graph.")
Loading Winograd config file from X.local/shark_tank/configs/unet_winograd_vulkan.json
100%|███████████████████████████████████████████████████████████████████████████████████| 107/107 [00:00<00:00, 936B/s]
100%|███████████████████████████████████████████████████████████████████████████████████| 107/107 [00:00<00:00, 744B/s]
Retrying with a different base model configuration
Cannot initialize model with low cpu memory usage because `accelerate` was not found in the environment. Defaulting to `low_cpu_mem_usage=False`. It is strongly recommended to install `accelerate` for faster and less memory-intense model loading. You can do so with:
pip install accelerate
.
Retrying with a different base model configuration
Cannot initialize model with low cpu memory usage because `accelerate` was not found in the environment. Defaulting to `low_cpu_mem_usage=False`. It is strongly recommended to install `accelerate` for faster and less memory-intense model loading. You can do so with:
pip install accelerate
.
Retrying with a different base model configuration
Traceback (most recent call last):
File "gradio\routes.py", line 374, in run_predict
File "gradio\blocks.py", line 1017, in process_api
File "gradio\blocks.py", line 835, in call_function
File "anyio\to_thread.py", line 31, in run_sync
File "anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
File "anyio\_backends\_asyncio.py", line 867, in run
File "apps\stable_diffusion\scripts\txt2img.py", line 116, in txt2img_inf
File "apps\stable_diffusion\src\pipelines\pipeline_shark_stable_diffusion_utils.py", line 220, in from_pretrained
File "apps\stable_diffusion\src\models\model_wrappers.py", line 348, in __call__
SystemExit: Cannot compile the model. Please create an issue with the detailed log at https://github.com/nod-ai/SHARK/issues
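A side note on the last line of this log: it ends in `SystemExit` rather than an ordinary exception because, per the traceback, `model_wrappers.py` calls `sys.exit(...)` once every base model configuration has been retried. A minimal sketch of that behavior — the function name and message are taken from the traceback above, everything else is illustrative:

```python
import sys


def mlir_import():
    # When every base model configuration fails, SHARK's model_wrappers.py
    # calls sys.exit(...) with a message string. sys.exit raises SystemExit,
    # which is why the Gradio worker thread surfaces it as a full traceback
    # instead of logging a plain error.
    sys.exit(
        "Cannot compile the model. Please create an issue with the "
        "detailed log at https://github.com/nod-ai/SHARK/issues"
    )


try:
    mlir_import()
except SystemExit as exc:
    # The message travels on SystemExit.code.
    print(exc.code)
```

So the `SystemExit` here is an intentional abort with a user-facing message, not a crash in Gradio or anyio themselves.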
apparently it is a troll model please use the real anythingv3.
Really? I used OnnxDiffusersUI before and was getting better results. Maybe that was because of its new built-in KDPM2 scheduler. I am currently testing nod-ai and having some difficulties, since I have the feeling my prompts are cut off or not respected as much as in OnnxDiffusersUI. (Or maybe I use too many prompt terms? At least my last prompt terms are usually not used, even when I put them in parentheses.)
I just tried to use nitrosocke/Nitro-Diffusion and hakurei/waifu-diffusion. I'm using the current version (last commit was 962470f61046467c351a6cb65ab1a79fbfbd2ff2) and I also still get error messages:
(shark.venv) PS C:\Users\marvi\SHARK\apps\stable_diffusion\web> python .\index.py
shark_tank local cache is located at C:\Users\marvi\.local/shark_tank/ . You may change this by setting the --local_tank_cache= flag
vulkan devices are available.
cuda devices are not available.
Running on local URL: http://0.0.0.0:8080
To create a public link, set `share=True` in `launch()`.
Found device AMD Radeon RX 7900 XTX. Using target triple rdna3-7900-windows.
Using tuned models for hakurei/waifu-diffusion/fp16/vulkan://00000000-0d00-0000-0000-000000000000.
C:\Users\marvi\SHARK\shark.venv\Lib\site-packages\torch\jit\_check.py:172: UserWarning: The TorchScript type system doesn't support instance-level annotations on empty non-base types in `__init__`. Instead, either 1) use a type annotation in the class body, or 2) wrap the type in `torch.jit.Attribute`.
warnings.warn("The TorchScript type system doesn't support "
No vmfb found. Compiling and saving to C:\Users\marvi\SHARK\apps\stable_diffusion\web\euler_scale_model_input_1_512_512fp16.vmfb
Using target triple -iree-vulkan-target-triple=rdna3-7900-windows from command line args
Saved vmfb in C:\Users\marvi\SHARK\apps\stable_diffusion\web\euler_scale_model_input_1_512_512fp16.vmfb.
WARNING: [Loader Message] Code 0 : Layer name GalaxyOverlayVkLayer does not conform to naming standard (Policy #LLP_LAYER_3)
WARNING: [Loader Message] Code 0 : Layer name GalaxyOverlayVkLayer_VERBOSE does not conform to naming standard (Policy #LLP_LAYER_3)
WARNING: [Loader Message] Code 0 : Layer name GalaxyOverlayVkLayer_DEBUG does not conform to naming standard (Policy #LLP_LAYER_3)
No vmfb found. Compiling and saving to C:\Users\marvi\SHARK\apps\stable_diffusion\web\euler_step_1_512_512fp16.vmfb
Using target triple -iree-vulkan-target-triple=rdna3-7900-windows from command line args
Saved vmfb in C:\Users\marvi\SHARK\apps\stable_diffusion\web\euler_step_1_512_512fp16.vmfb.
WARNING: [Loader Message] Code 0 : Layer name GalaxyOverlayVkLayer does not conform to naming standard (Policy #LLP_LAYER_3)
WARNING: [Loader Message] Code 0 : Layer name GalaxyOverlayVkLayer_VERBOSE does not conform to naming standard (Policy #LLP_LAYER_3)
WARNING: [Loader Message] Code 0 : Layer name GalaxyOverlayVkLayer_DEBUG does not conform to naming standard (Policy #LLP_LAYER_3)
Inferring base model configuration.
Cannot initialize model with low cpu memory usage because `accelerate` was not found in the environment. Defaulting to `low_cpu_mem_usage=False`. It is strongly recommended to install `accelerate` for faster and less memory-intense model loading. You can do so with:
pip install accelerate
.
C:\Users\marvi\SHARK\shark.venv\Lib\site-packages\torch\fx\node.py:250: UserWarning: Trying to prepend a node to itself. This behavior has no effect on the graph.
warnings.warn("Trying to prepend a node to itself. This behavior has no effect on the graph.")
Loading Winograd config file from C:\Users\marvi\.local/shark_tank/configs/unet_winograd_vulkan.json
100%|█████████████████████████████████████████████████████████| 107/107 [00:00<00:00, 842B/s]
100%|███████████████████████████████████████████████████████| 107/107 [00:00<00:00, 8.91kB/s]
Retrying with a different base model configuration
Cannot initialize model with low cpu memory usage because `accelerate` was not found in the environment. Defaulting to `low_cpu_mem_usage=False`. It is strongly recommended to install `accelerate` for faster and less memory-intense model loading. You can do so with:
pip install accelerate
.
Retrying with a different base model configuration
Cannot initialize model with low cpu memory usage because `accelerate` was not found in the environment. Defaulting to `low_cpu_mem_usage=False`. It is strongly recommended to install `accelerate` for faster and less memory-intense model loading. You can do so with:
pip install accelerate
.
Retrying with a different base model configuration
Cannot initialize model with low cpu memory usage because `accelerate` was not found in the environment. Defaulting to `low_cpu_mem_usage=False`. It is strongly recommended to install `accelerate` for faster and less memory-intense model loading. You can do so with:
pip install accelerate
.
Retrying with a different base model configuration
Traceback (most recent call last):
File "C:\Users\marvi\SHARK\shark.venv\Lib\site-packages\gradio\routes.py", line 374, in run_predict
output = await app.get_blocks().process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\marvi\SHARK\shark.venv\Lib\site-packages\gradio\blocks.py", line 1017, in process_api
result = await self.call_function(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\marvi\SHARK\shark.venv\Lib\site-packages\gradio\blocks.py", line 835, in call_function
prediction = await anyio.to_thread.run_sync(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\marvi\SHARK\shark.venv\Lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\marvi\SHARK\shark.venv\Lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
^^^^^^^^^^^^
File "C:\Users\marvi\SHARK\shark.venv\Lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
result = context.run(func, *args)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\marvi\SHARK\apps\stable_diffusion\scripts\txt2img.py", line 116, in txt2img_inf
txt2img_obj = Text2ImagePipeline.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\marvi\SHARK\apps\stable_diffusion\src\pipelines\pipeline_shark_stable_diffusion_utils.py", line 223, in from_pretrained
clip, unet, vae = mlir_import()
^^^^^^^^^^^^^
File "C:\Users\marvi\SHARK\apps\stable_diffusion\src\models\model_wrappers.py", line 383, in __call__
sys.exit(
SystemExit: Cannot compile the model. Please create an issue with the detailed log at https://github.com/nod-ai/SHARK/issues
Found device AMD Radeon RX 7900 XTX. Using target triple rdna3-7900-windows.
Using tuned models for nitrosocke/Nitro-Diffusion/fp16/vulkan://00000000-0d00-0000-0000-000000000000.
loading existing vmfb from: C:\Users\marvi\SHARK\apps\stable_diffusion\web\euler_scale_model_input_1_512_512fp16.vmfb
WARNING: [Loader Message] Code 0 : Layer name GalaxyOverlayVkLayer does not conform to naming standard (Policy #LLP_LAYER_3)
WARNING: [Loader Message] Code 0 : Layer name GalaxyOverlayVkLayer_VERBOSE does not conform to naming standard (Policy #LLP_LAYER_3)
WARNING: [Loader Message] Code 0 : Layer name GalaxyOverlayVkLayer_DEBUG does not conform to naming standard (Policy #LLP_LAYER_3)
loading existing vmfb from: C:\Users\marvi\SHARK\apps\stable_diffusion\web\euler_step_1_512_512fp16.vmfb
WARNING: [Loader Message] Code 0 : Layer name GalaxyOverlayVkLayer does not conform to naming standard (Policy #LLP_LAYER_3)
WARNING: [Loader Message] Code 0 : Layer name GalaxyOverlayVkLayer_VERBOSE does not conform to naming standard (Policy #LLP_LAYER_3)
WARNING: [Loader Message] Code 0 : Layer name GalaxyOverlayVkLayer_DEBUG does not conform to naming standard (Policy #LLP_LAYER_3)
Inferring base model configuration.
Cannot initialize model with low cpu memory usage because `accelerate` was not found in the environment. Defaulting to `low_cpu_mem_usage=False`. It is strongly recommended to install `accelerate` for faster and less memory-intense model loading. You can do so with:
pip install accelerate
.
Retrying with a different base model configuration
Cannot initialize model with low cpu memory usage because `accelerate` was not found in the environment. Defaulting to `low_cpu_mem_usage=False`. It is strongly recommended to install `accelerate` for faster and less memory-intense model loading. You can do so with:
pip install accelerate
.
Loading Winograd config file from C:\Users\marvi\.local/shark_tank/configs/unet_winograd_vulkan.json
100%|███████████████████████████████████████████████████████| 107/107 [00:00<00:00, 9.73kB/s]
100%|███████████████████████████████████████████████████████| 107/107 [00:00<00:00, 9.72kB/s]
Retrying with a different base model configuration
Cannot initialize model with low cpu memory usage because `accelerate` was not found in the environment. Defaulting to `low_cpu_mem_usage=False`. It is strongly recommended to install `accelerate` for faster and less memory-intense model loading. You can do so with:
pip install accelerate
.
Retrying with a different base model configuration
Cannot initialize model with low cpu memory usage because `accelerate` was not found in the environment. Defaulting to `low_cpu_mem_usage=False`. It is strongly recommended to install `accelerate` for faster and less memory-intense model loading. You can do so with:
pip install accelerate
.
Retrying with a different base model configuration
Traceback (most recent call last):
File "C:\Users\marvi\SHARK\shark.venv\Lib\site-packages\gradio\routes.py", line 374, in run_predict
output = await app.get_blocks().process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\marvi\SHARK\shark.venv\Lib\site-packages\gradio\blocks.py", line 1017, in process_api
result = await self.call_function(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\marvi\SHARK\shark.venv\Lib\site-packages\gradio\blocks.py", line 835, in call_function
prediction = await anyio.to_thread.run_sync(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\marvi\SHARK\shark.venv\Lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\marvi\SHARK\shark.venv\Lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
^^^^^^^^^^^^
File "C:\Users\marvi\SHARK\shark.venv\Lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
result = context.run(func, *args)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\marvi\SHARK\apps\stable_diffusion\scripts\txt2img.py", line 116, in txt2img_inf
txt2img_obj = Text2ImagePipeline.from_pretrained(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\marvi\SHARK\apps\stable_diffusion\src\pipelines\pipeline_shark_stable_diffusion_utils.py", line 223, in from_pretrained
clip, unet, vae = mlir_import()
^^^^^^^^^^^^^
File "C:\Users\marvi\SHARK\apps\stable_diffusion\src\models\model_wrappers.py", line 383, in __call__
sys.exit(
SystemExit: Cannot compile the model. Please create an issue with the detailed log at https://github.com/nod-ai/SHARK/issues
RX 7900 XTX on driver 23.1.2.
I tried Waifu Diffusion this time; I get the same error with OrangeMix Abyss2.
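A general note on the repeated `accelerate` warning in all of these logs: diffusers only enables `low_cpu_mem_usage` when the `accelerate` package is importable, and falls back to `low_cpu_mem_usage=False` otherwise. A minimal sketch of that availability check — this mirrors the idea, not diffusers' actual internals:

```python
import importlib.util


def accelerate_available() -> bool:
    """Return True if the `accelerate` package can be imported.

    When this is False, diffusers falls back to low_cpu_mem_usage=False,
    which produces the repeated warning seen in the logs above. Installing
    accelerate into shark.venv (pip install accelerate) silences it, though
    the warning itself is unrelated to the compile failure.
    """
    return importlib.util.find_spec("accelerate") is not None


print(accelerate_available())
```

The warning is therefore noise relative to the actual `SystemExit: Cannot compile the model` failure, but installing `accelerate` in the venv is still worthwhile for faster, lower-memory model loading.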