(shark.venv) PS E:\shark\apps\stable_diffusion\web> python .\index.py
shark_tank local cache is located at C:\Users\Studio.local/shark_tank/ . You may change this by setting the --local_tank_cache= flag
vulkan devices are available.
cuda devices are not available.
Running on local URL: http://0.0.0.0:8080
To create a public link, set share=True in launch().
Found device AMD Radeon RX 6600 XT. Using target triple rdna2-unknown-windows.
Using tuned models for CompVis/stable-diffusion-v1-4/fp16/vulkan://00000000-0400-0000-0000-000000000000.
E:\shark\shark.venv\Lib\site-packages\torch\jit\_check.py:172: UserWarning: The TorchScript type system doesn't support instance-level annotations on empty non-base types in `__init__`. Instead, either 1) use a type annotation in the class body, or 2) wrap the type in torch.jit.Attribute.
warnings.warn("The TorchScript type system doesn't support "
loading existing vmfb from: E:\shark\apps\stable_diffusion\web\euler_scale_model_input_1_512_512fp16.vmfb
WARNING: [Loader Message] Code 0 : windows_read_data_files_in_registry: Registry lookup failed to get layer manifest files.
loading existing vmfb from: E:\shark\apps\stable_diffusion\web\euler_step_1_512_512fp16.vmfb
WARNING: [Loader Message] Code 0 : windows_read_data_files_in_registry: Registry lookup failed to get layer manifest files.
WARNING: [Loader Message] Code 0 : windows_read_data_files_in_registry: Registry lookup failed to get layer manifest files.
WARNING: [Loader Message] Code 0 : windows_read_data_files_in_registry: Registry lookup failed to get layer manifest files.
WARNING: [Loader Message] Code 0 : windows_read_data_files_in_registry: Registry lookup failed to get layer manifest files.
Loaded vmfbs from cache and successfully fetched base model configuration.
0it [00:00, ?it/s]
Traceback (most recent call last):
File "E:\shark\shark.venv\Lib\site-packages\gradio\routes.py", line 384, in run_predict
output = await app.get_blocks().process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\shark\shark.venv\Lib\site-packages\gradio\blocks.py", line 1032, in process_api
result = await self.call_function(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\shark\shark.venv\Lib\site-packages\gradio\blocks.py", line 858, in call_function
prediction = await anyio.to_thread.run_sync(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\shark\shark.venv\Lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\shark\shark.venv\Lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
^^^^^^^^^^^^
File "E:\shark\shark.venv\Lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
result = context.run(func, *args)
^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\shark\shark.venv\Lib\site-packages\gradio\utils.py", line 448, in async_iteration
return next(iterator)
^^^^^^^^^^^^^^
File "E:\SHARK\apps\stable_diffusion\scripts\txt2img.py", line 148, in txt2img_inf
out_imgs = txt2img_obj.generate_images(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\SHARK\apps\stable_diffusion\src\pipelines\pipeline_shark_stable_diffusion_txt2img.py", line 123, in generate_images
latents = self.produce_img_latents(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\SHARK\apps\stable_diffusion\src\pipelines\pipeline_shark_stable_diffusion_utils.py", line 259, in produce_img_latents
noise_pred = self.unet(
^^^^^^^^^^
File "E:\SHARK\shark\shark_inference.py", line 138, in __call__
return self.shark_runner.run(function_name, inputs, send_to_host)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\SHARK\shark\shark_runner.py", line 93, in run
return get_results(
^^^^^^^^^^^^
File "E:\SHARK\shark\iree_utils\compile_utils.py", line 381, in get_results
result = compiled_vm[function_name](device_inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\shark\shark.venv\Lib\site-packages\iree\runtime\function.py", line 130, in __call__
self._invoke(arg_list, ret_list)
File "E:\shark\shark.venv\Lib\site-packages\iree\runtime\function.py", line 154, in _invoke
self._vm_context.invoke(self._vm_function, arg_list, ret_list)
RuntimeError: Error invoking function: :0: OK; failed to wait on timepoint;
[ 0] bytecode module.forward:134518 [