GoZippy opened 1 year ago
It's working for me; try starting the executable with `--clear_all`, that's how I fixed it.
img2img is still not working in 584.
I did, of course, run `--clear_all`.
It looks like something is asking for keys for a third-party Hugging Face base-model token... I need to look into why loading any model triggers a third-party lookup.
In any case, it looks like an auth issue is causing the compile to fail:
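For what it's worth, one common cause of this kind of auth failure is that no Hugging Face token is available in the environment, so gated base-model lookups fail. A minimal check (the env-var names are the standard `huggingface_hub` ones; that this is the cause here is only an assumption, not SHARK-specific):

```python
import os

def hf_token_present(env=os.environ):
    """Return True if a Hugging Face token env var is set.

    Assumption: a missing HF_TOKEN / HUGGING_FACE_HUB_TOKEN is what makes
    third-party base-model lookups fail; this only checks for the token,
    it does not validate it against the Hub.
    """
    return bool(env.get("HF_TOKEN") or env.get("HUGGING_FACE_HUB_TOKEN"))

if not hf_token_present():
    print("No Hugging Face token found; gated model lookups may fail.")
```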
```
E:\ImageAI\shark> .\shark_sd_20230306_584.exe --clear_all
shark_tank local cache is located at C:\Users\me\.local/shark_tank/ . You may change this by setting the --local_tank_cache= flag
CLEARING ALL, EXPECT SEVERAL MINUTES TO RECOMPILE
vulkan devices are available.
cuda devices are not available.
Running on local URL: http://0.0.0.0:8080
To create a public link, set `share=True` in `launch()`.
Found device AMD Radeon RX 6700 XT. Using target triple rdna2-unknown-windows.
Using tuned models for runwayml/stable-diffusion-v1-5/fp16/vulkan://00000000-0300-0000-0000-000000000000.
torch\jit\_check.py:172: UserWarning: The TorchScript type system doesn't support instance-level annotations on empty non-base types in `__init__`. Instead, either 1) use a type annotation in the class body, or 2) wrap the type in `torch.jit.Attribute`.
  warnings.warn("The TorchScript type system doesn't support "
No vmfb found. Compiling and saving to E:\ImageAI\shark\euler_scale_model_input_1_512_512fp16.vmfb
Using target triple -iree-vulkan-target-triple=rdna2-unknown-windows from command line args
Saved vmfb in E:\ImageAI\shark\euler_scale_model_input_1_512_512fp16.vmfb.
WARNING: [Loader Message] Code 0 : windows_read_data_files_in_registry: Registry lookup failed to get layer manifest files.
No vmfb found. Compiling and saving to E:\ImageAI\shark\euler_step_1_512_512fp16.vmfb
Using target triple -iree-vulkan-target-triple=rdna2-unknown-windows from command line args
Saved vmfb in E:\ImageAI\shark\euler_step_1_512_512fp16.vmfb.
WARNING: [Loader Message] Code 0 : windows_read_data_files_in_registry: Registry lookup failed to get layer manifest files.
Inferring base model configuration.
Retrying with a different base model configuration
Retrying with a different base model configuration
Retrying with a different base model configuration
Retrying with a different base model configuration
Traceback (most recent call last):
  File "gradio\routes.py", line 384, in run_predict
  File "gradio\blocks.py", line 1032, in process_api
  File "gradio\blocks.py", line 844, in call_function
  File "anyio\to_thread.py", line 31, in run_sync
  File "anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
  File "anyio\_backends\_asyncio.py", line 867, in run
  File "apps\stable_diffusion\scripts\img2img.py", line 152, in img2img_inf
  File "apps\stable_diffusion\src\pipelines\pipeline_shark_stable_diffusion_utils.py", line 351, in from_pretrained
  File "apps\stable_diffusion\src\models\model_wrappers.py", line 541, in __call__
SystemExit: **Cannot compile the model**. Please create an issue with the detailed log at https://github.com/nod-ai/SHARK/issues
```
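For context, the repeated "Retrying with a different base model configuration" lines followed by `SystemExit` suggest the loader walks a list of candidate base-model configurations and only bails out once every candidate fails. A minimal sketch of that pattern (all names are hypothetical; this is not SHARK's actual code):

```python
def infer_base_model_config(candidates, try_config):
    """Try candidate base-model configs in order; return the first that works.

    Hypothetical illustration of the retry loop seen in the log: each
    failure prints a retry message, and only after all candidates fail
    does the loader raise SystemExit.
    """
    for config in candidates:
        try:
            return try_config(config)
        except Exception:
            print("Retrying with a different base model configuration")
    # Every candidate failed, mirroring the SystemExit in the log above.
    raise SystemExit("Cannot compile the model")
```

Under that reading, the four retry lines mean four candidate configs were rejected (here, apparently by the auth failure) before the hard exit.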
Img2img is not working with any model or base config.