smrtknow closed this issue 4 months ago
Could you share the error shown in the terminal?
[2024-01-16 20:04:12,575]::[InvokeAI]::ERROR --> Traceback (most recent call last):
  File "C:\Users\X\invokeai.venv\Lib\site-packages\invokeai\app\services\invocation_processor\invocation_processor_default.py", line 104, in __process
    outputs = invocation.invoke_internal(
  File "C:\Users\X\invokeai.venv\Lib\site-packages\invokeai\app\invocations\baseinvocation.py", line 669, in invoke_internal
    output = self.invoke(context)
  File "C:\Users\X\invokeai.venv\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\X\invokeai.venv\Lib\site-packages\invokeai\app\invocations\latent.py", line 754, in invoke
    ip_adapter_data = self.prep_ip_adapter_data(
  File "C:\Users\X\invokeai.venv\Lib\site-packages\invokeai\app\invocations\latent.py", line 500, in prep_ip_adapter_data
    image_encoder_model_info = context.services.model_manager.get_model(
  File "C:\Users\X\invokeai.venv\Lib\site-packages\invokeai\app\services\model_manager\model_manager_default.py", line 112, in get_model
    model_info = self.mgr.get_model(
  File "C:\Users\X\invokeai.venv\Lib\site-packages\invokeai\backend\model_management\model_manager.py", line 497, in get_model
    model_context = self.cache.get_model(
  File "C:\Users\X\invokeai.venv\Lib\site-packages\invokeai\backend\model_management\model_cache.py", line 241, in get_model
    model = model_info.get_model(child_type=submodel, torch_dtype=self.precision)
  File "C:\Users\X\invokeai.venv\Lib\site-packages\invokeai\backend\model_management\models\clip_vision.py", line 63, in get_model
    model = CLIPVisionModelWithProjection.from_pretrained(self.model_path, torch_dtype=torch_dtype)
  File "C:\Users\X\invokeai.venv\Lib\site-packages\transformers\modeling_utils.py", line 3371, in from_pretrained
    with safe_open(resolved_archive_file, framework="pt") as f:
safetensors_rust.SafetensorError: Error while deserializing header: MetadataIncompleteBuffer
[2024-01-16 20:04:12,578]::[InvokeAI]::ERROR --> Error while invoking: Error while deserializing header: MetadataIncompleteBuffer
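A side note for anyone hitting this: `MetadataIncompleteBuffer` almost always means the `.safetensors` file on disk is truncated, typically from an interrupted download. Since the safetensors format starts with an 8-byte little-endian header length followed by that many bytes of JSON, you can verify a file with just the standard library, without loading the model. This is only a sketch; `check_safetensors_header` is a hypothetical helper, not part of InvokeAI:

```python
import json
import struct

def check_safetensors_header(path):
    """Parse a safetensors header by hand: an 8-byte little-endian
    length prefix, then that many bytes of JSON metadata. A truncated
    download fails here the same way the real loader does."""
    with open(path, "rb") as f:
        prefix = f.read(8)
        if len(prefix) < 8:
            raise ValueError("file too short to contain a header length")
        (header_len,) = struct.unpack("<Q", prefix)
        header = f.read(header_len)
        if len(header) < header_len:
            raise ValueError("header truncated (MetadataIncompleteBuffer)")
        return json.loads(header)
```

If this raises on your `.safetensors` file, re-downloading the model is the fix; the file itself is damaged, not the code that loads it.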
Try installing the IP-Adapter from the model install option in the launcher script or from https://models.invoke.ai - those will have the right format for you
I got an error when trying to install InvokeAI/ip_adapter_sdxl:
2024-03-24 00:37:08 [2024-03-23 16:37:08,453]::[ModelInstallService]::INFO --> Model install started: InvokeAI/ip_adapter_sdxl
2024-03-24 00:37:08 Exception in thread Thread-1 (_install_next_item):
2024-03-24 00:37:08 Traceback (most recent call last):
2024-03-24 00:37:08 File "/usr/lib/python3.11/threading.py", line 1038, in _bootstrap_inner
2024-03-24 00:37:08 self.run()
2024-03-24 00:37:08 File "/usr/lib/python3.11/threading.py", line 975, in run
2024-03-24 00:37:08 self._target(*self._args, **self._kwargs)
2024-03-24 00:37:08 File "/opt/invokeai/invokeai/app/services/model_install/model_install_default.py", line 458, in _install_next_item
2024-03-24 00:37:08 self._register_or_install(job)
2024-03-24 00:37:08 File "/opt/invokeai/invokeai/app/services/model_install/model_install_default.py", line 488, in _register_or_install
2024-03-24 00:37:08 key = self.install_path(job.local_path, job.config_in)
2024-03-24 00:37:08 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-03-24 00:37:08 File "/opt/invokeai/invokeai/app/services/model_install/model_install_default.py", line 186, in install_path
2024-03-24 00:37:08 info: AnyModelConfig = ModelProbe.probe(Path(model_path), config, hash_algo=self._app_config.hashing_algorithm)
2024-03-24 00:37:08 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-03-24 00:37:08 File "/opt/invokeai/invokeai/backend/model_manager/probe.py", line 154, in probe
2024-03-24 00:37:08 fields["base"] = fields.get("base") or probe.get_base_type()
2024-03-24 00:37:08 ^^^^^^^^^^^^^^^^^^^^^
2024-03-24 00:37:08 File "/opt/invokeai/invokeai/backend/model_manager/probe.py", line 724, in get_base_type
2024-03-24 00:37:08 state_dict = torch.load(model_file, map_location="cpu")
2024-03-24 00:37:08 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-03-24 00:37:08 File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/serialization.py", line 1005, in load
2024-03-24 00:37:08 with _open_zipfile_reader(opened_file) as opened_zipfile:
2024-03-24 00:37:08 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-03-24 00:37:08 File "/opt/venv/invokeai/lib/python3.11/site-packages/torch/serialization.py", line 457, in __init__
2024-03-24 00:37:08 super().__init__(torch._C.PyTorchFileReader(name_or_buffer))
2024-03-24 00:37:08 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2024-03-24 00:37:08 RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory
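The same kind of note applies to the `.bin` case above: "failed finding central directory" means torch's zip reader can't find the end-of-archive record, i.e. the downloaded `ip_adapter.bin` is incomplete. Because modern torch checkpoints are plain zip archives, the stdlib `zipfile` module can check them. A sketch; `checkpoint_is_complete` is a hypothetical helper:

```python
import zipfile

def checkpoint_is_complete(path):
    """Return True if a torch .bin/.pt checkpoint looks like an intact
    zip archive. A truncated download loses the central directory stored
    at the end of the file, which is what PytorchStreamReader reports."""
    if not zipfile.is_zipfile(path):
        return False
    with zipfile.ZipFile(path) as zf:
        # testzip() re-reads every member and returns the first corrupt
        # entry, or None if the archive is intact.
        return zf.testzip() is None
```

As with the safetensors case, the remedy for a failing file is to delete it and re-download, not to change the loading code.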
I tried to install InvokeAI/ip_adapter_sd15 (env: v4.0.0-rc5, Win10, Intel, Docker Compose).
The logs show the install completed, but the UI always stays "in process"...
And I can't see the IP-Adapter in the Model Manager after filtering...
2024-03-24 00:43:33 [2024-03-23 16:43:33,787]::[InvokeAI]::INFO --> Started installation of InvokeAI/ip_adapter_sd15
2024-03-24 00:43:33 [2024-03-23 16:43:33,789]::[uvicorn.access]::INFO --> 172.18.0.1:52228 - "POST /api/v2/models/install?source=InvokeAI%2Fip_adapter_sd15&inplace=true HTTP/1.1" 201
2024-03-24 00:43:33 [2024-03-23 16:43:33,794]::[uvicorn.access]::INFO --> 172.18.0.1:52228 - "GET /api/v2/models/install HTTP/1.1" 200
2024-03-24 00:43:34 [2024-03-23 16:43:34,318]::[ModelInstallService]::INFO --> Model download started: https://huggingface.co/InvokeAI/ip_adapter_sd15/resolve/main/image_encoder.txt
2024-03-24 00:43:34 [2024-03-23 16:43:34,323]::[ModelInstallService]::INFO --> Model download complete: https://huggingface.co/InvokeAI/ip_adapter_sd15/resolve/main/image_encoder.txt
2024-03-24 00:43:35 [2024-03-23 16:43:35,350]::[ModelInstallService]::INFO --> Model download started: https://huggingface.co/InvokeAI/ip_adapter_sd15/resolve/main/ip_adapter.bin
2024-03-24 00:45:30 [2024-03-23 16:45:30,684]::[ModelInstallService]::INFO --> Model download complete: https://huggingface.co/InvokeAI/ip_adapter_sd15/resolve/main/ip_adapter.bin
2024-03-24 00:45:30 [2024-03-23 16:45:30,684]::[ModelInstallService]::INFO --> Model download complete: InvokeAI/ip_adapter_sd15
This should now be resolved after some fixes to the model manager. I also just tested, and I can install InvokeAI/ip_adapter_sd15 successfully.
Is there an existing issue for this?
OS
Windows
GPU
cuda
VRAM
No response
What version did you experience this issue on?
3.6.0rc5
What happened?
When trying to use the IP-Adapter, I get a "Server Error: SafetensorError". It works with SDXL, but not with SD1. I tried removing and reinstalling the IP-Adapter, but it still doesn't work.
Screenshots
Additional context
No response
Contact Details
No response