Invoke is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry-leading WebUI and serves as the foundation for multiple commercial products.
What happened
When I generate an image, regardless of the model I select, I get the same MetadataIncompleteBuffer error.
What you expected to happen
An image to generate
How to reproduce the problem
Install the schnell model. After the download finishes, back up the main schnell model file, since it took a long time to download. Then reinstall Invoke. Install the schnell model again, but cancel the main item and let the remaining items finish downloading. Then manually install the previously backed-up schnell file from the local copy, and try to generate an image.
Additional context
Stack Trace:
Error
Traceback (most recent call last):
  File "D:\AITools\InvokeAI\.venv\lib\site-packages\invokeai\app\services\session_processor\session_processor_default.py", line 129, in run_node
    output = invocation.invoke_internal(context=context, services=self._services)
  File "D:\AITools\InvokeAI\.venv\lib\site-packages\invokeai\app\invocations\baseinvocation.py", line 290, in invoke_internal
    output = self.invoke(context)
  File "D:\AITools\InvokeAI\.venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\AITools\InvokeAI\.venv\lib\site-packages\invokeai\app\invocations\flux_text_encoder.py", line 50, in invoke
    t5_embeddings = self._t5_encode(context)
  File "D:\AITools\InvokeAI\.venv\lib\site-packages\invokeai\app\invocations\flux_text_encoder.py", line 61, in _t5_encode
    t5_text_encoder_info = context.models.load(self.t5_encoder.text_encoder)
  File "D:\AITools\InvokeAI\.venv\lib\site-packages\invokeai\app\services\shared\invocation_context.py", line 370, in load
    return self._services.model_manager.load.load_model(model, _submodel_type)
  File "D:\AITools\InvokeAI\.venv\lib\site-packages\invokeai\app\services\model_load\model_load_default.py", line 70, in load_model
    ).load_model(model_config, submodel_type)
  File "D:\AITools\InvokeAI\.venv\lib\site-packages\invokeai\backend\model_manager\load\load_default.py", line 56, in load_model
    locker = self._load_and_cache(model_config, submodel_type)
  File "D:\AITools\InvokeAI\.venv\lib\site-packages\invokeai\backend\model_manager\load\load_default.py", line 77, in _load_and_cache
    loaded_model = self._load_model(config, submodel_type)
  File "D:\AITools\InvokeAI\.venv\lib\site-packages\invokeai\backend\model_manager\load\model_loaders\flux.py", line 122, in _load_model
    state_dict = load_file(state_dict_path)
  File "D:\AITools\InvokeAI\.venv\lib\site-packages\safetensors\torch.py", line 311, in load_file
    with safe_open(filename, framework="pt", device=device) as f:
safetensors_rust.SafetensorError: Error while deserializing header: MetadataIncompleteBuffer
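This error usually indicates a corrupt or truncated .safetensors file, which would fit the cancelled-and-restored download in the reproduction steps. A quick, library-free way to sanity-check the restored file is a sketch based on the published safetensors on-disk layout (an 8-byte little-endian length prefix followed by a JSON header); the path and function name below are illustrative, not part of Invoke:

```python
import json
import struct

def safetensors_header_ok(path: str) -> bool:
    """Return True if the file's safetensors header parses cleanly.

    A truncated download typically fails this check, which is the same
    condition the MetadataIncompleteBuffer error reports.
    """
    try:
        with open(path, "rb") as f:
            # First 8 bytes: header length as little-endian unsigned 64-bit int.
            (header_len,) = struct.unpack("<Q", f.read(8))
            header = f.read(header_len)
            if len(header) != header_len:
                return False  # file is shorter than the declared header
            json.loads(header)  # header must be valid JSON
        return True
    except (OSError, struct.error, ValueError):
        return False
```

Running this against the manually installed model file (e.g. `safetensors_header_ok(r"D:\path\to\schnell.safetensors")`) should distinguish a damaged file from a loader bug; a False result means the backup itself is incomplete and needs re-downloading.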
Is there an existing issue for this problem?
Operating system
Windows
GPU vendor
Nvidia (CUDA)
GPU model
RTX 3090
GPU VRAM
24 GB
Version number
5.0.2
Browser
Chrome
Python dependencies
No response
Extra data:
Discord username
No response