MoonRide303 / Fooocus-MRE

Focus on prompting and generating
GNU General Public License v3.0

Error while deserializing header: MetadataIncompleteBuffer #73

Open kkget opened 1 year ago

kkget commented 1 year ago

(image attachment)

MoonRide303 commented 1 year ago

It seems you have a damaged / incomplete base SDXL model file. Please verify the SHA256 hash, like this (command line / PowerShell in the models/checkpoints folder):

CertUtil -hashfile sd_xl_base_1.0_0.9vae.safetensors SHA256
SHA256 hash of sd_xl_base_1.0_0.9vae.safetensors:
e6bb9ea85bbf7bf6478a7c6d18b71246f22e95d41bcdd80ed40aa212c33cfeff
CertUtil: -hashfile command completed successfully.

If you see a different hash, you need to download the file again.
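If CertUtil isn't available, the same check can be done with a short Python sketch (the file name below is only an example; point it at your own checkpoint in models/checkpoints):

```python
# Sketch: compute the SHA256 of a checkpoint the same way CertUtil does,
# so the result can be compared against the published hash.
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream the file in chunks so a multi-GB checkpoint doesn't fill RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Example path (hypothetical):
# print(sha256_of_file("sd_xl_base_1.0_0.9vae.safetensors"))
```

A mismatch against the hash above means the file on disk is not the file that was published, i.e. the download is corrupt or incomplete.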

kkget commented 1 year ago

ok

AKDigitalAgency commented 1 year ago

"MetadataIncompleteBuffer" - I got this message too. I followed the advice above and deleted both the base and refiner models, then downloaded them again from Civit. Unfortunately, I'm still getting the same error 😤😭 I even tried reinstalling Fooocus-MRE, but that didn't work either 🤯 Is there anything else that causes this error that I could try to fix?

MoonRide303 commented 1 year ago

@AKDigitalAgency It might be a problem with a different model file - can you provide the full error message?

AKDigitalAgency commented 1 year ago

Hi, I have an old laptop with an Intel HD 520 integrated GPU, so Fooocus-MRE runs on CPU only :( I'm running it inside the Stability Matrix GUI. It starts up and opens the web UI, but when I start a generation I get the following error:

Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Fooocus version: 2.0.78.5 MRE
Inference Engine exists.
Inference Engine checkout finished.
Total VRAM 16244 MB, total RAM 16244 MB
WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
    PyTorch 2.0.1+cu118 with CUDA 1108 (you have 2.0.1+cpu)
    Python 3.10.11 (you have 3.10.11)
  Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
  Memory-efficient attention, SwiGLU, sparse and more won't be available.
  Set XFORMERS_MORE_DETAILS=1 for more details
xformers version: 0.0.21
Forcing FP32, if this improves things please report it.
Set vram state to: DISABLED
Disabling smart memory management
Device: cpu
VAE dtype: torch.float32
Using split optimization for cross attention
Running on local URL: http://
To create a public link, set share=True in launch().
Fooocus Text Processing Pipelines are retargeted to cpu
[Virtual Memory System] Forced = False
[Virtual Memory System] Logic target is CPU, memory = 16243.71875
[Virtual Memory System] Activated = False
model_type EPS
adm 2560
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
missing {'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
Refiner model loaded: D:\AIs\Stability Matrix\Data\Models\StableDiffusion\sd_xl_refiner_1.0_0.9vae.safetensors
Exception in thread Thread-2 (worker):
Traceback (most recent call last):
  File "threading.py", line 1016, in _bootstrap_inner
  File "threading.py", line 953, in run
  File "D:\AIs\Stability Matrix\Data\Packages\Fooocus-MRE\modules\async_worker.py", line 19, in worker
    import modules.default_pipeline as pipeline
  File "D:\AIs\Stability Matrix\Data\Packages\Fooocus-MRE\modules\default_pipeline.py", line 310, in <module>
    refresh_everything(
  File "D:\AIs\Stability Matrix\Data\Packages\Fooocus-MRE\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\AIs\Stability Matrix\Data\Packages\Fooocus-MRE\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\AIs\Stability Matrix\Data\Packages\Fooocus-MRE\modules\default_pipeline.py", line 302, in refresh_everything
    refresh_base_model(base_model_name)
  File "D:\AIs\Stability Matrix\Data\Packages\Fooocus-MRE\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\AIs\Stability Matrix\Data\Packages\Fooocus-MRE\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\AIs\Stability Matrix\Data\Packages\Fooocus-MRE\modules\default_pipeline.py", line 52, in refresh_base_model
    xl_base = core.load_model(filename)
  File "D:\AIs\Stability Matrix\Data\Packages\Fooocus-MRE\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\AIs\Stability Matrix\Data\Packages\Fooocus-MRE\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\AIs\Stability Matrix\Data\Packages\Fooocus-MRE\modules\core.py", line 63, in load_model
    unet, clip, vae, clip_vision = load_checkpoint_guess_config(ckpt_filename, embedding_directory=embeddings_path)
  File "D:\AIs\Stability Matrix\Data\Packages\Fooocus-MRE\repositories\ComfyUI-from-StabilityAI-Official\comfy\sd.py", line 396, in load_checkpoint_guess_config
    sd = comfy.utils.load_torch_file(ckpt_path)
  File "D:\AIs\Stability Matrix\Data\Packages\Fooocus-MRE\repositories\ComfyUI-from-StabilityAI-Official\comfy\utils.py", line 13, in load_torch_file
    sd = safetensors.torch.load_file(ckpt, device=device.type)
  File "D:\AIs\Stability Matrix\Data\Packages\Fooocus-MRE\venv\lib\site-packages\safetensors\torch.py", line 259, in load_file
    with safe_open(filename, framework="pt", device=device) as f:
safetensors_rust.SafetensorError: Error while deserializing header: MetadataIncompleteBuffer

How do I fix it? Also, I tried using different SDXL & 1.5 models, with the same error :( BTW, I can run ComfyUI with SDXL - slow (25-30 min/image), but working!
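For anyone debugging this independently of which hash to expect: a MetadataIncompleteBuffer error from safetensors generally points at a truncated file. A sketch of a structural sanity check, relying only on the published .safetensors layout (an 8-byte little-endian length of the JSON header, then the header, then tensor data; `check_safetensors` below is a hypothetical helper, not part of Fooocus-MRE):

```python
# Sketch: detect a truncated .safetensors file without loading any tensors.
# Layout assumed: [8-byte LE header length][JSON header][tensor data].
import json
import os
import struct

def check_safetensors(path: str) -> bool:
    size = os.path.getsize(path)
    with open(path, "rb") as f:
        prefix = f.read(8)
        if len(prefix) < 8:
            return False  # too small to even hold the length field
        (header_len,) = struct.unpack("<Q", prefix)
        if 8 + header_len > size:
            return False  # header claims more bytes than the file has
        try:
            header = json.loads(f.read(header_len))
        except (ValueError, UnicodeDecodeError):
            return False  # header bytes present but not valid JSON
    # every tensor's byte range must fit inside the data section
    data_size = size - 8 - header_len
    for name, info in header.items():
        if name == "__metadata__":
            continue
        begin, end = info["data_offsets"]
        if end > data_size:
            return False  # tensor data truncated
    return True
```

If this returns False for a checkpoint, re-downloading (and then re-verifying the SHA256 as suggested earlier in the thread) is the fix; the library cannot repair a file whose tail is missing.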
AKDigitalAgency commented 1 year ago

Have you figured out how to fix this error and make it work?