Open kkget opened 1 year ago
It seems you have a damaged / incomplete base SDXL model file. Please verify the SHA256 hash, like this (command line / PowerShell in the models/checkpoints folder):

```
CertUtil -hashfile sd_xl_base_1.0_0.9vae.safetensors SHA256
SHA256 hash of sd_xl_base_1.0_0.9vae.safetensors:
e6bb9ea85bbf7bf6478a7c6d18b71246f22e95d41bcdd80ed40aa212c33cfeff
CertUtil: -hashfile command completed successfully.
```
If you see a different hash, you need to download the file again.
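If you're not on Windows (or `CertUtil` isn't available), the same check can be done with a short Python script. This is just a generic sketch using the standard library, not part of Fooocus; the function name `sha256_of_file` is made up here:

```python
import hashlib

# Expected hash of sd_xl_base_1.0_0.9vae.safetensors (from the comment above)
EXPECTED = "e6bb9ea85bbf7bf6478a7c6d18b71246f22e95d41bcdd80ed40aa212c33cfeff"

def sha256_of_file(path, chunk_size=1 << 20):
    """Hash the file in 1 MiB chunks so a multi-GB checkpoint
    never has to fit in memory at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage (run from the models/checkpoints folder):
#   sha256_of_file("sd_xl_base_1.0_0.9vae.safetensors") == EXPECTED
```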
ok
"MetadataIncompleteBuffer" — I got this message too. I followed the advice above and deleted both the base and refiner models, then downloaded them again from Civit. Unfortunately, I'm still getting the same error 😤😠 I even tried reinstalling Fooocus-MRE, but that didn't work either 🤯 Is there anything else that causes this error that I could try to fix?
@AKDigitalAgency It might be a problem with a different model file. Can you provide the full error message?
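For what it's worth, "MetadataIncompleteBuffer" comes from the safetensors parser and usually means the file on disk is shorter than its header says it should be. As a rough diagnostic you can check for truncation without loading the model. This is only a sketch based on the published safetensors layout (an 8-byte little-endian header length, a JSON header, then the tensor data); the helper name `check_safetensors` is made up here:

```python
import json
import os
import struct

def check_safetensors(path):
    """Rough truncation check for a .safetensors file: the first 8 bytes
    are a little-endian u64 giving the JSON header length; the header maps
    tensor names to byte ranges ("data_offsets") within the data section."""
    size = os.path.getsize(path)
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        if 8 + header_len > size:
            return False, "header runs past end of file (truncated download?)"
        header = json.loads(f.read(header_len))
    # Highest byte offset any tensor claims to occupy in the data section.
    data_end = max((v["data_offsets"][1] for k, v in header.items()
                    if k != "__metadata__"), default=0)
    if 8 + header_len + data_end > size:
        return False, "tensor data is truncated"
    return True, "sizes are consistent"
```

If this reports truncation on a freshly downloaded file, the download itself is being cut short (proxy, disk space, unstable connection) rather than the source file being bad.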
Hi, I have an old laptop with an Intel HD 520 integrated GPU, so Fooocus-MRE runs on CPU only :( I'm running it inside the Stability Matrix GUI. It starts up and opens the web UI, but when I start a generation I get the following error:
```
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Fooocus version: 2.0.78.5 MRE
Inference Engine exists.
Inference Engine checkout finished.
Total VRAM 16244 MB, total RAM 16244 MB
WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions.
    xFormers was built for:
    PyTorch 2.0.1+cu118 with CUDA 1108 (you have 2.0.1+cpu)
    Python 3.10.11 (you have 3.10.11)
  Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
  Memory-efficient attention, SwiGLU, sparse and more won't be available.
  Set XFORMERS_MORE_DETAILS=1 for more details
xformers version: 0.0.21
Forcing FP32, if this improves things please report it.
Set vram state to: DISABLED
Disabling smart memory management
Device: cpu
VAE dtype: torch.float32
Using split optimization for cross attention
Running on local URL: http://
To create a public link, set share=True in launch().
Fooocus Text Processing Pipelines are retargeted to cpu
[Virtual Memory System] Forced = False
[Virtual Memory System] Logic target is CPU, memory = 16243.71875
[Virtual Memory System] Activated = False
model_type EPS
adm 2560
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
missing {'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
Refiner model loaded: D:\AIs\Stability Matrix\Data\Models\StableDiffusion\sd_xl_refiner_1.0_0.9vae.safetensors
Exception in thread Thread-2 (worker):
Traceback (most recent call last):
  File "threading.py", line 1016, in _bootstrap_inner
  File "threading.py", line 953, in run
  File "D:\AIs\Stability Matrix\Data\Packages\Fooocus-MRE\modules\async_worker.py", line 19, in worker
    import modules.default_pipeline as pipeline
  File "D:\AIs\Stability Matrix\Data\Packages\Fooocus-MRE\modules\default_pipeline.py", line 310, in
```
Have you figured out how to fix this error and make it work?