Checklist
[ ] The issue exists on a clean installation of Fooocus
[ ] The issue exists in the current version of Fooocus
[ ] The issue has not been reported before recently
[ ] The issue has been reported before but has not been fixed yet
What happened?
Hi all, I've been running Fooocus on Colab and trying out different LoRAs, but when I try to generate an image I get this error:
ValueError: Error while deserializing header: MetadataIncompleteBuffer
File corrupted: /content/Fooocus/models/loras/2FingersSDXL_v03.safetensors
Fooocus has tried to move the corrupted file to /content/Fooocus/models/loras/2FingersSDXL_v03.safetensors.corrupted
You may try again now and Fooocus will download models again.
I just added the LoRA file to the loras folder inside models, selected it in the dropdown menu on the Models tab, and tried to generate an image, but I got no result, only the error message above in the console.
Any help would be really appreciated, thank you
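For anyone hitting the same error: both HeaderTooLarge and MetadataIncompleteBuffer usually mean the .safetensors file on disk is truncated or not actually a safetensors file. The format stores an 8-byte little-endian header length followed by that many bytes of JSON, so a quick standalone check (a sketch, not Fooocus code) can tell you whether the file is damaged before you ever load it:

```python
import json
import struct
from pathlib import Path

def check_safetensors_header(path):
    """Return (ok, reason) after validating a safetensors file's header.

    Layout: first 8 bytes = unsigned 64-bit little-endian header length,
    then that many bytes of JSON. A truncated or mis-uploaded file fails
    one of these checks, matching the HeaderTooLarge /
    MetadataIncompleteBuffer errors safetensors raises.
    """
    data = Path(path).read_bytes()
    if len(data) < 8:
        return False, "file shorter than the 8-byte length prefix"
    (header_len,) = struct.unpack("<Q", data[:8])
    if 8 + header_len > len(data):
        return False, "declared header extends past end of file (incomplete download?)"
    try:
        json.loads(data[8 : 8 + header_len])
    except ValueError:
        return False, "header is not valid JSON"
    return True, "header parses"
```

If this reports a problem, re-downloading or re-uploading the LoRA is the fix; the file itself is broken, not Fooocus.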
Steps to reproduce the problem
1. Open the project in Colab
2. Upload a LoRA file into the loras folder inside models
3. Select the LoRA in the Models tab
4. Try to generate an image that triggers the LoRA
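A likely failure point is step 2: uploads to Colab are easy to interrupt, leaving a partial file. One quick sanity check after uploading is to compare the on-disk size against what the original download server reports. A minimal sketch, assuming the server answers HEAD requests with a Content-Length (the URL is whatever page you fetched the LoRA from, hypothetical here):

```python
import urllib.request
from pathlib import Path

def sizes_match(local_path, source_url):
    """Compare a downloaded model's on-disk size with the size the server
    reports, to detect an interrupted transfer.

    source_url is the original download location (an assumption here);
    returns None if the server omits Content-Length, so the check is
    inconclusive rather than failed.
    """
    local_size = Path(local_path).stat().st_size
    req = urllib.request.Request(source_url, method="HEAD")
    with urllib.request.urlopen(req) as resp:
        remote = resp.headers.get("Content-Length")
    if remote is None:
        return None  # server didn't say; inconclusive
    return local_size == int(remote)
```

A mismatch means the upload/download was cut short and the file should be transferred again.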
What should have happened?
An image should have been generated
What browsers do you use to access Fooocus?
Google Chrome
Where are you running Fooocus?
Cloud (Google Colab)
What operating system are you using?
No response
Console logs
Requirement already satisfied: pygit2==1.15.1 in /usr/local/lib/python3.10/dist-packages (1.15.1)
Requirement already satisfied: cffi>=1.16.0 in /usr/local/lib/python3.10/dist-packages (from pygit2==1.15.1) (1.16.0)
Requirement already satisfied: pycparser in /usr/local/lib/python3.10/dist-packages (from cffi>=1.16.0->pygit2==1.15.1) (2.22)
/content
fatal: destination path 'Fooocus' already exists and is not an empty directory.
/content/Fooocus
Already up-to-date
Update succeeded.
[System ARGV] ['entry_with_update.py', '--share', '--always-high-vram', '--preset', 'realistic']
Python 3.10.12 (main, Mar 22 2024, 16:50:05) [GCC 11.4.0]
Fooocus version: 2.5.2
Loaded preset: /content/Fooocus/presets/realistic.json
[Cleanup] Attempting to delete content of temp dir /tmp/fooocus
[Cleanup] Cleanup successful
Total VRAM 15102 MB, total RAM 12979 MB
Set vram state to: HIGH_VRAM
Always offload VRAM
Device: cuda:0 Tesla T4 : native
VAE dtype: torch.float32
Using pytorch cross attention
Refiner unloaded.
IMPORTANT: You are using gradio version 3.41.2, however version 4.29.0 is available, please upgrade.
--------
Running on local URL: http://127.0.0.1:7865
Running on public URL: https://ae1dcf11f7e10ab2fc.gradio.live
This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run `gradio deploy` from Terminal to deploy to Spaces (https://huggingface.co/spaces)
model_type EPS
UNet ADM Dimension 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
left over keys: dict_keys(['cond_stage_model.clip_l.transformer.text_model.embeddings.position_ids'])
loaded straight to GPU
Requested to load SDXL
Loading 1 new model
Base model loaded: /content/Fooocus/models/checkpoints/realisticStockPhoto_v20.safetensors
VAE loaded: None
Request to load LoRAs [('SDXL_FILM_PHOTOGRAPHY_STYLE_V1.safetensors', 0.25)] for model [/content/Fooocus/models/checkpoints/realisticStockPhoto_v20.safetensors].
Loaded LoRA [/content/Fooocus/models/loras/SDXL_FILM_PHOTOGRAPHY_STYLE_V1.safetensors] for UNet [/content/Fooocus/models/checkpoints/realisticStockPhoto_v20.safetensors] with 722 keys at weight 0.25.
Loaded LoRA [/content/Fooocus/models/loras/SDXL_FILM_PHOTOGRAPHY_STYLE_V1.safetensors] for CLIP [/content/Fooocus/models/checkpoints/realisticStockPhoto_v20.safetensors] with 264 keys at weight 0.25.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cuda:0, use_fp16 = True.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 0.96 seconds
2024-07-29 12:59:45.390974: E external/local_xla/xla/stream_executor/cuda/cuda_dnn.cc:9261] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
2024-07-29 12:59:45.391026: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:607] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
2024-07-29 12:59:45.396959: E external/local_xla/xla/stream_executor/cuda/cuda_blas.cc:1515] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-07-29 12:59:45.414854: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2024-07-29 12:59:47.414965: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Started worker with PID 9208
App started successful. Use the app with http://127.0.0.1:7865/ or 127.0.0.1:7865 or https://ae1dcf11f7e10ab2fc.gradio.live
[Parameters] Adaptive CFG = 7
[Parameters] CLIP Skip = 2
[Parameters] Sharpness = 2
[Parameters] ControlNet Softness = 0.25
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] Seed = 6008273554411586700
[Parameters] CFG = 3
[Fooocus] Loading control models ...
[Parameters] Sampler = dpmpp_2m_sde_gpu - karras
[Parameters] Steps = 30 - 15
[Fooocus] Initializing ...
[Fooocus] Loading models ...
Refiner unloaded.
Request to load LoRAs [('SDXL_FILM_PHOTOGRAPHY_STYLE_V1.safetensors', 0.25), ('2FingersSDXL_v03.safetensors', 1.0)] for model [/content/Fooocus/models/checkpoints/realisticStockPhoto_v20.safetensors].
Loaded LoRA [/content/Fooocus/models/loras/SDXL_FILM_PHOTOGRAPHY_STYLE_V1.safetensors] for UNet [/content/Fooocus/models/checkpoints/realisticStockPhoto_v20.safetensors] with 722 keys at weight 0.25.
Loaded LoRA [/content/Fooocus/models/loras/SDXL_FILM_PHOTOGRAPHY_STYLE_V1.safetensors] for CLIP [/content/Fooocus/models/checkpoints/realisticStockPhoto_v20.safetensors] with 264 keys at weight 0.25.
Traceback (most recent call last):
File "/content/Fooocus/modules/patch.py", line 465, in loader
result = original_loader(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/safetensors/torch.py", line 311, in load_file
with safe_open(filename, framework="pt", device=device) as f:
safetensors_rust.SafetensorError: Error while deserializing header: HeaderTooLarge
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/content/Fooocus/modules/async_worker.py", line 1469, in worker
handler(task)
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/content/Fooocus/modules/async_worker.py", line 1160, in handler
tasks, use_expansion, loras, current_progress = process_prompt(async_task, async_task.prompt, async_task.negative_prompt,
File "/content/Fooocus/modules/async_worker.py", line 661, in process_prompt
pipeline.refresh_everything(refiner_model_name=async_task.refiner_model_name,
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/content/Fooocus/modules/default_pipeline.py", line 252, in refresh_everything
refresh_loras(loras, base_model_additional_loras=base_model_additional_loras)
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/content/Fooocus/modules/default_pipeline.py", line 139, in refresh_loras
model_base.refresh_loras(loras + base_model_additional_loras)
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/content/Fooocus/modules/core.py", line 96, in refresh_loras
lora_unmatch = ldm_patched.modules.utils.load_torch_file(lora_filename, safe_load=False)
File "/content/Fooocus/ldm_patched/modules/utils.py", line 13, in load_torch_file
sd = safetensors.torch.load_file(ckpt, device=device.type)
File "/content/Fooocus/modules/patch.py", line 481, in loader
raise ValueError(exp)
ValueError: Error while deserializing header: HeaderTooLarge
File corrupted: /content/Fooocus/models/loras/2FingersSDXL_v03.safetensors
Fooocus has tried to move the corrupted file to /content/Fooocus/models/loras/2FingersSDXL_v03.safetensors.corrupted
You may try again now and Fooocus will download models again.
Total time: 9.07 seconds
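The traceback shows what Fooocus's modules/patch.py wrapper does with the failing file: it catches the deserialization error, re-raises it as a ValueError, and renames the file to <name>.corrupted so the next run re-downloads instead of crashing the same way. A rough sketch of that wrap-and-quarantine pattern (names here are illustrative, not Fooocus's actual internals):

```python
import functools
import os

def quarantine_on_corruption(loader):
    """Wrap a model-file loader so a file that fails to deserialize is
    renamed to <name>.corrupted instead of repeatedly crashing generation.

    Illustrative sketch of the behavior visible in the traceback; the
    loader argument stands in for safetensors.torch.load_file.
    """
    @functools.wraps(loader)
    def wrapped(filename, *args, **kwargs):
        try:
            return loader(filename, *args, **kwargs)
        except Exception as exc:
            # Quarantine the broken file so a retry re-fetches it
            os.replace(filename, filename + ".corrupted")
            raise ValueError(f"File corrupted: {filename}") from exc
    return wrapped
```

That is why the original file vanished and a .corrupted copy appeared in the loras folder: the loader quarantined it on the first failed generation.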
Additional information
No response