lllyasviel / Fooocus

Focus on prompting and generating
GNU General Public License v3.0

[Bug]: custom preset fails #3405

Closed eddyizm closed 2 months ago

eddyizm commented 2 months ago

Checklist

What happened?

My preset fails, while the default and pony presets work fine. I suspect there is an issue with my preset; this started in v2.5.1.

Steps to reproduce the problem

Here is my custom turbo preset. Fooocus starts up fine and sees the checkpoint in the drop-down. I verified it exists in the current location where it is being referenced.

($HOME is not what is actually output; I am just masking my paths/directories for GitHub.)
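
For context, a minimal sketch of how the referenced checkpoint can be resolved and verified from the preset. The `turbo.json` file name and paths are illustrative, and `default_model` is the key the bundled presets use for the base checkpoint; adjust to your own layout:

```python
import json
import os

# Illustrative paths -- adjust to the local install. "turbo.json" stands in
# for the custom preset under discussion.
FOOOCUS_ROOT = os.path.expanduser("~/Fooocus")
PRESET_PATH = os.path.join(FOOOCUS_ROOT, "presets", "turbo.json")
CHECKPOINT_DIR = os.path.join(FOOOCUS_ROOT, "models", "checkpoints")

with open(PRESET_PATH) as f:
    preset = json.load(f)

# The base checkpoint is referenced by file name in "default_model".
model_name = preset.get("default_model", "")
model_path = os.path.join(CHECKPOINT_DIR, model_name)

print(f"preset references: {model_name!r}")
print(f"resolved path:     {model_path}")
print(f"exists on disk:    {os.path.isfile(model_path)}")
```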

What should have happened?

Comparing the default.json preset to mine, the only glaring difference I see is the previously used models.

It should see the model/file and continue on its merry way.
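
For what it's worth, a minimal sketch of diffing the two preset files key by key (the file names here are placeholders; any JSON diff works equally well):

```python
import json

# Hypothetical file names; substitute the real preset paths.
with open("presets/default.json") as f:
    default_preset = json.load(f)
with open("presets/turbo.json") as f:
    custom_preset = json.load(f)

# Print every key whose value differs between the two presets,
# including keys present in only one of them.
for key in sorted(set(default_preset) | set(custom_preset)):
    a, b = default_preset.get(key), custom_preset.get(key)
    if a != b:
        print(f"{key}:\n  default: {a}\n  custom:  {b}")
```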

What browsers do you use to access Fooocus?

Mozilla Firefox

Where are you running Fooocus?

Locally

What operating system are you using?

macOS Sonoma 14.5

Console logs

Startup args:
`[System ARGV] ['$HOME/Fooocus/entry_with_update.py', '--theme=dark', '--always-cpu', '--disable-offload-from-vram', '--listen', '--unet-in-fp8-e5m2', '--preset', 'turbo']`

```
--------
Using split attention in VAE                                                                             
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.                                                
Using split attention in VAE                                                                                                                                                                                      
extra {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale'}                 
left over keys: dict_keys(['conditioner.embedders.0.logit_scale', 'conditioner.embedders.0.text_projection'])                             
Base model loaded: $HOME/Fooocus/models/checkpoints/DreamShaperXL_Turbo_v2_1.safetensors
VAE loaded: None                                                                                                                                                                                                  
Request to load LoRAs [] for model [$HOME/Fooocus/models/checkpoints/DreamShaperXL_Turbo_v2_1.safetensors].
Fooocus V2 Expansion: Vocab with 642 words.                                                                                                                                                                       
Fooocus Expansion engine loaded for cpu, use_fp16 = False.                                               
Requested to load SDXLClipModel                                                                                                                                                                                   
Requested to load GPT2LMHeadModel
Loading 2 new models                                                                                                                                                                                              
Started worker with PID 94731                                                                            
App started successful. Use the app with http://localhost:7865/ or 0.0.0.0:7865                                                                                                                                   
[Parameters] Adaptive CFG = 7   
[Parameters] CLIP Skip = 2                                                                                                                                                                                        
[Parameters] Sharpness = 3      
[Parameters] ControlNet Softness = 0.25                                                                  
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3                                                                                                                                                                          
[Parameters] Seed = 1093821411714290983                                                                                                                                                                           
[Parameters] CFG = 2                                                                                     
[Fooocus] Downloading control models ...                                                                                                                                                                          
[Fooocus] Loading control models ...                                                                     
[Parameters] Sampler = dpmpp_sde - karras                                                                
[Parameters] Steps = 8 - 15                         
[Fooocus] Initializing ...                                                                                                                                                                                        
[Fooocus] Loading models ...
Refiner unloaded.

Traceback (most recent call last):
  File "$HOME/Fooocus/modules/patch.py", line 465, in loader                    
    result = original_loader(*args, **kwargs)
  File "$HOME/Fooocus/env/lib/python3.10/site-packages/torch/serialization.py", line 998, in load
    with _open_file_like(f, 'rb') as opened_file:
  File "$HOME/Fooocus/env/lib/python3.10/site-packages/torch/serialization.py", line 445, in _open_file_like
    return _open_file(name_or_buffer, mode)
  File "$HOME/Fooocus/env/lib/python3.10/site-packages/torch/serialization.py", line 426, in __init__
    super().__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: '$HOME/Fooocus/models/checkpoints/DreamShaperXL_Turbo_v2'

During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "$HOME/Fooocus/modules/async_worker.py", line 1462, in worker
    handler(task)
  File "$HOME/Fooocus/env/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "$HOME/Fooocus/env/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "$HOME/Fooocus/modules/async_worker.py", line 1153, in handler
    tasks, use_expansion, loras, current_progress = process_prompt(async_task, async_task.prompt, async_task.negative_prompt,
  File "$HOME/Fooocus/modules/async_worker.py", line 661, in process_prompt
    pipeline.refresh_everything(refiner_model_name=async_task.refiner_model_name,
  File "$HOME/Fooocus/env/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "$HOME/Fooocus/env/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "$HOME/Fooocus/modules/default_pipeline.py", line 250, in refresh_everything
    refresh_base_model(base_model_name, vae_name)
  File "$HOME/Fooocus/env/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "$HOME/Fooocus/env/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "$HOME/Fooocus/modules/default_pipeline.py", line 74, in refresh_base_model
    model_base = core.load_model(filename, vae_filename)
  File "$HOME/Fooocus/env/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "$HOME/Fooocus/env/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "$HOME/Fooocus/modules/core.py", line 147, in load_model
    unet, clip, vae, vae_filename, clip_vision = load_checkpoint_guess_config(ckpt_filename, embedding_directory=path_embeddings,
  File "$HOME/Fooocus/ldm_patched/modules/sd.py", line 431, in load_checkpoint_guess_config
    sd = ldm_patched.modules.utils.load_torch_file(ckpt_path)
  File "$HOME/Fooocus/ldm_patched/modules/utils.py", line 22, in load_torch_file
    pl_sd = torch.load(ckpt, map_location=device, pickle_module=ldm_patched.modules.checkpoint_pickle)
  File "$HOME/Fooocus/modules/patch.py", line 481, in loader
    raise ValueError(exp)
ValueError: [Errno 2] No such file or directory: '$HOME/Fooocus/models/checkpoints/DreamShaperXL_Turbo_v2'

Total time: 2.53 seconds
```


Additional information

_No response_
mashb1t commented 2 months ago

Strange, it works for me without any issues on both Windows 11 and macOS (case-sensitive volume). Are you using APFS or NTFS / encrypted or unencrypted / case-sensitive or case-insensitive?
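
If it helps narrow that down, a minimal probe along these lines (plain Python, nothing Fooocus-specific) reports whether the volume you run it from is case-sensitive; run it from inside your Fooocus models directory to test that volume:

```python
import tempfile
from pathlib import Path

# Probe whether the current volume treats file names case-insensitively:
# create a lowercase file, then look it up with an uppercase name.
with tempfile.TemporaryDirectory(dir=".") as tmp:
    (Path(tmp) / "casetest").write_text("probe")
    insensitive = (Path(tmp) / "CASETEST").exists()
    print("case-insensitive volume" if insensitive else "case-sensitive volume")
```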

console log (2.5.2, upstream)

```
C:\AI\Fooocus\python_embeded\python.exe C:\AI\Fooocus\Fooocus\entry_with_update.py
Update failed.
'refs/remotes/origin/main_upstream'
Update succeeded.
[System ARGV] ['C:\\AI\\Fooocus\\Fooocus\\entry_with_update.py', '--disable-analytics', '--listen', '--always-download-new-model', '--language', 'en']
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 2.5.2
[Cleanup] Attempting to delete content of temp dir C:\Users\manue\AppData\Local\Temp\fooocus
[Cleanup] Cleanup successful
Total VRAM 10240 MB, total RAM 32693 MB
xformers version: 0.0.23
Set vram state to: NORMAL_VRAM
Always offload VRAM
Device: cuda:0 NVIDIA GeForce RTX 3080 : native
VAE dtype: torch.bfloat16
Using xformers cross attention
Refiner unloaded.
Running on local URL: http://0.0.0.0:7865
model_type EPS
UNet ADM Dimension 2816
To create a public link, set `share=True` in `launch()`.
Using xformers attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using xformers attention in VAE
extra {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale'}
left over keys: dict_keys(['cond_stage_model.clip_l.transformer.text_model.embeddings.position_ids'])
Base model loaded: C:\AI\Fooocus\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors
VAE loaded: None
Request to load LoRAs [('sd_xl_offset_example-lora_1.0.safetensors', 0.1)] for model [C:\AI\Fooocus\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors].
Loaded LoRA [C:\AI\Fooocus\Fooocus\models\loras\sd_xl_offset_example-lora_1.0.safetensors] for UNet [C:\AI\Fooocus\Fooocus\models\checkpoints\juggernautXL_v8Rundiffusion.safetensors] with 788 keys at weight 0.1.
Fooocus V2 Expansion: Vocab with 642 words.
C:\AI\Fooocus\python_embeded\lib\site-packages\torch\_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
  return self.fget.__get__(instance, owner)()
Fooocus Expansion engine loaded for cuda:0, use_fp16 = True.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
[Fooocus Model Management] Moving model(s) has taken 0.67 seconds
Started worker with PID 13040
App started successful. Use the app with http://localhost:7865/ or 0.0.0.0:7865
Loaded preset: C:\AI\Fooocus\Fooocus\presets\turbo_custom.json
[Parameters] Adaptive CFG = 7
[Parameters] CLIP Skip = 2
[Parameters] Sharpness = 3
[Parameters] ControlNet Softness = 0.25
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] Seed = 5965757955318332010
[Parameters] CFG = 2
[Fooocus] Loading control models ...
[Parameters] Sampler = dpmpp_sde - karras
[Parameters] Steps = 8 - 15
[Fooocus] Initializing ...
[Fooocus] Loading models ...
Refiner unloaded.
model_type EPS
UNet ADM Dimension 2816
Using xformers attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using xformers attention in VAE
extra {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale'}
left over keys: dict_keys(['conditioner.embedders.0.logit_scale', 'conditioner.embedders.0.text_projection'])
Base model loaded: C:\AI\Fooocus\Fooocus\models\checkpoints\DreamShaperXL_Turbo_v2_1.safetensors
VAE loaded: None
Request to load LoRAs [] for model [C:\AI\Fooocus\Fooocus\models\checkpoints\DreamShaperXL_Turbo_v2_1.safetensors].
Requested to load SDXLClipModel
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 0.73 seconds
[Fooocus] Processing prompts ...
[Fooocus] Encoding positive #1 ...
[Fooocus] Encoding negative #1 ...
[Parameters] Denoising Strength = 1.0
[Parameters] Initial Latent shape: Image Space (1024, 1024)
Preparation time: 8.64 seconds
Using karras scheduler.
[Fooocus] Preparing task 1/1 ...
[Sampler] refiner_swap_method = joint
[Sampler] sigma_min = 0.0291671771556139, sigma_max = 14.614643096923828
Requested to load SDXL
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 1.19 seconds
100%|██████████| 8/8 [00:06<00:00, 1.29it/s]
Requested to load AutoencoderKL
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 1.64 seconds
[Fooocus] Saving image 1/1 to system ...
Image generated with private log at: C:\AI\Fooocus\Fooocus\outputs\2024-07-29\log.html
Generating and saving time: 10.25 seconds
[Enhance] Skipping, preconditions aren't met
Processing time (total): 10.25 seconds
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
Total time: 18.93 seconds
[Fooocus Model Management] Moving model(s) has taken 0.40 seconds
```

console log (2.6.2, my fork)

```
[Fooocus Model Management] Moving model(s) has taken 0.67 seconds
Started worker with PID 6888
App started successful. Use the app with http://localhost:7865/ or 0.0.0.0:7865
Loaded preset: C:\AI\Fooocus\Fooocus\presets\turbo_custom.json
Downloading: "https://huggingface.co/Lykon/dreamshaper-xl-v2-turbo/resolve/main/DreamShaperXL_Turbo_v2_1.safetensors" to C:\AI\Fooocus\Fooocus\models\checkpoints\DreamShaperXL_Turbo_v2_1.safetensors
100%|██████████| 6.46G/6.46G [01:16<00:00, 91.3MB/s]
[Parameters] Adaptive CFG = 7
[Parameters] CLIP Skip = 2
[Parameters] Sharpness = 3
[Parameters] ControlNet Softness = 0.25
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] Seed = 2207798739955052681
[Parameters] CFG = 2
[Fooocus] Loading control models ...
[Parameters] Sampler = dpmpp_sde - karras
[Parameters] Steps = 8 - 15
[Fooocus] Initializing ...
[Fooocus] Loading models ...
Refiner unloaded.
model_type EPS
UNet ADM Dimension 2816
Using xformers attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using xformers attention in VAE
extra {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_l.text_projection'}
left over keys: dict_keys(['conditioner.embedders.0.logit_scale', 'conditioner.embedders.0.text_projection'])
Base model loaded: C:\AI\Fooocus\Fooocus\models\checkpoints\DreamShaperXL_Turbo_v2_1.safetensors
VAE loaded: None
Request to load LoRAs [] for model [C:\AI\Fooocus\Fooocus\models\checkpoints\DreamShaperXL_Turbo_v2_1.safetensors].
Requested to load SDXLClipModel
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 1.75 seconds
[Fooocus] Processing prompts ...
[Fooocus] Encoding positive #1 ...
[Fooocus] Encoding negative #1 ...
[Parameters] Denoising Strength = 1.0
[Parameters] Initial Latent shape: Image Space (1024, 1024)
Preparation time: 11.29 seconds
Using karras scheduler.
[Fooocus] Preparing task 1/1 ...
[Sampler] refiner_swap_method = joint
[Sampler] sigma_min = 0.0291671771556139, sigma_max = 14.614643096923828
Requested to load SDXL
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 2.25 seconds
100%|██████████| 8/8 [00:05<00:00, 1.36it/s]
Requested to load AutoencoderKL
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 1.63 seconds
[Fooocus] Saving image 1/1 to system ...
[Cache] Calculating sha256 for C:\AI\Fooocus\Fooocus\models\checkpoints\DreamShaperXL_Turbo_v2_1.safetensors
[Cache] sha256 for C:\AI\Fooocus\Fooocus\models\checkpoints\DreamShaperXL_Turbo_v2_1.safetensors: 4496b36d48
Image generated with private log at: C:\AI\Fooocus\Fooocus\outputs\2024-07-29\log.html
Generating and saving time: 16.15 seconds
[Enhance] Skipping, preconditions aren't met
Processing time (total): 16.15 seconds
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
Total time: 27.46 seconds
[Fooocus Model Management] Moving model(s) has taken 0.59 seconds
```

It also works on Colab, by the way, so I assume it's an issue with your filesystem or the naming of the model file. Have you tried deleting the model and redownloading it, or renaming it 1:1 to match your preset?
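
For that renaming check, a small sketch that compares the name stored in the preset byte-for-byte against what is actually on disk. The paths are illustrative and `default_model` is assumed to be the key your preset uses for the base checkpoint, as in the bundled presets:

```python
import json
from pathlib import Path

# Illustrative locations; adjust to the local install and preset name.
checkpoint_dir = Path("models/checkpoints")
preset = json.loads(Path("presets/turbo.json").read_text())
wanted = preset["default_model"]

# Compare byte-for-byte against the directory listing; this surfaces case
# or suffix mismatches that a case-insensitive filesystem would otherwise hide.
on_disk = [p.name for p in checkpoint_dir.glob("*.safetensors")]
print("exact match:", wanted in on_disk)
for name in on_disk:
    if name.lower() == wanted.lower() and name != wanted:
        print(f"near miss: preset has {wanted!r}, disk has {name!r}")
```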

eddyizm commented 2 months ago

Very strange indeed. Nothing has changed with my system as far as I can tell, and I am having the same issue on macOS and Windows 10.

I appreciate you validating that it is working everywhere else; that helps me determine the issue is on my system.