lllyasviel / Fooocus


A user warning #1487

Closed: vanguard-bit closed this issue 8 months ago

vanguard-bit commented 9 months ago

[Fooocus Model Management] Moving model(s) has taken 3.30 seconds
0%| | 0/60 [00:00<?, ?it/s]
C:\Users\Prajwal\stable_diffusion\Fooocus\ldm_patched\ldm\modules\attention.py:318: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:281.)
out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=mask, dropout_p=0.0, is_causal=False)

It just throws this warning. Should I be worried about it? (The software is running without a hitch.)
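For context, the warning comes from torch.nn.functional.scaled_dot_product_attention (SDPA), which picks among flash, memory-efficient, and plain math kernels; when the flash kernel is not compiled into the build, it simply falls back to another backend, which is why generation still completes. As a minimal sketch (plain torch, nothing Fooocus-specific), you can print the user-level backend toggles SDPA consults:

```python
import torch

# Minimal sketch (plain torch, nothing Fooocus-specific): print the
# user-level toggles SDPA consults when picking a kernel. All three
# default to True; the UserWarning above only means the flash kernel is
# missing from this particular build, so SDPA falls back to the
# memory-efficient or math implementation and output is unaffected.
print("torch:", torch.__version__)
print("flash SDP enabled:        ", torch.backends.cuda.flash_sdp_enabled())
print("mem-efficient SDP enabled:", torch.backends.cuda.mem_efficient_sdp_enabled())
print("math SDP enabled:         ", torch.backends.cuda.math_sdp_enabled())
```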

mashb1t commented 8 months ago

Please provide more information, such as the full terminal output as well as the version of torch you're using.
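A quick way to collect those details (a minimal sketch, assuming a standard torch install):

```python
import torch

# One way to gather the requested details: the torch version, the CUDA
# version the wheel was built against, and the detected GPU.
print("torch version:", torch.__version__)
print("CUDA (build):  ", torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```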

vanguard-bit commented 8 months ago

C:\Fooocus>python3 entry_with_update.py
Already up-to-date
Update succeeded.
[System ARGV] ['entry_with_update.py']
Python 3.11.7 (tags/v3.11.7:fa7a6f2, Dec 4 2023, 19:24:49) [MSC v.1937 64 bit (AMD64)]
Fooocus version: 2.1.856
Running on local URL: http://127.0.0.1:7865

To create a public link, set share=True in launch().
Total VRAM 4096 MB, total RAM 15711 MB
Trying to enable lowvram mode because your GPU seems to have 4GB or less. If you don't want this use: --always-normal-vram
Set vram state to: LOW_VRAM
Always offload VRAM
Device: cuda:0 NVIDIA GeForce RTX 3050 Laptop GPU : native
VAE dtype: torch.bfloat16
Using pytorch cross attention
Refiner unloaded.
model_type EPS
UNet ADM Dimension 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.logit_scale'}
Base model loaded: C:\Fooocus\models\checkpoints\juggernautXL_version6Rundiffusion.safetensors
Request to load LoRAs [['sd_xl_offset_example-lora_1.0.safetensors', 0.1], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [C:\Fooocus\models\checkpoints\juggernautXL_version6Rundiffusion.safetensors].
Loaded LoRA [C:\Fooocus\models\loras\sd_xl_offset_example-lora_1.0.safetensors] for UNet [C:\Fooocus\models\checkpoints\juggernautXL_version6Rundiffusion.safetensors] with 788 keys at weight 0.1.
Fooocus V2 Expansion: Vocab with 642 words.
Fooocus Expansion engine loaded for cpu, use_fp16 = False.
Requested to load SDXLClipModel
Requested to load GPT2LMHeadModel
Loading 2 new models
App started successful. Use the app with http://127.0.0.1:7865/ or 127.0.0.1:7865
[Parameters] Adaptive CFG = 7
[Parameters] Sharpness = 2
[Parameters] ADM Scale = 1.5 : 0.8 : 0.3
[Parameters] CFG = 4.0
[Parameters] Seed = 9213812642016021839
[Parameters] Sampler = dpmpp_3m_sde_gpu - karras
[Parameters] Steps = 60 - 30
[Fooocus] Initializing ...
[Fooocus] Loading models ...
Refiner unloaded.
[Fooocus] Processing prompts ...
[Fooocus] Encoding positive #1 ...
[Fooocus] Encoding positive #2 ...
[Fooocus] Encoding negative #1 ...
[Fooocus] Encoding negative #2 ...
[Parameters] Denoising Strength = 1.0
[Parameters] Initial Latent shape: Image Space (896, 1152)
Preparation time: 14.46 seconds
[Sampler] refiner_swap_method = joint
[Sampler] sigma_min = 0.0291671771556139, sigma_max = 14.614643096923828
Requested to load SDXL
Loading 1 new model
loading in lowvram mode 1753.544020652771
[Fooocus Model Management] Moving model(s) has taken 38.96 seconds
0%| | 0/60 [00:00<?, ?it/s]
C:\Fooocus\ldm_patched\ldm\modules\attention.py:318: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:281.)
out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=mask, dropout_p=0.0, is_causal=False)
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 60/60 [03:38<00:00, 3.65s/it]
Requested to load AutoencoderKL
Loading 1 new model
[Fooocus Model Management] Moving model(s) has taken 1.76 seconds
Image generated with private log at: C:\Fooocus\outputs\2023-12-29\log.html
Generating and saving time: 265.21 seconds

torch version: torch-2.3.0.dev20231214+cu121

mashb1t commented 8 months ago

@vanguard-bit You seem to be using a dev build of torch, but there is nothing to worry about. For me it works without errors using torch 2.1.0 and CUDA 12.1. Feel free to downgrade and check whether that works better for you.
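To verify whether a given build actually ships the flash kernel after downgrading, one option (a sketch using the torch 2.0/2.1-era torch.backends.cuda.sdp_kernel context manager, nothing Fooocus-specific) is to force the flash backend only, so a build without it raises an error instead of silently falling back:

```python
import torch
import torch.nn.functional as F

# Sketch using the torch 2.0/2.1-era API: restrict SDPA to the flash
# backend only. On a build compiled with flash attention this runs
# cleanly; on one without it, it raises a RuntimeError instead of
# silently falling back, so the difference is easy to see.
q = k = v = torch.randn(1, 8, 128, 64, device="cuda", dtype=torch.float16)
with torch.backends.cuda.sdp_kernel(enable_flash=True,
                                    enable_math=False,
                                    enable_mem_efficient=False):
    out = F.scaled_dot_product_attention(q, k, v)
print("flash attention kernel ran, output shape:", tuple(out.shape))
```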

vanguard-bit commented 8 months ago

Ok, I will try that out. Thank you.