lllyasviel / stable-diffusion-webui-forge

GNU Affero General Public License v3.0

flux1-schnell-(ANY).gguf throws the following error when loading. #1396

Open VeteranXT opened 3 months ago

VeteranXT commented 3 months ago
Loading Model: {'checkpoint_info': {'filename': 'E:\\Storage\\Apps\\AI_Geneartor\\stable-diffusion-webui-amdgpu-forge\\models\\Stable-diffusion\\flux1-schnell-Q4_1.gguf', 'hash': '00ccfa75'}, 'additional_modules': ['E:\\Storage\\Apps\\AI_Geneartor\\stable-diffusion-webui-amdgpu-forge\\models\\VAE\\ae.safetensors', 'E:\\Storage\\Apps\\AI_Geneartor\\stable-diffusion-webui-amdgpu-forge\\models\\text_encoder\\clip_l.safetensors', 'E:\\Storage\\Apps\\AI_Geneartor\\stable-diffusion-webui-amdgpu-forge\\models\\text_encoder\\t5xxl_fp8_e4m3fn.safetensors'], 'unet_storage_dtype': None}
Traceback (most recent call last):
  File "E:\Storage\Apps\AI_Geneartor\stable-diffusion-webui-amdgpu-forge\modules_forge\main_thread.py", line 30, in work
    self.result = self.func(*self.args, **self.kwargs)
  File "E:\Storage\Apps\AI_Geneartor\stable-diffusion-webui-amdgpu-forge\modules\txt2img.py", line 110, in txt2img_function
    processed = processing.process_images(p)
  File "E:\Storage\Apps\AI_Geneartor\stable-diffusion-webui-amdgpu-forge\modules\processing.py", line 790, in process_images
    p.sd_model, just_reloaded = forge_model_reload()
  File "E:\Storage\Apps\AI_Geneartor\stable-diffusion-webui-amdgpu-forge\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\Storage\Apps\AI_Geneartor\stable-diffusion-webui-amdgpu-forge\modules\sd_models.py", line 501, in forge_model_reload
    sd_model = forge_loader(state_dict, additional_state_dicts=additional_state_dicts)
  File "E:\Storage\Apps\AI_Geneartor\stable-diffusion-webui-amdgpu-forge\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "E:\Storage\Apps\AI_Geneartor\stable-diffusion-webui-amdgpu-forge\backend\loader.py", line 261, in forge_loader
    component = load_huggingface_component(estimated_config, component_name, lib_name, cls_name, local_path, component_sd)
  File "E:\Storage\Apps\AI_Geneartor\stable-diffusion-webui-amdgpu-forge\backend\loader.py", line 56, in load_huggingface_component
    load_state_dict(model, state_dict, ignore_start='loss.')
  File "E:\Storage\Apps\AI_Geneartor\stable-diffusion-webui-amdgpu-forge\backend\state_dict.py", line 5, in load_state_dict
    missing, unexpected = model.load_state_dict(sd, strict=False)
  File "E:\Storage\Apps\AI_Geneartor\stable-diffusion-webui-amdgpu-forge\venv\lib\site-packages\torch\nn\modules\module.py", line 2189, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for IntegratedAutoencoderKL:
        size mismatch for encoder.conv_out.weight: copying a param with shape torch.Size([8, 512, 3, 3]) from checkpoint, the shape in current model is torch.Size([32, 512, 3, 3]).
        size mismatch for encoder.conv_out.bias: copying a param with shape torch.Size([8]) from checkpoint, the shape in current model is torch.Size([32]).
        size mismatch for decoder.conv_in.weight: copying a param with shape torch.Size([512, 4, 3, 3]) from checkpoint, the shape in current model is torch.Size([512, 16, 3, 3]).
VeteranXT commented 3 months ago

I know I'm using a different fork, but it tracks this repo as a downstream, just adapted for AMD.

russjr08 commented 4 days ago

This happens to me when I try to use Flux without deselecting the SD/SDXL modules, such as sdxl_vae.safetensors. After deselecting that, I was able to run Flux models in the safetensors format without further errors (though I haven't tried GGUF-based formats).

VeteranXT commented 4 days ago

GGUF models use clip_g, the Flux VAE, and t5xxl. Use those, because when I tried the SD3.5 VAE I got black or messy images.