invoke-ai / InvokeAI

Invoke is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry-leading WebUI and serves as the foundation for multiple commercial products.
https://invoke-ai.github.io/InvokeAI/
Apache License 2.0

[bug]: (specific) Flux LoRAs causing AssertionError #7160

Open AXOca opened 1 month ago

AXOca commented 1 month ago

Is there an existing issue for this problem?

Operating system

Windows

GPU vendor

Nvidia (CUDA)

GPU model

4090

GPU VRAM

24GB

Version number

5.2.0

Browser

Firefox 131.0.3

Python dependencies

No response

What happened

I added a LoRA from DrMando, used the Flux Dev Quantized model (as provided by InvokeAI), added a prompt, and hit Generate, which caused this error:

```
[2024-10-22 04:57:50,535]::[InvokeAI]::ERROR --> Error while invoking session fe0e6b5c-0f6f-4256-91c8-750d9317b346, invocation d5c79114-75f9-4530-a101-e0d28c4c9b0c (flux_text_encoder):
[2024-10-22 04:57:50,535]::[InvokeAI]::ERROR --> Traceback (most recent call last):
  File "G:\Data\Packages\InvokeAI\invokeai\app\services\session_processor\session_processor_default.py", line 129, in run_node
    output = invocation.invoke_internal(context=context, services=self._services)
  File "G:\Data\Packages\InvokeAI\invokeai\app\invocations\baseinvocation.py", line 290, in invoke_internal
    output = self.invoke(context)
  File "G:\Data\Packages\InvokeAI\venv\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "G:\Data\Packages\InvokeAI\invokeai\app\invocations\flux_text_encoder.py", line 51, in invoke
    clip_embeddings = self._clip_encode(context)
  File "G:\Data\Packages\InvokeAI\invokeai\app\invocations\flux_text_encoder.py", line 100, in _clip_encode
    exit_stack.enter_context(
  File "contextlib.py", line 492, in enter_context
  File "contextlib.py", line 135, in __enter__
  File "G:\Data\Packages\InvokeAI\invokeai\backend\lora\lora_patcher.py", line 42, in apply_lora_patches
    for patch, patch_weight in patches:
  File "G:\Data\Packages\InvokeAI\invokeai\app\invocations\flux_text_encoder.py", line 121, in _clip_lora_iterator
    lora_info = context.models.load(lora.lora)
  File "G:\Data\Packages\InvokeAI\invokeai\app\services\shared\invocation_context.py", line 370, in load
    return self._services.model_manager.load.load_model(model, _submodel_type)
  File "G:\Data\Packages\InvokeAI\invokeai\app\services\model_load\model_load_default.py", line 70, in load_model
    ).load_model(model_config, submodel_type)
  File "G:\Data\Packages\InvokeAI\invokeai\backend\model_manager\load\load_default.py", line 56, in load_model
    locker = self._load_and_cache(model_config, submodel_type)
  File "G:\Data\Packages\InvokeAI\invokeai\backend\model_manager\load\load_default.py", line 77, in _load_and_cache
    loaded_model = self._load_model(config, submodel_type)
  File "G:\Data\Packages\InvokeAI\invokeai\backend\model_manager\load\model_loaders\lora.py", line 76, in _load_model
    model = lora_model_from_flux_diffusers_state_dict(state_dict=state_dict, alpha=None)
  File "G:\Data\Packages\InvokeAI\invokeai\backend\lora\conversions\flux_diffusers_lora_conversion_utils.py", line 171, in lora_model_from_flux_diffusers_state_dict
    add_qkv_lora_layer_if_present(
  File "G:\Data\Packages\InvokeAI\invokeai\backend\lora\conversions\flux_diffusers_lora_conversion_utils.py", line 71, in add_qkv_lora_layer_if_present
    assert all(keys_present) or not any(keys_present)
AssertionError
```
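For context, the assertion at the bottom of the trace is an all-or-nothing check: judging from the function name `add_qkv_lora_layer_if_present`, the Diffusers-format FLUX LoRA converter appears to fuse the separate Q/K/V projection LoRA weights for an attention block into a single QKV layer, and it asserts that the state dict contains either all of those keys or none of them. A minimal sketch of the failing invariant (names are illustrative, not the exact InvokeAI source):

```python
import torch

def add_qkv_lora_layer_if_present_sketch(
    state_dict: dict[str, torch.Tensor],
    src_keys: list[str],  # illustrative: the to_q/to_k/to_v LoRA keys for one block
) -> None:
    keys_present = [key in state_dict for key in src_keys]
    # All-or-nothing: either every Q/K/V LoRA key for this attention block
    # exists (so they can be fused into one QKV layer) or none of them do.
    # A LoRA that ships only a subset of these projections trips this assert.
    assert all(keys_present) or not any(keys_present)
    if not any(keys_present):
        return
    # ...fuse the Q/K/V LoRA weights into a single QKV layer here...
```

So a LoRA trained in a way that emits only some of the Q/K/V projection weights would fail exactly here, which would fit the pattern reported below of LoRAs from one particular training pipeline.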

What you expected to happen

I expected it to generate normally; I'm not exactly sure what else to write here.

How to reproduce the problem

No response

Additional context

I've asked DrMando; if he responds, I can attach more info about what he used to train it and where it was trained.

Discord username

No response

AXOca commented 1 month ago

For training they use CivitAI's Flux Training, specifically the "rapid" setting.

freelancer2000 commented 1 month ago

This also happened to me when using specific FLUX models (checkpoints). Which one are you using?

prairiefawkes commented 2 weeks ago

I'm getting this error too, not only with this LoRA (https://civitai.com/models/781497/dreamy-illustrations-flux-lora?modelVersionId=873972) but also with a LoRA of my friend that I trained myself on Civitai using the rapid LoRA setting.
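If anyone wants to check an affected file before loading it, here is a quick sketch that lists the attention-projection LoRA keys in a `.safetensors` file, so you can see whether `to_q`/`to_k`/`to_v` appear as a complete set or only partially (the file path is a placeholder; requires the `safetensors` package):

```python
from safetensors import safe_open

# "my_flux_lora.safetensors" is a placeholder; point this at the failing LoRA.
with safe_open("my_flux_lora.safetensors", framework="pt") as f:
    for key in sorted(f.keys()):
        # Print only the Q/K/V projection LoRA keys relevant to the assertion.
        if any(proj in key for proj in ("to_q", "to_k", "to_v")):
            print(key)
```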