Some Flux LoRAs fail to run at all in Invoke, even though they run perfectly in Forge. An example:
https://civitai.com/models/789690/abstract-oil-spill-generator
[2024-10-28 18:51:38,714]::[InvokeAI]::ERROR --> Error while invoking session fbb41412-a2df-4784-985f-70ba91d012bc, invocation 852b90b7-8c2d-429a-b847-87977e09c52f (flux_text_encoder):
[2024-10-28 18:51:38,714]::[InvokeAI]::ERROR --> Traceback (most recent call last):
  File "C:\Users\Cinem\Downloads\InvokeAI\.venv\lib\site-packages\invokeai\app\services\session_processor\session_processor_default.py", line 129, in run_node
    output = invocation.invoke_internal(context=context, services=self._services)
  File "C:\Users\Cinem\Downloads\InvokeAI\.venv\lib\site-packages\invokeai\app\invocations\baseinvocation.py", line 290, in invoke_internal
    output = self.invoke(context)
  File "C:\Users\Cinem\Downloads\InvokeAI\.venv\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\Cinem\Downloads\InvokeAI\.venv\lib\site-packages\invokeai\app\invocations\flux_text_encoder.py", line 51, in invoke
    clip_embeddings = self._clip_encode(context)
  File "C:\Users\Cinem\Downloads\InvokeAI\.venv\lib\site-packages\invokeai\app\invocations\flux_text_encoder.py", line 100, in _clip_encode
    exit_stack.enter_context(
  File "C:\Users\Cinem\AppData\Local\Programs\Python\Python310\lib\contextlib.py", line 492, in enter_context
    result = _cm_type.__enter__(cm)
  File "C:\Users\Cinem\AppData\Local\Programs\Python\Python310\lib\contextlib.py", line 135, in __enter__
    return next(self.gen)
  File "C:\Users\Cinem\Downloads\InvokeAI\.venv\lib\site-packages\invokeai\backend\lora\lora_patcher.py", line 42, in apply_lora_patches
    for patch, patch_weight in patches:
  File "C:\Users\Cinem\Downloads\InvokeAI\.venv\lib\site-packages\invokeai\app\invocations\flux_text_encoder.py", line 121, in _clip_lora_iterator
    lora_info = context.models.load(lora.lora)
  File "C:\Users\Cinem\Downloads\InvokeAI\.venv\lib\site-packages\invokeai\app\services\shared\invocation_context.py", line 375, in load
    return self._services.model_manager.load.load_model(model, _submodel_type)
  File "C:\Users\Cinem\Downloads\InvokeAI\.venv\lib\site-packages\invokeai\app\services\model_load\model_load_default.py", line 70, in load_model
    ).load_model(model_config, submodel_type)
  File "C:\Users\Cinem\Downloads\InvokeAI\.venv\lib\site-packages\invokeai\backend\model_manager\load\load_default.py", line 56, in load_model
    locker = self._load_and_cache(model_config, submodel_type)
  File "C:\Users\Cinem\Downloads\InvokeAI\.venv\lib\site-packages\invokeai\backend\model_manager\load\load_default.py", line 77, in _load_and_cache
    loaded_model = self._load_model(config, submodel_type)
  File "C:\Users\Cinem\Downloads\InvokeAI\.venv\lib\site-packages\invokeai\backend\model_manager\load\model_loaders\lora.py", line 76, in _load_model
    model = lora_model_from_flux_diffusers_state_dict(state_dict=state_dict, alpha=None)
  File "C:\Users\Cinem\Downloads\InvokeAI\.venv\lib\site-packages\invokeai\backend\lora\conversions\flux_diffusers_lora_conversion_utils.py", line 171, in lora_model_from_flux_diffusers_state_dict
    add_qkv_lora_layer_if_present(
  File "C:\Users\Cinem\Downloads\InvokeAI\.venv\lib\site-packages\invokeai\backend\lora\conversions\flux_diffusers_lora_conversion_utils.py", line 71, in add_qkv_lora_layer_if_present
    assert all(keys_present) or not any(keys_present)
AssertionError
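For context, here is a minimal sketch of the invariant that trips in `add_qkv_lora_layer_if_present`: when merging separate Q/K/V LoRA weights into one fused layer, the converter asserts that either all of the expected attention sub-keys are present in the state dict or none of them are, so a LoRA that ships a partial set (for example Q and K but no V) fails the assert. This is my reading of the assertion in the traceback; the key names below are illustrative, not InvokeAI's exact key layout.

```python
def qkv_keys_consistent(state_dict_keys: set[str], expected: list[str]) -> bool:
    """True iff all-or-none of the expected Q/K/V LoRA keys are present.

    Mirrors `assert all(keys_present) or not any(keys_present)` from the
    traceback above (sketch only, not InvokeAI's actual code).
    """
    keys_present = [key in state_dict_keys for key in expected]
    return all(keys_present) or not any(keys_present)


# Illustrative diffusers-style key names (assumed, not the real layout).
expected = [
    "attn.to_q.lora_A.weight",
    "attn.to_k.lora_A.weight",
    "attn.to_v.lora_A.weight",
]

full_set = set(expected)         # all three present -> invariant holds
partial_set = set(expected[:2])  # only q and k -> the assert would fire

print(qkv_keys_consistent(full_set, expected))     # True
print(qkv_keys_consistent(partial_set, expected))  # False
```

If this reading is right, the LoRA linked above likely contains LoRA weights for only a subset of the Q/K/V projections in at least one attention block, which Forge tolerates but Invoke's converter rejects.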
I'd really rather not use Forge, since I want everything consolidated in Invoke, but these models I train on Civitai won't run. Thank you all for the hard work!
Is there an existing issue for this problem?
Operating system
Windows
GPU vendor
Nvidia (CUDA)
GPU model
3090
GPU VRAM
24 GB
Version number
5.3.0
Browser
Brave
Python dependencies
No response
What happened
See the description and traceback above.
What you expected to happen
The LoRA should load and apply without errors, as it does in Forge. Please take a look, thank you!
How to reproduce the problem
No response
Additional context
No response
Discord username
No response