invoke-ai / InvokeAI

Invoke is a leading creative engine for Stable Diffusion models, empowering professionals, artists, and enthusiasts to generate and create visual media using the latest AI-driven technologies. The solution offers an industry leading WebUI, and serves as the foundation for multiple commercial products.
https://invoke-ai.github.io/InvokeAI/
Apache License 2.0

[enhancement]: More LoRA support for FLUX #7092

Open m4iccc opened 1 week ago

m4iccc commented 1 week ago

Is there an existing issue for this?

Contact Details

No response

What should this feature add?

I'm trying to load a LoRA .safetensors file that I trained on Civitai using its Fast Flux LoRA training, but Invoke seems to have an issue loading it. Please expand LoRA compatibility for FLUX, thank you!

The error log is in the comments below:

Alternatives

No response

Additional Content

No response

mrudat commented 1 week ago

Do you get the error "Failed: Unknown LoRA type: "? Is it triggered by, for example, SameFace fix [Flux Lora]? It's only 4.5MB, so it might be a reasonable test case.
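The "Unknown LoRA type" error typically means the loader could not match the checkpoint's state-dict keys against any known layout. As a rough, hypothetical sketch (the function name and exact prefixes here are illustrative assumptions, not InvokeAI's actual detection code), format detection usually looks at key prefixes, e.g. diffusers-style keys starting with `transformer.` versus kohya-style keys starting with `lora_unet_`:

```python
# Hypothetical sketch of LoRA-format detection by state-dict key prefix.
# The prefixes below reflect common diffusers/kohya conventions; InvokeAI's
# real logic lives in its flux LoRA conversion utilities and may differ.

def guess_flux_lora_format(keys):
    """Return a best-guess format name for a FLUX LoRA state dict."""
    if any(k.startswith("transformer.") for k in keys):
        return "diffusers"  # diffusers-trained LoRAs prefix keys with the module path
    if any(k.startswith("lora_unet_") for k in keys):
        return "kohya"  # kohya-ss script output uses flattened, underscore-joined names
    return None  # unrecognized layout -> an "Unknown LoRA type"-style error


print(guess_flux_lora_format(
    ["transformer.single_transformer_blocks.0.attn.to_q.lora_A.weight"]
))  # -> diffusers
```

Dumping the first few keys of an affected file (e.g. with the `safetensors` library) and comparing them against the layouts the loader recognizes is usually the quickest way to confirm which case a failing LoRA falls into.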

m4iccc commented 1 week ago

It shows the following in red letters: [EDIT: I've replaced the old error log with the actual error this LoRA produces]

```
[2024-10-18 09:27:23,105]::[InvokeAI]::ERROR --> Error while invoking session f92f86cd-4b7e-4d07-b9d8-b2d040cb8753, invocation abd1c755-c963-43af-86da-3325f0649d9d (flux_text_encoder):
[2024-10-18 09:27:23,106]::[InvokeAI]::ERROR --> Traceback (most recent call last):
  File "C:\Users\Cinem\Downloads\InvokeAI\.venv\lib\site-packages\invokeai\app\services\session_processor\session_processor_default.py", line 129, in run_node
    output = invocation.invoke_internal(context=context, services=self._services)
  File "C:\Users\Cinem\Downloads\InvokeAI\.venv\lib\site-packages\invokeai\app\invocations\baseinvocation.py", line 290, in invoke_internal
    output = self.invoke(context)
  File "C:\Users\Cinem\Downloads\InvokeAI\.venv\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\Cinem\Downloads\InvokeAI\.venv\lib\site-packages\invokeai\app\invocations\flux_text_encoder.py", line 51, in invoke
    clip_embeddings = self._clip_encode(context)
  File "C:\Users\Cinem\Downloads\InvokeAI\.venv\lib\site-packages\invokeai\app\invocations\flux_text_encoder.py", line 100, in _clip_encode
    exit_stack.enter_context(
  File "C:\Users\Cinem\AppData\Local\Programs\Python\Python310\lib\contextlib.py", line 492, in enter_context
    result = _cm_type.__enter__(cm)
  File "C:\Users\Cinem\AppData\Local\Programs\Python\Python310\lib\contextlib.py", line 135, in __enter__
    return next(self.gen)
  File "C:\Users\Cinem\Downloads\InvokeAI\.venv\lib\site-packages\invokeai\backend\lora\lora_patcher.py", line 42, in apply_lora_patches
    for patch, patch_weight in patches:
  File "C:\Users\Cinem\Downloads\InvokeAI\.venv\lib\site-packages\invokeai\app\invocations\flux_text_encoder.py", line 121, in _clip_lora_iterator
    lora_info = context.models.load(lora.lora)
  File "C:\Users\Cinem\Downloads\InvokeAI\.venv\lib\site-packages\invokeai\app\services\shared\invocation_context.py", line 370, in load
    return self._services.model_manager.load.load_model(model, _submodel_type)
  File "C:\Users\Cinem\Downloads\InvokeAI\.venv\lib\site-packages\invokeai\app\services\model_load\model_load_default.py", line 70, in load_model
    ).load_model(model_config, submodel_type)
  File "C:\Users\Cinem\Downloads\InvokeAI\.venv\lib\site-packages\invokeai\backend\model_manager\load\load_default.py", line 56, in load_model
    locker = self._load_and_cache(model_config, submodel_type)
  File "C:\Users\Cinem\Downloads\InvokeAI\.venv\lib\site-packages\invokeai\backend\model_manager\load\load_default.py", line 77, in _load_and_cache
    loaded_model = self._load_model(config, submodel_type)
  File "C:\Users\Cinem\Downloads\InvokeAI\.venv\lib\site-packages\invokeai\backend\model_manager\load\model_loaders\lora.py", line 76, in _load_model
    model = lora_model_from_flux_diffusers_state_dict(state_dict=state_dict, alpha=None)
  File "C:\Users\Cinem\Downloads\InvokeAI\.venv\lib\site-packages\invokeai\backend\lora\conversions\flux_diffusers_lora_conversion_utils.py", line 171, in lora_model_from_flux_diffusers_state_dict
    add_qkv_lora_layer_if_present(
  File "C:\Users\Cinem\Downloads\InvokeAI\.venv\lib\site-packages\invokeai\backend\lora\conversions\flux_diffusers_lora_conversion_utils.py", line 71, in add_qkv_lora_layer_if_present
    assert all(keys_present) or not any(keys_present)
AssertionError
```
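The final frame is informative: `add_qkv_lora_layer_if_present` asserts that the checkpoint contains either all or none of the Q/K/V projection keys for an attention block, so a LoRA that trained only a subset of them fails the assertion. A minimal sketch of that all-or-nothing check (the helper body and key names below are illustrative assumptions, not InvokeAI's exact code):

```python
# Hypothetical sketch of the all-or-nothing Q/K/V key check behind the
# AssertionError above. Key names are illustrative diffusers-style examples.

def add_qkv_lora_layer_if_present(state_dict, src_keys):
    """Fuse per-projection Q/K/V LoRA weights into one layer, but only
    if the checkpoint provides either all of the keys or none of them."""
    keys_present = [key in state_dict for key in src_keys]
    # A LoRA that trained only some of the Q/K/V projections fails here.
    assert all(keys_present) or not any(keys_present)
    if not any(keys_present):
        return None
    return {key: state_dict[key] for key in src_keys}


keys = [
    "transformer.blocks.0.attn.to_q.lora_A.weight",
    "transformer.blocks.0.attn.to_k.lora_A.weight",
    "transformer.blocks.0.attn.to_v.lora_A.weight",
]
# A checkpoint that trained only the "to_q" projection trips the assertion:
partial = {"transformer.blocks.0.attn.to_q.lora_A.weight": "tensor"}
try:
    add_qkv_lora_layer_if_present(partial, keys)
except AssertionError:
    print("partial Q/K/V LoRA rejected")  # -> partial Q/K/V LoRA rejected
```

If this is indeed the failure mode, the fix on the InvokeAI side would be to handle partially-present Q/K/V LoRA keys gracefully rather than asserting.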

psychedelicious commented 5 days ago

Can you please link to an affected LoRA so we have something to test with?

m4iccc commented 4 days ago

https://civitai.com/models/789690/abstract-oil-spill-generator

Thank you guys!
