KohakuBlueleaf / LyCORIS

Lora beYond Conventional methods, Other Rank adaptation Implementations for Stable diffusion.

Flux LyCORIS config broken with version 3.0.1.dev5 #208

Closed: mhirki closed this issue 1 month ago

mhirki commented 1 month ago

Version 3.0.1.dev5 seems to have broken one of my earlier LyCORIS configurations:

{
    "algo": "lokr",
    "multiplier": 1.0,
    "linear_dim": 10000,
    "linear_alpha": 1,
    "factor": 16,
    "apply_preset": {
        "target_module": [
            "FluxTransformerBlock",
            "FluxSingleTransformerBlock"
        ],
        "module_algo_map": {
            "Attention": {
                "factor": 16
            },
            "FeedForward": {
                "factor": 8
            }
        }
    }
}
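
For context, a preset like this is normally registered with LyCORIS before the network is built. A minimal sketch of that flow, assuming the standalone wrapper API and a transformer variable holding the Flux model (the file path and surrounding setup are placeholders, not SimpleTuner's actual code):

import json

from lycoris.wrapper import LycorisNetwork, create_lycoris

# Load the LyCORIS config shown above (path is hypothetical).
with open("lycoris_config.json") as f:
    config = json.load(f)

# Register target_module / module_algo_map before the network is created.
LycorisNetwork.apply_preset(config.get("apply_preset", {}))

# transformer is assumed to be the Flux transformer module being trained.
lycoris_network = create_lycoris(
    transformer,
    multiplier=config["multiplier"],
    linear_dim=config["linear_dim"],
    linear_alpha=config["linear_alpha"],
    algo=config["algo"],
    factor=config["factor"],
)
lycoris_network.apply_to()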

SimpleTuner is giving me this error:

2024-09-01 22:49:41|[LyCORIS]-INFO: Using rank adaptation algo: lokr
2024-09-01 22:49:41|[LyCORIS]-INFO: Use Dropout value: 0.0
2024-09-01 22:49:41|[LyCORIS]-INFO: Create LyCORIS Module
2024-09-01 22:49:41|[LyCORIS]-WARNING: Using bnb/quanto/optimum-quanto with LyCORIS will enable force-bypass mode.
2024-09-01 22:49:41|[LyCORIS]-WARNING: lora_dim 10000 is too large for dim=18432 and factor=16, using full matrix mode.
2024-09-01 22:49:41|[LyCORIS]-WARNING: lora_dim 10000 is too large for dim=3072 and factor=16, using full matrix mode.
2024-09-01 22:49:41|[LyCORIS]-WARNING: lora_dim 10000 is too large for dim=12288 and factor=8, using full matrix mode.
2024-09-01 22:49:41|[LyCORIS]-WARNING: lora_dim 10000 is too large for dim=9216 and factor=16, using full matrix mode.
2024-09-01 22:49:41|[LyCORIS]-WARNING: lora_dim 10000 is too large for dim=12288 and factor=16, using full matrix mode.
2024-09-01 22:49:41|[LyCORIS]-WARNING: lora_dim 10000 is too large for dim=15360 and factor=16, using full matrix mode.
2024-09-01 22:49:41|[LyCORIS]-INFO: create LyCORIS: 836 modules.
2024-09-01 22:49:41|[LyCORIS]-INFO: module type table: {'LokrModule': 836}
duplicated lora name: lycoris_transformer_blocks_0_attn_to_q
Traceback (most recent call last):
  File "/nvme/home/mikaelh/Stable_Diffusion/bghira/MySimpleTuner2/train.py", line 41, in <module>
    trainer.init_trainable_peft_adapter()
  File "/nvme/home/mikaelh/Stable_Diffusion/bghira/MySimpleTuner2/helpers/training/trainer.py", line 777, in init_trainable_peft_adapter
    self.lycoris_wrapped_network = create_lycoris(
                                   ^^^^^^^^^^^^^^^
  File "/nvme/home/mikaelh/Stable_Diffusion/bghira/SimpleTuner.latest/.venv/lib/python3.11/site-packages/lycoris/wrapper.py", line 101, in create_lycoris
    network = LycorisNetwork(
              ^^^^^^^^^^^^^^^
  File "/nvme/home/mikaelh/Stable_Diffusion/bghira/SimpleTuner.latest/.venv/lib/python3.11/site-packages/lycoris/wrapper.py", line 429, in __init__
    lora.lora_name not in names
AssertionError: duplicated lora name: lycoris_transformer_blocks_0_attn_to_q
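
My reading of the cause (not confirmed): Attention and FeedForward from module_algo_map are themselves children of the FluxTransformerBlock / FluxSingleTransformerBlock classes listed in target_module, so 3.0.1.dev5 appears to collect the same leaf layers through both lists and create two LoKr modules that share one name, which trips the uniqueness assertion at wrapper.py line 429. A small illustration of the collision, with the naming scheme inferred from the error message:

# "transformer_blocks.0.attn" sits inside FluxTransformerBlock (matched via
# target_module) and is itself an Attention (matched via module_algo_map),
# so its child "to_q" is wrapped once per match. Both wrappers derive the
# same name, violating the lora_name uniqueness check in LycorisNetwork.
module_path = "transformer_blocks.0.attn.to_q"
lora_name = "lycoris_" + module_path.replace(".", "_")
print(lora_name)  # lycoris_transformer_blocks_0_attn_to_q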

A quick hack to wrapper.py gets rid of the error:

--- wrapper.py.bak      2024-09-01 23:22:24.549579834 +0300
+++ wrapper.py  2024-09-01 23:22:37.647056613 +0300
@@ -406,7 +406,7 @@
             module,
             list(set([
                 *LycorisNetwork.TARGET_REPLACE_MODULE,
-                *LycorisNetwork.MODULE_ALGO_MAP.keys(),
+                #*LycorisNetwork.MODULE_ALGO_MAP.keys(),
             ])),
             list(set([
                 *LycorisNetwork.TARGET_REPLACE_NAME,

But there's probably a better way to fix this.
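
Instead of dropping MODULE_ALGO_MAP.keys() from the search list, a less invasive workaround might be to deduplicate the created modules by lora_name before the assertion runs. A sketch of that idea (only an illustration, not the change that actually landed in #209):

# Hypothetical helper: keep the first module created for each lora_name and
# drop later duplicates before LycorisNetwork verifies uniqueness.
def dedup_by_lora_name(loras):
    seen = set()
    unique = []
    for lora in loras:
        if lora.lora_name in seen:
            continue
        seen.add(lora.lora_name)
        unique.append(lora)
    return unique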

KohakuBlueleaf commented 1 month ago

@AmericanPresidentJimmyCarter Check this

AmericanPresidentJimmyCarter commented 1 month ago

I will fix this tomorrow and add some tests with diffusers models.

AmericanPresidentJimmyCarter commented 1 month ago

Fixed by #209