huggingface / diffusers

🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.
https://huggingface.co/docs/diffusers
Apache License 2.0

Updated from 23.1 to 27.2 and generations are off. #7669

Closed: JemiloII closed this issue 7 months ago

JemiloII commented 7 months ago

Describe the bug

So I've just been using the Abyss Orange Mix model from WarriorMomma in the diffusers file format. I now get messages about how the CLIPTextModel isn't loading values anymore. Please keep in mind that it was loading them just fine before.

@sayakpaul Is this something new where things are intentionally not being loaded? How can I force them to load? The LoRAs hardly do anything anymore; it doesn't matter what scale I set them to, it's like they're nonexistent. I saw some "LoRA changes" mentioned, but I don't understand why they would make my LoRAs stop working.

This is how I load LoRAs:

def load_loras(pipe, settings):
    # hash_dict is a helper defined elsewhere (see the sketch below).
    active_adapters = pipe.get_active_adapters()
    set_adapters_hash = hash_dict(settings["lora"])
    set_loras = []
    set_weights = []
    if len(settings["lora"]) > 0:
        pipe.enable_lora()
        print("Checking if Loras settings has changed...")
        print(f"Stored: {getattr(pipe, 'set_adapters_hash', None)}")
        print(f"Current: {set_adapters_hash}")
        if getattr(pipe, 'set_adapters_hash', None) == set_adapters_hash:
            print("Loras settings has not changed")
            return 'Loras settings has not changed'

        pipe.unfuse_lora()

        for lora in settings["lora"]:
            file_name = lora["file_name"] or lora["name"]
            adapter_name = file_name.replace(".", "")
            if file_name not in active_adapters:
                print(f"Loading Lora: {file_name}")
                try:
                    pipe.load_lora_weights(
                        f"./assets/lora/{file_name}.safetensors",
                        weight_name=f"{file_name}.safetensors",
                        adapter_name=adapter_name,
                    )
                except Exception:  # swallow errors from adapters that are already loaded
                    print("Probably loaded already")

                set_loras.append(file_name)
                set_weights.append(lora["weight"])
            else:
                print(f"Lora: {file_name} already loaded")
                set_loras.append(file_name)
                set_weights.append(lora["weight"])

        pipe.unfuse_lora()
        pipe.set_adapters(set_loras, set_weights)
        pipe.set_adapters_hash = set_adapters_hash
        pipe.fuse_lora()
    else:
        pipe.disable_lora()
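
For context, hash_dict is a small helper defined elsewhere; a minimal sketch of what such a helper could look like (an assumption, not the actual implementation) that would produce the SHA-256 hex digests seen in the logs:

import hashlib
import json

def hash_dict(d):
    # Hypothetical helper: stable SHA-256 digest of a JSON-serializable
    # settings structure (matches the 64-char hex hashes in the logs).
    return hashlib.sha256(
        json.dumps(d, sort_keys=True, default=str).encode("utf-8")
    ).hexdigest()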

I feel like I'll just need to revert to 0.23.1. It feels like every time I update, things break. I remember I was stuck on 0.18.1 for the longest time because LoRAs and prompting were broken.

Reproduction

Load WarriorMomma's Abyss Orange Mix model in diffusers format along with some LoRAs (at least 4), and load the EasyNegative textual inversion embedding. Notice that on 23.1 there are no weird errors about CLIPText weights being ignored, and the LoRAs work at the scale given; a sketch of these steps follows.
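
A minimal script along these lines should reproduce it; this is a sketch assuming local placeholder paths and filenames, not the reporter's exact code:

import torch
from diffusers import StableDiffusionPipeline

# Placeholder paths; substitute local copies of the model, embedding, and LoRAs.
pipe = StableDiffusionPipeline.from_pretrained(
    "./assets/models/AOM3",  # diffusers-format folder
    torch_dtype=torch.float16,
).to("cuda")

# EasyNegative textual inversion embedding.
pipe.load_textual_inversion(
    "./assets/embeddings/easynegative.safetensors", token="easynegative"
)

# At least four LoRAs, loaded as named adapters with their scales.
loras = {"thickline_fp16": 0.5, "anmnr01AOM3A1": 0.4, "Loraeyes_V1": 0.6, "add_detail": 0.25}
for name in loras:
    pipe.load_lora_weights(f"./assets/lora/{name}.safetensors", adapter_name=name)
pipe.set_adapters(list(loras), adapter_weights=list(loras.values()))

image = pipe(
    "masterpiece, 1girl, (loraeyes:1.1)",
    negative_prompt="easynegative, (low quality:1.4)",
    num_inference_steps=43,
    guidance_scale=8,
).images[0]
image.save("repro.png")

Per the report, 0.23.1 runs this without the CLIPTextModel warnings and the LoRAs take effect; 0.27.2 prints the warnings below and the LoRA scales barely apply.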

Logs

python main3.py
Shibiko Init
Loading pipeline components...:   0%|                                                                          | 0/6 [00:00<?, ?it/s]
D:\diffusion-ai\.venv\lib\site-packages\torch\_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly.  To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
  return self.fget.__get__(instance, owner)()
Some weights of the model checkpoint at ./assets/models/AOM3\text_encoder were not used when initializing CLIPTextModel: ['text_model.encoder.layers.10.layer_norm1.bias', 'text_model.encoder.layers.10.layer_norm1.weight', 'text_model.encoder.layers.10.layer_norm2.bias', 'text_model.encoder.layers.10.layer_norm2.weight', 'text_model.encoder.layers.10.mlp.fc1.bias', 'text_model.encoder.layers.10.mlp.fc1.weight', 'text_model.encoder.layers.10.mlp.fc2.bias', 'text_model.encoder.layers.10.mlp.fc2.weight', 'text_model.encoder.layers.10.self_attn.k_proj.bias', 'text_model.encoder.layers.10.self_attn.k_proj.weight', 'text_model.encoder.layers.10.self_attn.out_proj.bias', 'text_model.encoder.layers.10.self_attn.out_proj.weight', 'text_model.encoder.layers.10.self_attn.q_proj.bias', 'text_model.encoder.layers.10.self_attn.q_proj.weight', 'text_model.encoder.layers.10.self_attn.v_proj.bias', 'text_model.encoder.layers.10.self_attn.v_proj.weight', 'text_model.encoder.layers.11.layer_norm1.bias', 'text_model.encoder.layers.11.layer_norm1.weight', 'text_model.encoder.layers.11.layer_norm2.bias', 'text_model.encoder.layers.11.layer_norm2.weight', 'text_model.encoder.layers.11.mlp.fc1.bias', 'text_model.encoder.layers.11.mlp.fc1.weight', 'text_model.encoder.layers.11.mlp.fc2.bias', 'text_model.encoder.layers.11.mlp.fc2.weight', 'text_model.encoder.layers.11.self_attn.k_proj.bias', 'text_model.encoder.layers.11.self_attn.k_proj.weight', 'text_model.encoder.layers.11.self_attn.out_proj.bias', 'text_model.encoder.layers.11.self_attn.out_proj.weight', 'text_model.encoder.layers.11.self_attn.q_proj.bias', 'text_model.encoder.layers.11.self_attn.q_proj.weight', 'text_model.encoder.layers.11.self_attn.v_proj.bias', 'text_model.encoder.layers.11.self_attn.v_proj.weight']
- This IS expected if you are initializing CLIPTextModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing CLIPTextModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Loading pipeline components...: 100%|██████████████████████████████████████████████████████████████████| 6/6 [00:01<00:00,  5.82it/s]
D:\diffusion-ai\.venv\lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion.py:186: FutureWarning: The configuration file of this scheduler: EulerAncestralDiscreteScheduler {
  "_class_name": "EulerAncestralDiscreteScheduler",
  "_diffusers_version": "0.27.2",
  "beta_end": 0.01,
  "beta_schedule": "linear",
  "beta_start": 0.001775,
  "num_train_timesteps": 975,
  "prediction_type": "epsilon",
  "rescale_betas_zero_snr": false,
  "steps_offset": 0,
  "timestep_spacing": "linspace",
  "trained_betas": null
}
 is outdated. `steps_offset` should be set to 1 instead of 0. Please make sure to update the config accordingly as leaving `steps_offset` might led to incorrect results in future versions. If you have downloaded this checkpoint from the Hugging Face Hub, it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json` file
  deprecate("steps_offset!=1", "1.0.0", deprecation_message, standard_warn=False)
Loading Textual Inversion: easynegative
Loading Textual Inversion: bad_prompt_version2
Loading Textual Inversion: badhandv4
Loading Textual Inversion: negative_hand-neg
Loading Waifu2x model: art on GPU: 1
Using cache found in C:\Users\Shibiko/.cache\torch\hub\nagadomi_nunif_dev
65280
Server started at port 3006
Received: type: <class 'dict'>: {'type': 'create', 'token': 'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJkaXNwbGF5TmFtZSI6IlNoaWJpa28iLCJlbWFpbCI6ImluZm9Ac2hpYmlrby5haSIsImd1aWQiOiI1OTQ2ZjBiOS04OTQ1LTRiZjItYjFmMy05OGIyYjQ3MzBiODkiLCJ0aWVyIjoiZnJlZSIsImlzVmVyaWZpZWQiOmZhbHNlLCJpYXQiOjE3MTMwODgzMzEsImV4cCI6MTcxMzY5MzEzMX0.wkP9DA9XUySywS-7QFULwjTegVP0RE6XBDaJFtixpL4', 'image': {'seed': 1808409544983192, 'prompt': 'masterpiece, perfect lighting, front lighting, anmnr, highres, 1girl, cute girl, (loraeyes:1.1), (sliced bob:1.3), (light blue hair:1.2), red eyes, (pink shirt:1.2), arms behind back, jean shorts,', 'negative_prompt': '(easyneggative:1.2), (bad_prompt_version_2:1.2), (badhandv4:1.2), (negative_hand-neg:1.2), (low quality:1.4), (worst quality:1.4), (signature:1.2), bad anatomy, bad hands, bad feet, bad face, anatomical nonsense, lowres, mutated hands, photorealistic, extra hands, fat, futa, hands on face, nsfw, nude, jeans, collar, (black), (tank top), wife beater, (sleeveless:1.2), skirt, dress, dark background, dark shadows, tucked in, ponytail,', 'width': 512, 'height': 768, 'guidance_scale': 8, 'num_inference_steps': 43, 'clip_skip': 2}, 'scheduler': {'name': 'euler', 'beta_start': 0.0001775, 'beta_end': 0.012, 'beta_schedule': 'linear', 'num_train_timesteps': 1000, 'prediction_type': 'epsilon', 'steps_offset': 1, 'timestep_spacing': 'linspace', 'noise_sampler_seed': 0, 'use_karras_sigmas': True}, 'lora': [{'display_name': 'Thicker Lines Anime Style LoRA Mix', 'file_name': 'thickline_fp16', 'weight': 0.5, 'trigger_words': None, 'nsfw': False}, {'display_name': 'Anime Style LoRA with Better Flat Color v1.0 for AOM3A1_orangemixs', 'file_name': 'anmnr01AOM3A1', 'weight': 0.4, 'tags': 'anime, style, low-file size, art style, anime girl, flat color', 'trigger_words': 'anmnr', 'nsfw': False}, {'display_name': 'Squeezer LoRA', 'file_name': 'Squeezer2', 'weight': -0.1, 'tags': 'anime, character', 'trigger_words': '', 'nsfw': False}, {'display_name': 'Eye - LoRa Eyes_V1.0', 'file_name': 'Loraeyes_V1', 'weight': 0.6, 'tags': 'anime, eyes, reflection', 'trigger_words': 'loraeyes', 'nsfw': False}, {'display_name': 'Detail Tweaker LoRA', 'file_name': 'add_detail', 'weight': 0.25, 'tags': None, 'trigger_words': '', 'nsfw': False}, {'display_name': 'All-in-one Hairstyle Bundle v1.0', 'file_name': 'n15g_aio_hairstyles-1.0', 'weight': 0.5, 'tags': 'style, hairstyle, bundle, all-in-one', 'trigger_words': 'big hair, crown braid, double bun, floofy bob, hair over one eye, hime style, longtail bob, low ponytail, mega ponytail, mega side ponytail, mega twin drills, mega twintails, ojou curls, one side up, single braid, sliced bob, twin braids, two side up', 'nsfw': False}, {'display_name': 'Better Legwears offset2.0', 'file_name': 'BlackStocking2', 'weight': 0.5, 'tags': 'anime, girl, sexy, style, beautiful, fantasy', 'trigger_words': 'legs,pantyhose, wet clothes, public tattoo', 'nsfw': False}], 'plugins': {'after_detailer': {'enabled': True, 'mask_dilation': 4, 'mask_blur': 4, 'mask_blur_type': 'gaussian', 'mask_padding': 32, 'fp16': True, 'fuse': True, 'mask_image_enhance': {'brightness': 1, 'contrast': 1, 'color': 1, 'sharpness': 1}}, 'lut': {'enabled': True, 'file_name': 'color-balance', 'name': 'Color Balance'}, 'remove_background': {'enabled': False}, 'upscaler': {'enabled': False, 'scale': 2}, 'waifu2x': {'enabled': True, 'noise_level': 3, 'scale': 1}}, 'experimental': {'auto_loras': False, 'auto_prompt': False, 'auto_negative_prompt': False, 'auto_trigger_words': True, 'enable_pixelated_loader': False}, 'uuid': 'f74c183b-f429-4c9e-a51b-8e1ad7dab9e5', 'user': {'displayName': 'Shibiko', 'guid': '5946f0b9-8945-4bf2-b1f3-98b2b4730b89'}}
Creating
Shibiko Create
Checking if Loras settings has changed...
Stored: None
Current: 9c17e56b67301668cba81e57187b57558b245acb10bdb823f923b0c0e8e000da
Loading Lora: thickline_fp16
Loading adapter weights from None led to unexpected keys not found in the model:  ['text_model.encoder.layers.10.mlp.fc1.lora_A.thickline_fp16.weight', 'text_model.encoder.layers.10.mlp.fc1.lora_B.thickline_fp16.weight', 'text_model.encoder.layers.10.mlp.fc2.lora_A.thickline_fp16.weight', 'text_model.encoder.layers.10.mlp.fc2.lora_B.thickline_fp16.weight', 'text_model.encoder.layers.10.self_attn.k_proj.lora_A.thickline_fp16.weight', 'text_model.encoder.layers.10.self_attn.k_proj.lora_B.thickline_fp16.weight', 'text_model.encoder.layers.10.self_attn.out_proj.lora_A.thickline_fp16.weight', 'text_model.encoder.layers.10.self_attn.out_proj.lora_B.thickline_fp16.weight', 'text_model.encoder.layers.10.self_attn.q_proj.lora_A.thickline_fp16.weight', 'text_model.encoder.layers.10.self_attn.q_proj.lora_B.thickline_fp16.weight', 'text_model.encoder.layers.10.self_attn.v_proj.lora_A.thickline_fp16.weight', 'text_model.encoder.layers.10.self_attn.v_proj.lora_B.thickline_fp16.weight', 'text_model.encoder.layers.11.mlp.fc1.lora_A.thickline_fp16.weight', 'text_model.encoder.layers.11.mlp.fc1.lora_B.thickline_fp16.weight', 'text_model.encoder.layers.11.mlp.fc2.lora_A.thickline_fp16.weight', 'text_model.encoder.layers.11.mlp.fc2.lora_B.thickline_fp16.weight', 'text_model.encoder.layers.11.self_attn.k_proj.lora_A.thickline_fp16.weight', 'text_model.encoder.layers.11.self_attn.k_proj.lora_B.thickline_fp16.weight', 'text_model.encoder.layers.11.self_attn.out_proj.lora_A.thickline_fp16.weight', 'text_model.encoder.layers.11.self_attn.out_proj.lora_B.thickline_fp16.weight', 'text_model.encoder.layers.11.self_attn.q_proj.lora_A.thickline_fp16.weight', 'text_model.encoder.layers.11.self_attn.q_proj.lora_B.thickline_fp16.weight', 'text_model.encoder.layers.11.self_attn.v_proj.lora_A.thickline_fp16.weight', 'text_model.encoder.layers.11.self_attn.v_proj.lora_B.thickline_fp16.weight'].
Loading Lora: anmnr01AOM3A1
Loading adapter weights from None led to unexpected keys not found in the model:  ['text_model.encoder.layers.10.mlp.fc1.lora_A.anmnr01AOM3A1.weight', 'text_model.encoder.layers.10.mlp.fc1.lora_B.anmnr01AOM3A1.weight', 'text_model.encoder.layers.10.mlp.fc2.lora_A.anmnr01AOM3A1.weight', 'text_model.encoder.layers.10.mlp.fc2.lora_B.anmnr01AOM3A1.weight', 'text_model.encoder.layers.10.self_attn.k_proj.lora_A.anmnr01AOM3A1.weight', 'text_model.encoder.layers.10.self_attn.k_proj.lora_B.anmnr01AOM3A1.weight', 'text_model.encoder.layers.10.self_attn.out_proj.lora_A.anmnr01AOM3A1.weight', 'text_model.encoder.layers.10.self_attn.out_proj.lora_B.anmnr01AOM3A1.weight', 'text_model.encoder.layers.10.self_attn.q_proj.lora_A.anmnr01AOM3A1.weight', 'text_model.encoder.layers.10.self_attn.q_proj.lora_B.anmnr01AOM3A1.weight', 'text_model.encoder.layers.10.self_attn.v_proj.lora_A.anmnr01AOM3A1.weight', 'text_model.encoder.layers.10.self_attn.v_proj.lora_B.anmnr01AOM3A1.weight', 'text_model.encoder.layers.11.mlp.fc1.lora_A.anmnr01AOM3A1.weight', 'text_model.encoder.layers.11.mlp.fc1.lora_B.anmnr01AOM3A1.weight', 'text_model.encoder.layers.11.mlp.fc2.lora_A.anmnr01AOM3A1.weight', 'text_model.encoder.layers.11.mlp.fc2.lora_B.anmnr01AOM3A1.weight', 'text_model.encoder.layers.11.self_attn.k_proj.lora_A.anmnr01AOM3A1.weight', 'text_model.encoder.layers.11.self_attn.k_proj.lora_B.anmnr01AOM3A1.weight', 'text_model.encoder.layers.11.self_attn.out_proj.lora_A.anmnr01AOM3A1.weight', 'text_model.encoder.layers.11.self_attn.out_proj.lora_B.anmnr01AOM3A1.weight', 'text_model.encoder.layers.11.self_attn.q_proj.lora_A.anmnr01AOM3A1.weight', 'text_model.encoder.layers.11.self_attn.q_proj.lora_B.anmnr01AOM3A1.weight', 'text_model.encoder.layers.11.self_attn.v_proj.lora_A.anmnr01AOM3A1.weight', 'text_model.encoder.layers.11.self_attn.v_proj.lora_B.anmnr01AOM3A1.weight'].
Loading Lora: Squeezer2
Loading adapter weights from None led to unexpected keys not found in the model:  ['text_model.encoder.layers.10.mlp.fc1.lora_A.Squeezer2.weight', 'text_model.encoder.layers.10.mlp.fc1.lora_B.Squeezer2.weight', 'text_model.encoder.layers.10.mlp.fc2.lora_A.Squeezer2.weight', 'text_model.encoder.layers.10.mlp.fc2.lora_B.Squeezer2.weight', 'text_model.encoder.layers.10.self_attn.k_proj.lora_A.Squeezer2.weight', 'text_model.encoder.layers.10.self_attn.k_proj.lora_B.Squeezer2.weight', 'text_model.encoder.layers.10.self_attn.out_proj.lora_A.Squeezer2.weight', 'text_model.encoder.layers.10.self_attn.out_proj.lora_B.Squeezer2.weight', 'text_model.encoder.layers.10.self_attn.q_proj.lora_A.Squeezer2.weight', 'text_model.encoder.layers.10.self_attn.q_proj.lora_B.Squeezer2.weight', 'text_model.encoder.layers.10.self_attn.v_proj.lora_A.Squeezer2.weight', 'text_model.encoder.layers.10.self_attn.v_proj.lora_B.Squeezer2.weight', 'text_model.encoder.layers.11.mlp.fc1.lora_A.Squeezer2.weight', 'text_model.encoder.layers.11.mlp.fc1.lora_B.Squeezer2.weight', 'text_model.encoder.layers.11.mlp.fc2.lora_A.Squeezer2.weight', 'text_model.encoder.layers.11.mlp.fc2.lora_B.Squeezer2.weight', 'text_model.encoder.layers.11.self_attn.k_proj.lora_A.Squeezer2.weight', 'text_model.encoder.layers.11.self_attn.k_proj.lora_B.Squeezer2.weight', 'text_model.encoder.layers.11.self_attn.out_proj.lora_A.Squeezer2.weight', 'text_model.encoder.layers.11.self_attn.out_proj.lora_B.Squeezer2.weight', 'text_model.encoder.layers.11.self_attn.q_proj.lora_A.Squeezer2.weight', 'text_model.encoder.layers.11.self_attn.q_proj.lora_B.Squeezer2.weight', 'text_model.encoder.layers.11.self_attn.v_proj.lora_A.Squeezer2.weight', 'text_model.encoder.layers.11.self_attn.v_proj.lora_B.Squeezer2.weight'].
Loading Lora: Loraeyes_V1
Loading adapter weights from None led to unexpected keys not found in the model:  ['text_model.encoder.layers.10.mlp.fc1.lora_A.Loraeyes_V1.weight', 'text_model.encoder.layers.10.mlp.fc1.lora_B.Loraeyes_V1.weight', 'text_model.encoder.layers.10.mlp.fc2.lora_A.Loraeyes_V1.weight', 'text_model.encoder.layers.10.mlp.fc2.lora_B.Loraeyes_V1.weight', 'text_model.encoder.layers.10.self_attn.k_proj.lora_A.Loraeyes_V1.weight', 'text_model.encoder.layers.10.self_attn.k_proj.lora_B.Loraeyes_V1.weight', 'text_model.encoder.layers.10.self_attn.out_proj.lora_A.Loraeyes_V1.weight', 'text_model.encoder.layers.10.self_attn.out_proj.lora_B.Loraeyes_V1.weight', 'text_model.encoder.layers.10.self_attn.q_proj.lora_A.Loraeyes_V1.weight', 'text_model.encoder.layers.10.self_attn.q_proj.lora_B.Loraeyes_V1.weight', 'text_model.encoder.layers.10.self_attn.v_proj.lora_A.Loraeyes_V1.weight', 'text_model.encoder.layers.10.self_attn.v_proj.lora_B.Loraeyes_V1.weight', 'text_model.encoder.layers.11.mlp.fc1.lora_A.Loraeyes_V1.weight', 'text_model.encoder.layers.11.mlp.fc1.lora_B.Loraeyes_V1.weight', 'text_model.encoder.layers.11.mlp.fc2.lora_A.Loraeyes_V1.weight', 'text_model.encoder.layers.11.mlp.fc2.lora_B.Loraeyes_V1.weight', 'text_model.encoder.layers.11.self_attn.k_proj.lora_A.Loraeyes_V1.weight', 'text_model.encoder.layers.11.self_attn.k_proj.lora_B.Loraeyes_V1.weight', 'text_model.encoder.layers.11.self_attn.out_proj.lora_A.Loraeyes_V1.weight', 'text_model.encoder.layers.11.self_attn.out_proj.lora_B.Loraeyes_V1.weight', 'text_model.encoder.layers.11.self_attn.q_proj.lora_A.Loraeyes_V1.weight', 'text_model.encoder.layers.11.self_attn.q_proj.lora_B.Loraeyes_V1.weight', 'text_model.encoder.layers.11.self_attn.v_proj.lora_A.Loraeyes_V1.weight', 'text_model.encoder.layers.11.self_attn.v_proj.lora_B.Loraeyes_V1.weight'].
Loading Lora: add_detail
Loading adapter weights from None led to unexpected keys not found in the model:  ['text_model.encoder.layers.10.mlp.fc1.lora_A.add_detail.weight', 'text_model.encoder.layers.10.mlp.fc1.lora_B.add_detail.weight', 'text_model.encoder.layers.10.mlp.fc2.lora_A.add_detail.weight', 'text_model.encoder.layers.10.mlp.fc2.lora_B.add_detail.weight', 'text_model.encoder.layers.10.self_attn.k_proj.lora_A.add_detail.weight', 'text_model.encoder.layers.10.self_attn.k_proj.lora_B.add_detail.weight', 'text_model.encoder.layers.10.self_attn.out_proj.lora_A.add_detail.weight', 'text_model.encoder.layers.10.self_attn.out_proj.lora_B.add_detail.weight', 'text_model.encoder.layers.10.self_attn.q_proj.lora_A.add_detail.weight', 'text_model.encoder.layers.10.self_attn.q_proj.lora_B.add_detail.weight', 'text_model.encoder.layers.10.self_attn.v_proj.lora_A.add_detail.weight', 'text_model.encoder.layers.10.self_attn.v_proj.lora_B.add_detail.weight', 'text_model.encoder.layers.11.mlp.fc1.lora_A.add_detail.weight', 'text_model.encoder.layers.11.mlp.fc1.lora_B.add_detail.weight', 'text_model.encoder.layers.11.mlp.fc2.lora_A.add_detail.weight', 'text_model.encoder.layers.11.mlp.fc2.lora_B.add_detail.weight', 'text_model.encoder.layers.11.self_attn.k_proj.lora_A.add_detail.weight', 'text_model.encoder.layers.11.self_attn.k_proj.lora_B.add_detail.weight', 'text_model.encoder.layers.11.self_attn.out_proj.lora_A.add_detail.weight', 'text_model.encoder.layers.11.self_attn.out_proj.lora_B.add_detail.weight', 'text_model.encoder.layers.11.self_attn.q_proj.lora_A.add_detail.weight', 'text_model.encoder.layers.11.self_attn.q_proj.lora_B.add_detail.weight', 'text_model.encoder.layers.11.self_attn.v_proj.lora_A.add_detail.weight', 'text_model.encoder.layers.11.self_attn.v_proj.lora_B.add_detail.weight'].
Loading Lora: n15g_aio_hairstyles-1.0
Loading adapter weights from None led to unexpected keys not found in the model:  ['text_model.encoder.layers.10.mlp.fc1.lora_A.n15g_aio_hairstyles-10.weight', 'text_model.encoder.layers.10.mlp.fc1.lora_B.n15g_aio_hairstyles-10.weight', 'text_model.encoder.layers.10.mlp.fc2.lora_A.n15g_aio_hairstyles-10.weight', 'text_model.encoder.layers.10.mlp.fc2.lora_B.n15g_aio_hairstyles-10.weight', 'text_model.encoder.layers.10.self_attn.k_proj.lora_A.n15g_aio_hairstyles-10.weight', 'text_model.encoder.layers.10.self_attn.k_proj.lora_B.n15g_aio_hairstyles-10.weight', 'text_model.encoder.layers.10.self_attn.out_proj.lora_A.n15g_aio_hairstyles-10.weight', 'text_model.encoder.layers.10.self_attn.out_proj.lora_B.n15g_aio_hairstyles-10.weight', 'text_model.encoder.layers.10.self_attn.q_proj.lora_A.n15g_aio_hairstyles-10.weight', 'text_model.encoder.layers.10.self_attn.q_proj.lora_B.n15g_aio_hairstyles-10.weight', 'text_model.encoder.layers.10.self_attn.v_proj.lora_A.n15g_aio_hairstyles-10.weight', 'text_model.encoder.layers.10.self_attn.v_proj.lora_B.n15g_aio_hairstyles-10.weight', 'text_model.encoder.layers.11.mlp.fc1.lora_A.n15g_aio_hairstyles-10.weight', 'text_model.encoder.layers.11.mlp.fc1.lora_B.n15g_aio_hairstyles-10.weight', 'text_model.encoder.layers.11.mlp.fc2.lora_A.n15g_aio_hairstyles-10.weight', 'text_model.encoder.layers.11.mlp.fc2.lora_B.n15g_aio_hairstyles-10.weight', 'text_model.encoder.layers.11.self_attn.k_proj.lora_A.n15g_aio_hairstyles-10.weight', 'text_model.encoder.layers.11.self_attn.k_proj.lora_B.n15g_aio_hairstyles-10.weight', 'text_model.encoder.layers.11.self_attn.out_proj.lora_A.n15g_aio_hairstyles-10.weight', 'text_model.encoder.layers.11.self_attn.out_proj.lora_B.n15g_aio_hairstyles-10.weight', 'text_model.encoder.layers.11.self_attn.q_proj.lora_A.n15g_aio_hairstyles-10.weight', 'text_model.encoder.layers.11.self_attn.q_proj.lora_B.n15g_aio_hairstyles-10.weight', 'text_model.encoder.layers.11.self_attn.v_proj.lora_A.n15g_aio_hairstyles-10.weight', 'text_model.encoder.layers.11.self_attn.v_proj.lora_B.n15g_aio_hairstyles-10.weight'].
Loading Lora: BlackStocking2
Loading adapter weights from None led to unexpected keys not found in the model:  ['text_model.encoder.layers.10.mlp.fc1.lora_A.BlackStocking2.weight', 'text_model.encoder.layers.10.mlp.fc1.lora_B.BlackStocking2.weight', 'text_model.encoder.layers.10.mlp.fc2.lora_A.BlackStocking2.weight', 'text_model.encoder.layers.10.mlp.fc2.lora_B.BlackStocking2.weight', 'text_model.encoder.layers.10.self_attn.k_proj.lora_A.BlackStocking2.weight', 'text_model.encoder.layers.10.self_attn.k_proj.lora_B.BlackStocking2.weight', 'text_model.encoder.layers.10.self_attn.out_proj.lora_A.BlackStocking2.weight', 'text_model.encoder.layers.10.self_attn.out_proj.lora_B.BlackStocking2.weight', 'text_model.encoder.layers.10.self_attn.q_proj.lora_A.BlackStocking2.weight', 'text_model.encoder.layers.10.self_attn.q_proj.lora_B.BlackStocking2.weight', 'text_model.encoder.layers.10.self_attn.v_proj.lora_A.BlackStocking2.weight', 'text_model.encoder.layers.10.self_attn.v_proj.lora_B.BlackStocking2.weight', 'text_model.encoder.layers.11.mlp.fc1.lora_A.BlackStocking2.weight', 'text_model.encoder.layers.11.mlp.fc1.lora_B.BlackStocking2.weight', 'text_model.encoder.layers.11.mlp.fc2.lora_A.BlackStocking2.weight', 'text_model.encoder.layers.11.mlp.fc2.lora_B.BlackStocking2.weight', 'text_model.encoder.layers.11.self_attn.k_proj.lora_A.BlackStocking2.weight', 'text_model.encoder.layers.11.self_attn.k_proj.lora_B.BlackStocking2.weight', 'text_model.encoder.layers.11.self_attn.out_proj.lora_A.BlackStocking2.weight', 'text_model.encoder.layers.11.self_attn.out_proj.lora_B.BlackStocking2.weight', 'text_model.encoder.layers.11.self_attn.q_proj.lora_A.BlackStocking2.weight', 'text_model.encoder.layers.11.self_attn.q_proj.lora_B.BlackStocking2.weight', 'text_model.encoder.layers.11.self_attn.v_proj.lora_A.BlackStocking2.weight', 'text_model.encoder.layers.11.self_attn.v_proj.lora_B.BlackStocking2.weight'].
D:\diffusion-ai\.venv\lib\site-packages\peft\tuners\lora\layer.py:517: UserWarning: Already unmerged. Nothing to do.
  warnings.warn("Already unmerged. Nothing to do.")
D:\diffusion-ai\.venv\lib\site-packages\peft\tuners\lora\layer.py:256: UserWarning: Already unmerged. Nothing to do.
  warnings.warn("Already unmerged. Nothing to do.")
device cuda:1

Creating Shibiko image...
100%|████████████████████████████████████████████████████████████████████████████████████████████████| 43/43 [00:03<00:00, 12.49it/s]

0: 640x448 1 face, 87.5ms
Speed: 5.0ms preprocess, 87.5ms inference, 53.5ms postprocess per image at shape (1, 3, 640, 448)
100%|████████████████████████████████████████████████████████████████████████████████████████████████| 17/17 [00:01<00:00, 12.11it/s]
Applying LUT...
Waifu2x Upscaling image with method:noise noise_level:3 scale:1x...
profile=False
Saved image to D:/diffusion/outputs/2024-04-14_07-19-55.png
Saved preview.
Generation Timestamp or Error: 2024-04-14_07-19-55
Sending created response

System Info

Windows 10, diffusers 0.27.2, RTX 4090

Who can help?

No response

sayakpaul commented 7 months ago

Please provide an end-to-end and minimal reproducible code snippet.

JemiloII commented 7 months ago

My code is split across many files. I could always add you to the repo.

On another note, I decided to dig into things a bit more. I normally use the diffusers format for the main model, but I tried swapping it out for the single-file safetensors version with the same contents. The safetensors checkpoint loads as expected and doesn't conflict with the LoRAs. I'm a little too tired to dive into why this is the case, but I always wanted to use the safetensors format anyway, so I guess this works out.
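
For reference, the single-file load looks roughly like this (a sketch with a placeholder path, using the from_single_file loader diffusers provides for single-checkpoint files):

import torch
from diffusers import StableDiffusionPipeline

# Placeholder path: the same AOM3 checkpoint as one .safetensors file.
pipe = StableDiffusionPipeline.from_single_file(
    "./assets/models/AOM3.safetensors",
    torch_dtype=torch.float16,
).to("cuda")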

sayakpaul commented 7 months ago

Cool. I will close this issue then. If and when you have a minimal and fully reproducible code snippet, feel free to re-open the issue and we will take it from there.