Rkkss closed this issue 1 year ago
Make sure you have updated everything. If you actually created a LoCon, make sure to install the extension related to it. That is all I can suggest, because I don't have more info.
It's not a LoCon. I ran update.bat and reinstalled the packages from requirements.txt again, but it didn't fix it.
Here is my JSON config:
{
  "base_model": "D:/Stable_difussion/stable-diffusion-webui/models/Stable-diffusion/chilloutmix_Ni.safetensors",
  "img_folder": "D:/Stable_difussion/Grabber/MPro",
  "output_folder": "D:/Stable_difussion/Grabber/Mtest",
  "save_json_folder": "D:/Stable_difussion/json1",
  "save_json_name": null,
  "load_json_path": null,
  "multi_run_folder": null,
  "reg_img_folder": null,
  "sample_prompts": null,
  "change_output_name": null,
  "json_load_skip_list": null,
  "training_comment": null,
  "save_json_only": false,
  "tag_occurrence_txt_file": true,
  "sort_tag_occurrence_alphabetically": false,
  "optimizer_type": "AdamW8bit",
  "optimizer_args": { "weight_decay": "0.1", "betas": "0.9,0.99" },
  "scheduler": "cosine_with_restarts",
  "cosine_restarts": 1,
  "scheduler_power": 1,
  "learning_rate": 8e-05,
  "unet_lr": 8e-05,
  "text_encoder_lr": 1.5e-05,
  "warmup_lr_ratio": null,
  "unet_only": false,
  "net_dim": 128,
  "alpha": 64.0,
  "train_resolution": 512,
  "batch_size": 2,
  "clip_skip": 1,
  "test_seed": 23,
  "mixed_precision": "fp16",
  "save_precision": "fp16",
  "lyco": false,
  "network_args": null,
  "num_epochs": 5,
  "save_every_n_epochs": null,
  "save_n_epoch_ratio": null,
  "save_last_n_epochs": null,
  "max_steps": null,
  "sample_sampler": null,
  "sample_every_n_steps": null,
  "sample_every_n_epochs": null,
  "buckets": true,
  "min_bucket_resolution": 320,
  "max_bucket_resolution": 960,
  "bucket_reso_steps": null,
  "bucket_no_upscale": false,
  "shuffle_captions": true,
  "keep_tokens": 1,
  "xformers": true,
  "cache_latents": true,
  "flip_aug": false,
  "v2": false,
  "v_parameterization": false,
  "gradient_checkpointing": false,
  "gradient_acc_steps": null,
  "noise_offset": null,
  "mem_eff_attn": false,
  "lora_model_for_resume": null,
  "save_state": false,
  "resume": null,
  "text_only": false,
  "vae": null,
  "log_dir": null,
  "log_prefix": null,
  "tokenizer_cache_dir": null,
  "dataset_config": null,
  "lowram": false,
  "no_meta": false,
  "color_aug": false,
  "random_crop": false,
  "use_8bit_adam": false,
  "use_lion": false,
  "caption_dropout_rate": null,
  "caption_dropout_every_n_epochs": null,
  "caption_tag_dropout_rate": null,
  "prior_loss_weight": 1,
  "max_grad_norm": 1,
  "save_as": "safetensors",
  "caption_extension": ".txt",
  "max_clip_token_length": 150,
  "save_last_n_epochs_state": null,
  "num_workers": 8,
  "persistent_workers": true,
  "face_crop_aug_range": null,
  "network_module": "sd_scripts.networks.lora",
  "locon_dim": null,
  "locon_alpha": null,
  "locon": false,
  "list_of_json_to_run": null
}
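As a quick sanity check on a config like the one above, the parsed JSON can be inspected before a run. This is only a minimal sketch, not part of the trainer; the field names come from the config shown above, and the alpha-versus-dim check is just a common convention, not a hard rule:

```python
import json

def summarize_config(cfg):
    """Report the key hyperparameters from a parsed training config dict."""
    summary = {k: cfg.get(k) for k in (
        "net_dim", "alpha", "unet_lr", "text_encoder_lr",
        "locon", "network_module",
    )}
    warnings = []
    # Conventionally alpha <= net_dim (64 vs 128 in the config above).
    if cfg.get("alpha") and cfg.get("net_dim") and cfg["alpha"] > cfg["net_dim"]:
        warnings.append("alpha is larger than net_dim")
    if cfg.get("locon") and cfg.get("network_module", "").endswith(".lora"):
        warnings.append("locon enabled but network_module is plain lora")
    return summary, warnings

# Example with a trimmed-down version of the config above:
cfg = json.loads('{"net_dim": 128, "alpha": 64.0, "locon": false,'
                 ' "network_module": "sd_scripts.networks.lora"}')
summary, warnings = summarize_config(cfg)
print(summary["net_dim"], warnings)  # 128 []
```

With the full config file, the same check would be run on `json.load(open(path))` before starting training.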
I just baked a test LoRA using the exact version I have released, and it loaded fine in both the built-in webui loader and the Additional Networks extension. Please make sure you have updated both the webui and the Additional Networks extension, if you are using it.
Output of Additional Networks showing that it loaded the model properly:
dimension: 8,
alpha: 1.0,
multiplier_unet: 1,
multiplier_tenc: 1
create LoCon for Text Encoder: 72 modules.
create LoCon for U-Net: 228 modules.
original forward/weights is backed up.
enable LoCon for text encoder
enable LoCon for U-Net
shapes for 0 weights are converted.
LoRA model steps_test(f2df5317eaea) loaded: <All keys matched successfully>
setting (or sd model) changed. new networks created.
When I load the LoRA with AddNet, it seems to work just fine, but loading the same LoRA with the webui's built-in loader still gives the error:
LoRA weight_unet: 1, weight_tenc: 1, model: last(0187a36deabf)
dimension: {128}, alpha: {64.0}, multiplier_unet: 1, multiplier_tenc: 1
create LoRA for Text Encoder: 72 modules.
create LoRA for U-Net: 228 modules.
enable LoRA for text encoder
enable LoRA for U-Net
shapes for 0 weights are converted.
LoRA model last(0187a36deabf) loaded: <All keys matched successfully>
setting (or sd model) changed. new networks created.
I don't have this problem with LoRAs that I downloaded, only with self-trained ones.
Anyway, I installed the a1111-sd-webui-locon extension and the problem is fixed now. Weird.
This is probably because kohya has implemented LoCon as a default feature, which the webui doesn't support out of the box; the webui will most likely have to update to follow this change.
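One way to tell the two cases apart is to look at the weight keys inside the saved file: a plain LoRA only carries low-rank modules for attention/linear layers, while a LoCon additionally carries modules for conv layers, which an older built-in loader may not understand. The sketch below is a heuristic only; the key names are illustrative examples of kohya-style naming, and the conv markers are an assumption, not an exact spec:

```python
def looks_like_locon(keys):
    """Heuristic: kohya-style LoCon files add low-rank modules on conv
    layers (e.g. '..._resnets_0_conv1.lora_down.weight'), which a plain
    attention-only LoRA does not have."""
    conv_markers = ("_conv1.", "_conv2.", "_conv_shortcut.", "_conv.")
    return any(m in k for k in keys for m in conv_markers)

# Illustrative key names (not copied from a real file):
plain_lora_keys = [
    "lora_unet_down_blocks_0_attentions_0_proj_in.lora_down.weight",
    "lora_te_text_model_encoder_layers_0_mlp_fc1.lora_up.weight",
]
locon_keys = plain_lora_keys + [
    "lora_unet_down_blocks_0_resnets_0_conv1.lora_down.weight",
]

print(looks_like_locon(plain_lora_keys))  # False
print(looks_like_locon(locon_keys))       # True
```

In practice the key list would come from loading the .safetensors file (e.g. with the safetensors library) and checking its tensor names.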
I've trained 3 LoRAs so far and every single one of them has this error, even with the model that I used to train on.