aigc-apps / sd-webui-EasyPhoto

📷 EasyPhoto | Your Smart AI Photo Generator.

Specific model can't be used #294

Open drphero opened 10 months ago

drphero commented 10 months ago

I have had great success using your extension to train faces with several models, but there is one that I can't seem to use for training. It can be found here: https://civitai.com/models/142364?modelVersionId=157856. I tried converting it to fp16, but that didn't make a difference.

The error is as follows:

Traceback (most recent call last):
  File "C:\sd.webui\webui\extensions\sd-webui-EasyPhoto\scripts\train_kohya\train_lora.py", line 1478, in <module>
    main()
  File "C:\sd.webui\webui\extensions\sd-webui-EasyPhoto\scripts\train_kohya\utils\gpu_info.py", line 195, in wrapper
    result = func(*args, **kwargs)
  File "C:\sd.webui\webui\extensions\sd-webui-EasyPhoto\scripts\train_kohya\train_lora.py", line 861, in main
    text_encoder, vae, unet = load_models_from_stable_diffusion_checkpoint(False, args.pretrained_model_ckpt)
  File "C:\sd.webui\webui\extensions\sd-webui-EasyPhoto\scripts\train_kohya\utils\model_utils.py", line 842, in load_models_from_stable_diffusion_checkpoint
    converted_vae_checkpoint = convert_ldm_vae_checkpoint(state_dict, vae_config)
  File "C:\sd.webui\webui\extensions\sd-webui-EasyPhoto\scripts\train_kohya\utils\model_utils.py", line 370, in convert_ldm_vae_checkpoint
    new_checkpoint["encoder.conv_in.weight"] = vae_state_dict["encoder.conv_in.weight"]
KeyError: 'encoder.conv_in.weight'

Traceback (most recent call last):
  File "runpy.py", line 196, in _run_module_as_main
  File "runpy.py", line 86, in _run_code
  File "C:\sd.webui\system\python\lib\site-packages\accelerate\commands\launch.py", line 989, in <module>
    main()
  File "C:\sd.webui\system\python\lib\site-packages\accelerate\commands\launch.py", line 985, in main
    launch_command(args)
  File "C:\sd.webui\system\python\lib\site-packages\accelerate\commands\launch.py", line 979, in launch_command
    simple_launcher(args)
  File "C:\sd.webui\system\python\lib\site-packages\accelerate\commands\launch.py", line 628, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['C:\sd.webui\system\python\python.exe', 'C:\sd.webui\webui\extensions\sd-webui-EasyPhoto\scripts\train_kohya/train_lora.py', '--pretrained_model_name_or_path=extensions\sd-webui-EasyPhoto\models\stable-diffusion-v1-5', '--pretrained_model_ckpt=models\Stable-diffusion\lxlmodel_v10_fp16.safetensors', '--train_data_dir=outputs\easyphoto-user-id-infos\swah-lxlmodel\processed_images', '--caption_column=text', '--resolution=512', '--random_flip', '--train_batch_size=1', '--gradient_accumulation_steps=4', '--dataloader_num_workers=0', '--max_train_steps=800', '--checkpointing_steps=100', '--learning_rate=0.0001', '--lr_scheduler=constant', '--lr_warmup_steps=0', '--train_text_encoder', '--seed=42', '--rank=128', '--network_alpha=64', '--validation_prompt=easyphoto_face, easyphoto, 1person', '--validation_steps=100', '--output_dir=outputs\easyphoto-user-id-infos\swah-lxlmodel\user_weights', '--logging_dir=outputs\easyphoto-user-id-infos\swah-lxlmodel\user_weights', '--enable_xformers_memory_efficient_attention', '--mixed_precision=fp16', '--template_dir=extensions\sd-webui-EasyPhoto\models\training_templates', '--template_mask', '--merge_best_lora_based_face_id', '--merge_best_lora_name=swah-lxlmodel', '--cache_log_file=C:\sd.webui\webui\outputs/easyphoto-tmp/train_kohya_log.txt', '--validation']' returned non-zero exit status 1.

Error executing the command: Command '['C:\sd.webui\system\python\python.exe', '-m', 'accelerate.commands.launch', '--mixed_precision=fp16', '--main_process_port=3456', 'C:\sd.webui\webui\extensions\sd-webui-EasyPhoto\scripts\train_kohya/train_lora.py', '--pretrained_model_name_or_path=extensions\sd-webui-EasyPhoto\models\stable-diffusion-v1-5', '--pretrained_model_ckpt=models\Stable-diffusion\lxlmodel_v10_fp16.safetensors', '--train_data_dir=outputs\easyphoto-user-id-infos\swah-lxlmodel\processed_images', '--caption_column=text', '--resolution=512', '--random_flip', '--train_batch_size=1', '--gradient_accumulation_steps=4', '--dataloader_num_workers=0', '--max_train_steps=800', '--checkpointing_steps=100', '--learning_rate=0.0001', '--lr_scheduler=constant', '--lr_warmup_steps=0', '--train_text_encoder', '--seed=42', '--rank=128', '--network_alpha=64', '--validation_prompt=easyphoto_face, easyphoto, 1person', '--validation_steps=100', '--output_dir=outputs\easyphoto-user-id-infos\swah-lxlmodel\user_weights', '--logging_dir=outputs\easyphoto-user-id-infos\swah-lxlmodel\user_weights', '--enable_xformers_memory_efficient_attention', '--mixed_precision=fp16', '--template_dir=extensions\sd-webui-EasyPhoto\models\training_templates', '--template_mask', '--merge_best_lora_based_face_id', '--merge_best_lora_name=swah-lxlmodel', '--cache_log_file=C:\sd.webui\webui\outputs/easyphoto-tmp/train_kohya_log.txt', '--validation']' returned non-zero exit status 1.

Using already loaded model lxlmodel_v10_fp16.safetensors [00a41efe34]: done in 1.1s (send model to device: 1.1s)
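
The KeyError is raised in the VAE conversion step, which looks up `vae_state_dict["encoder.conv_in.weight"]`; in the usual LDM checkpoint layout those weights are stored under the `first_stage_model.` prefix. A quick way to check whether this particular checkpoint actually ships those keys is to list them with safetensors. A minimal diagnostic sketch (not EasyPhoto code; the path is the checkpoint from the command above):

```python
# List the checkpoint keys to see whether the VAE weights the converter expects are present.
from safetensors import safe_open

ckpt_path = r"models\Stable-diffusion\lxlmodel_v10_fp16.safetensors"

with safe_open(ckpt_path, framework="pt", device="cpu") as f:
    keys = list(f.keys())

vae_keys = [k for k in keys if k.startswith("first_stage_model.")]
print(f"{len(keys)} keys total, {len(vae_keys)} keys under first_stage_model.")
print("expected key present:",
      "first_stage_model.encoder.conv_in.weight" in keys)
```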

wuziheng commented 10 months ago

Thank you for using the extension and for your efforts. It seems this model cannot be loaded by the 'kohya' training code; some checkpoint keys are not aligned. You could try training it with the standalone 'kohya' scripts to see whether the same issue occurs there. At the same time, we are also upgrading the training code to support newer versions of diffusers, although that may not necessarily resolve your specific problem.

new_checkpoint["encoder.conv_in.weight"] = vae_state_dict["encoder.conv_in.weight"]
KeyError: 'encoder.conv_in.weight'
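
If the key listing above shows that the checkpoint contains no (or non-standard) `first_stage_model.*` weights, one possible workaround, assuming that is the cause, is to bake a known-good SD 1.5 VAE into a copy of the checkpoint and train against the merged file. A hedged sketch, not part of EasyPhoto; both file names are placeholders, and it assumes the VAE file uses the original LDM key names (`encoder.*`, `decoder.*`, `quant_conv.*`, `post_quant_conv.*`):

```python
# Copy the weights of a standard SD 1.5 VAE into the checkpoint under the
# "first_stage_model." prefix, then point training at the merged file.
from safetensors.torch import load_file, save_file

ckpt = load_file(r"models\Stable-diffusion\lxlmodel_v10_fp16.safetensors")
vae = load_file(r"models\VAE\vae-ft-mse-840000-ema-pruned.safetensors")  # any standard SD 1.5 VAE

for k, v in vae.items():
    ckpt["first_stage_model." + k] = v.half()  # keep fp16 to match the checkpoint

save_file(ckpt, r"models\Stable-diffusion\lxlmodel_v10_fp16_vaefix.safetensors")
```

If the merged file then trains normally, the problem was only the missing or renamed VAE keys in that particular Civitai checkpoint.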