TheLastBen / fast-stable-diffusion

fast-stable-diffusion + DreamBooth

fast-DreamBooth.ipynb - Fails training the UNet #2845

Open karl0ss opened 1 month ago

karl0ss commented 1 month ago
Training the UNet...
The config attributes {'force_upcast': True} were passed to AutoencoderKL, but are not expected and will be ignored. Please verify your config.json configuration file.
Traceback (most recent call last):
  File "/content/diffusers/examples/dreambooth/train_dreambooth.py", line 803, in <module>
    main()
  File "/content/diffusers/examples/dreambooth/train_dreambooth.py", line 512, in main
    vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae")
  File "/usr/local/lib/python3.10/dist-packages/diffusers/models/modeling_utils.py", line 558, in from_pretrained
    raise ValueError(
ValueError: Cannot load <class 'diffusers.models.autoencoder_kl.AutoencoderKL'> from /content/stable-diffusion-custom because the following keys are missing: 
 decoder.mid_block.attentions.0.query.weight, encoder.mid_block.attentions.0.value.bias, encoder.mid_block.attentions.0.value.weight, encoder.mid_block.attentions.0.query.bias, encoder.mid_block.attentions.0.key.weight, decoder.mid_block.attentions.0.query.bias, decoder.mid_block.attentions.0.key.weight, decoder.mid_block.attentions.0.proj_attn.weight, decoder.mid_block.attentions.0.value.weight, encoder.mid_block.attentions.0.query.weight, encoder.mid_block.attentions.0.key.bias, encoder.mid_block.attentions.0.proj_attn.weight, decoder.mid_block.attentions.0.key.bias, decoder.mid_block.attentions.0.proj_attn.bias, encoder.mid_block.attentions.0.proj_attn.bias, decoder.mid_block.attentions.0.value.bias. 
 Please make sure to pass `low_cpu_mem_usage=False` and `device_map=None` if you want to randomely initialize those weights or else make sure your checkpoint file is correct.
Traceback (most recent call last):
  File "/usr/local/bin/accelerate", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.10/dist-packages/accelerate/commands/accelerate_cli.py", line 43, in main
    args.func(args)
  File "/usr/local/lib/python3.10/dist-packages/accelerate/commands/launch.py", line 837, in launch_command
    simple_launcher(args)
  File "/usr/local/lib/python3.10/dist-packages/accelerate/commands/launch.py", line 354, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/usr/bin/python3', '/content/diffusers/examples/dreambooth/train_dreambooth.py', '--image_captions_filename', '--train_only_unet', '--save_starting_step=500', '--save_n_steps=500', '--Session_dir=/content/gdrive/MyDrive/Fast-Dreambooth/Sessions/hudgellfamily', '--pretrained_model_name_or_path=/content/stable-diffusion-custom', '--instance_data_dir=/content/gdrive/MyDrive/Fast-Dreambooth/Sessions/hudgellfamily/instance_images', '--output_dir=/content/models/hudgellfamily', '--captions_dir=/content/gdrive/MyDrive/Fast-Dreambooth/Sessions/hudgellfamily/captions', '--instance_prompt=', '--seed=669654', '--resolution=512', '--mixed_precision=fp16', '--train_batch_size=1', '--gradient_accumulation_steps=1', '--use_8bit_adam', '--learning_rate=2e-06', '--lr_scheduler=linear', '--lr_warmup_steps=0', '--max_train_steps=1500']' returned non-zero exit status 1.
Something went wrong

Anyone have any ideas?
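For anyone hitting the same thing: the failing call can be reproduced outside the notebook to check whether the downloaded custom model actually contains usable VAE weights. A minimal sketch, with the paths taken from the traceback above:

```python
import os

from diffusers import AutoencoderKL

model_dir = "/content/stable-diffusion-custom"  # path from the traceback

# A diffusers-format model needs a vae/ subfolder containing a config.json
# and a weights file; if it is missing or incomplete, training fails as above.
vae_dir = os.path.join(model_dir, "vae")
print(os.listdir(vae_dir) if os.path.isdir(vae_dir) else "no vae/ subfolder")

# This is the same call train_dreambooth.py makes before training the UNet.
vae = AutoencoderKL.from_pretrained(model_dir, subfolder="vae")
```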

karl0ss commented 1 month ago

Interesting, I have just retried with the built-in model and that now seems to be training.

I wanted to use Realistic_Vision_V5.1_noVAE as the model, so I entered SG161222/Realistic_Vision_V5.1_noVAE in the field earlier; I guess it's not downloading the model correctly, even though it says DONE.

TheLastBen commented 1 month ago

The model you're using has no VAE; a VAE is required for training.

karl0ss commented 1 month ago

Can I supply the suggested VAE myself?

TheLastBen commented 1 month ago

You'll need to create a vae folder inside the custom model folder and put the right files in there; a minimal sketch is below.
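For example, assuming stabilityai/sd-vae-ft-mse (a VAE commonly recommended for Realistic Vision; substitute whichever one the model's author suggests):

```python
from diffusers import AutoencoderKL

# Download a standalone VAE in diffusers format; sd-vae-ft-mse is a
# common choice for SD 1.5 checkpoints that ship without a VAE.
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")

# Write it into the custom model folder as the vae/ subfolder, so the
# from_pretrained(..., subfolder="vae") call in train_dreambooth.py finds it.
vae.save_pretrained("/content/stable-diffusion-custom/vae")
```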

karl0ss commented 1 month ago

Nice one, I will give that a try today. Thank you for all your work, and for your response.