ShivamShrirao / diffusers

🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch
https://huggingface.co/docs/diffusers
Apache License 2.0
1.89k stars 505 forks

Inpainting Dreambooth bug after training #184

Closed cinjon closed 1 year ago

cinjon commented 1 year ago

Describe the bug

I'm getting an error during sampling after training a model.

Here's my command:

!accelerate launch train_inpainting_dreambooth.py \
  --pretrained_model_name_or_path="runwayml/stable-diffusion-inpainting" \
  --pretrained_vae_name_or_path="stabilityai/sd-vae-ft-mse" \
  --output_dir=$OUTPUT_DIR \
  --with_prior_preservation --prior_loss_weight=1.0 \
  --seed=1337 \
  --resolution=512 \
  --train_batch_size=2 \
  --train_text_encoder \
  --learning_rate=1e-6 \
  --mixed_precision="fp16" \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --num_class_images=300 \
  --sample_batch_size=4 \
  --max_train_steps=1200 \
  --n_save_sample=0 \
  --save_infer_steps=35 \
  --not_cache_latents \
  --hflip \
  --concepts_list="concepts_list.json"

After training completes, I get this error when generating samples:

Generating samples:   0% 0/4 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "train_inpainting_dreambooth.py", line 876, in <module>
    main(args)
  File "train_inpainting_dreambooth.py", line 869, in main
    save_weights(global_step)
  File "train_inpainting_dreambooth.py", line 758, in save_weights
    images = pipeline(
  File "/usr/local/lib/python3.8/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py", line 675, in __call__
    mask, masked_image_latents = self.prepare_mask_latents(
  File "/usr/local/lib/python3.8/dist-packages/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_inpaint.py", line 534, in prepare_mask_latents
    masked_image_latents = self.vae.encode(masked_image).latent_dist.sample(generator=generator)
  File "/usr/local/lib/python3.8/dist-packages/diffusers/models/vae.py", line 570, in encode
    h = self.encoder(x)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/diffusers/models/vae.py", line 130, in forward
    sample = self.conv_in(sample)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/conv.py", line 463, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/conv.py", line 459, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Input type (c10::Half) and bias type (float) should be the same
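For context, this `RuntimeError` is PyTorch's standard complaint when a half-precision tensor is fed into a layer whose parameters are still float32 — here, the fp16 masked image hitting the full-precision VAE's first conv (`conv_in`). A minimal sketch of the mismatch and the usual workaround, using a plain `nn.Conv2d` as a stand-in for the VAE encoder (the layer name and shapes are illustrative, not taken from the diffusers code):

```python
import torch
import torch.nn as nn

# Stand-in for the VAE's first convolution: parameters default to float32.
conv = nn.Conv2d(3, 8, kernel_size=3)

# Stand-in for the masked image produced under --mixed_precision="fp16".
x = torch.randn(1, 3, 16, 16, dtype=torch.float16)

try:
    conv(x)  # dtype mismatch: half input vs. float32 weights/bias
except RuntimeError as e:
    print(f"RuntimeError: {e}")

# Usual workaround: cast the input to the module's parameter dtype
# (or, equivalently, cast the module to the input's dtype) before the call.
x_fixed = x.to(next(conv.parameters()).dtype)
out = conv(x_fixed)
print(out.dtype)  # torch.float32
```

In the inpainting pipeline this corresponds to making sure the masked image passed to `self.vae.encode` matches the VAE's parameter dtype, or keeping the VAE itself in fp16.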

Reproduction

No response

Logs

No response

System Info

Colab:

ShivamShrirao commented 1 year ago

Fixed in 68a9bd8427796a86ad7671be309231dee838d434