ShivamShrirao / diffusers

🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch
https://huggingface.co/docs/diffusers
Apache License 2.0

Dreambooth Training not reading instance data #204

Open asahi0130 opened 1 year ago

asahi0130 commented 1 year ago

Describe the bug

I am trying to run the Dreambooth training on my local machine from the command line.

However, the trained weights do not generate images resembling the instance data (pictures of my face). While troubleshooting, I noticed that the sample images come out identical no matter which instance data I provide, which suggests the program is not reading my instance data at all. (I swapped in different instance data and it still generated the same sample images.)

I tried the same command on Google Colab and it worked perfectly fine.

Summary:

Dreambooth training on the local machine (command line) is not reading the instance data images.

Any help would be appreciated.

Thank you

Reproduction

  1. Download this GitHub repo.
  2. Put instance images into ./examples/dreambooth/ALface
  3. Change instance_data_dir to the directory above (./ALface) in the launch script (launch.sh)
  4. Check the sample images after the script runs.
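A quick way to rule out a path problem before launching is to count the image files the training script would actually see in the instance directory. This helper is a sketch I'm adding for illustration, not part of the repo; the path matches the repro steps above:

```python
import os

# Common image extensions a Dreambooth dataset folder would contain.
IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".webp", ".bmp"}

def count_instance_images(instance_dir):
    """Count image files in a folder; 0 if the folder does not exist."""
    if not os.path.isdir(instance_dir):
        return 0
    return sum(
        1
        for name in os.listdir(instance_dir)
        if os.path.splitext(name)[1].lower() in IMAGE_EXTS
    )

# e.g. count_instance_images("./examples/dreambooth/ALface")
```

If this returns 0, the script was pointed at the wrong directory; if it returns the expected count, the images are visible and the problem lies elsewhere (as it turned out to here).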

Logs

No response

System Info

Platform: Ubuntu 22.04
Python Version: 3.10.6
Diffusers: 0.13.0.dev0

EDIT: Additional Info:

The command I run:

export MODEL_NAME="runwayml/stable-diffusion-v1-5"
export OUTPUT_DIR="./ckptLaunch/ALface"

accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --pretrained_vae_name_or_path="stabilityai/sd-vae-ft-mse" \
  --output_dir=$OUTPUT_DIR \
  --revision="fp16" \
  --with_prior_preservation --prior_loss_weight=1.0 \
  --seed=1337 \
  --use_8bit_adam \
  --resolution=512 \
  --train_batch_size=1 \
  --train_text_encoder \
  --mixed_precision="fp16" \
  --gradient_accumulation_steps=1 \
  --learning_rate=1e-6 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --num_class_images=50 \
  --sample_batch_size=4 \
  --max_train_steps=800 \
  --save_interval=200 \
  --save_sample_prompt="ALface person" \
  --concepts_list="concepts_list.json"
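Note that the command passes --concepts_list="concepts_list.json", which in this fork supplies the instance/class settings instead of a bare --instance_data_dir flag. A hedged sketch of generating such a file follows; the prompts and class_data_dir are illustrative assumptions (only instance_data_dir is taken from the repro steps), not the reporter's actual file:

```python
import json

# Illustrative concepts_list entry for this fork's train_dreambooth.py.
# The prompts and class_data_dir are assumptions for the example;
# instance_data_dir matches the reproduction steps above.
concepts_list = [
    {
        "instance_prompt": "photo of ALface person",
        "class_prompt": "photo of a person",
        "instance_data_dir": "./examples/dreambooth/ALface",
        "class_data_dir": "./class_images/person",
    }
]

with open("concepts_list.json", "w") as f:
    json.dump(concepts_list, f, indent=4)
```

When a concepts list is used, editing instance_data_dir only in launch.sh would have no effect, so it is worth double-checking the JSON points at the right folder.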

Things tried:

Prompt when generating image: "Photo of ALface person"

Google Colab result: [images]

Local result: [images]

RELATED: https://github.com/ShivamShrirao/diffusers/issues/194#issue-1552307523

adammenges commented 1 year ago

Just ran into this myself today, too

The sample images from this: https://github.com/ShivamShrirao/diffusers/blob/main/examples/dreambooth/DreamBooth_Stable_Diffusion.ipynb

Look like this:

[image]

Instead of:

[image: alvan-nee-9M0tSjb-cpA-unsplash]

asahi0130 commented 1 year ago

FIXED: Installing xformers 0.0.17.dev442 worked.
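Since specific xformers dev builds come and go (later in the thread, dev442 had already disappeared), a quick check of whether the installed xformers is at least the build reported to fix this can be sketched as follows. The helper and its simplified version parsing are illustrative, not part of diffusers or xformers (it treats "0.0.17.dev442" as newer than "0.0.17", unlike real PEP 440 ordering, which is fine for comparing the dev builds discussed here):

```python
import re
from importlib.metadata import version, PackageNotFoundError

def version_key(v):
    """Turn a version string like '0.0.17.dev447' into a comparable tuple.

    Simplified parsing: dev pieces sort before plain numeric pieces of the
    same position, which is enough to order the dev builds in this thread.
    """
    parts = []
    for piece in re.split(r"[.\-]", v):
        m = re.match(r"([a-z]*)(\d+)", piece)
        if m:
            label, num = m.groups()
            parts.append((0 if label == "dev" else 1, int(num)))
    return tuple(parts)

# Build the reporter says resolved the bug.
MIN_FIXED = "0.0.17.dev442"

def xformers_new_enough():
    """True if an installed xformers is at least MIN_FIXED, else False."""
    try:
        return version_key(version("xformers")) >= version_key(MIN_FIXED)
    except PackageNotFoundError:
        return False
```

Installing a pre-release build generally needs pip's --pre flag (pip install -U --pre xformers), though which dev versions resolve depends on what is published at the time.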

adammenges commented 1 year ago

Oh that's great. Where did you find 0.0.17.dev442? I don't see it in their repo or in pip.

Ah, got it installed. :)

cantrell commented 1 year ago

This worked for me, too. I could train on the 768 model, but not 512. Updating xformers fixed the problem for me.

meanna commented 1 year ago

I have the same problem and xformers==0.0.17.dev442 is not available anymore.

Update: xformers==0.0.17.dev447 works for me.