ShivamShrirao / diffusers

🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch
https://huggingface.co/docs/diffusers
Apache License 2.0
1.89k stars 505 forks

Fail at Caching latents #187

Open GreenTeaBD opened 1 year ago

GreenTeaBD commented 1 year ago

Describe the bug

Training fails in the same way on both Ubuntu (WSL) and native Windows: it crashes at the "Caching latents" step.

Reproduction

No response

Logs

/home/ckg/anaconda3/envs/diffusers/lib/python3.10/site-packages/diffusers/configuration_utils.py:195: FutureWarning: It is deprecated to pass a pretrained model name or path to `from_config`.If you were trying to load a scheduler, please use <class 'diffusers.schedulers.scheduling_ddpm.DDPMScheduler'>.from_pretrained(...) instead. Otherwise, please make sure to pass a configuration dictionary instead. This functionality will be removed in v1.0.0.
  deprecate("config-passed-as-path", "1.0.0", deprecation_message, standard_warn=False)
Caching latents:   0%|                                                                              | 0/199 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/home/ckg/github/diffusers/examples/dreambooth/train_dreambooth.py", line 822, in <module>
    main(args)
  File "/home/ckg/github/diffusers/examples/dreambooth/train_dreambooth.py", line 613, in main
    for batch in tqdm(train_dataloader, desc="Caching latents"):
  File "/home/ckg/anaconda3/envs/diffusers/lib/python3.10/site-packages/tqdm/std.py", line 1195, in __iter__
    for obj in iterable:
  File "/home/ckg/anaconda3/envs/diffusers/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 628, in __next__
    data = self._next_data()
  File "/home/ckg/anaconda3/envs/diffusers/lib/python3.10/site-packages/torch/utils/data/dataloader.py", line 671, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "/home/ckg/anaconda3/envs/diffusers/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 58, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/ckg/anaconda3/envs/diffusers/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 58, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/ckg/github/diffusers/examples/dreambooth/train_dreambooth.py", line 322, in __getitem__
    instance_path, instance_prompt = self.instance_images_path[index % self.num_instance_images]
ZeroDivisionError: integer division or modulo by zero
Traceback (most recent call last):
  File "/home/ckg/anaconda3/envs/diffusers/bin/accelerate", line 8, in <module>
    sys.exit(main())
  File "/home/ckg/anaconda3/envs/diffusers/lib/python3.10/site-packages/accelerate/commands/accelerate_cli.py", line 45, in main
    args.func(args)
  File "/home/ckg/anaconda3/envs/diffusers/lib/python3.10/site-packages/accelerate/commands/launch.py", line 1104, in launch_command
    simple_launcher(args)
  File "/home/ckg/anaconda3/envs/diffusers/lib/python3.10/site-packages/accelerate/commands/launch.py", line 567, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/home/ckg/anaconda3/envs/diffusers/bin/python', 'train_dreambooth.py', '--pretrained_model_name_or_path=runwayml/stable-diffusion-v1-5', '--instance_data_dir=training', '--class_data_dir=classes', '--output_dir=model', '--with_prior_preservation', '--prior_loss_weight=1.0', '--instance_prompt=skscody', '--class_prompt=a photo of person', '--seed=1337', '--resolution=512', '--train_batch_size=1', '--gradient_accumulation_steps=1', '--gradient_checkpointing', '--learning_rate=5e-6', '--lr_scheduler=constant', '--lr_warmup_steps=0', '--num_class_images=200', '--sample_batch_size=1', '--max_train_steps=1000']' returned non-zero exit status 1.
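The ZeroDivisionError comes from `index % self.num_instance_images` in `__getitem__`, which means the dataset found zero images under `--instance_data_dir`. A minimal pre-flight check one could run before training (a sketch only; the function name and extension list are my assumptions, not part of the script):

```python
from pathlib import Path

# Assumed set of image extensions a DreamBooth dataset would pick up.
IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".webp", ".bmp"}

def count_instance_images(instance_data_dir: str) -> int:
    """Count image files directly under instance_data_dir."""
    root = Path(instance_data_dir)
    if not root.is_dir():
        return 0
    return sum(1 for p in root.iterdir() if p.suffix.lower() in IMAGE_EXTS)

# With an empty or missing "training" dir this prints 0, which is
# exactly the condition that turns `index % num_instance_images`
# into a modulo-by-zero.
print(count_instance_images("training"))
```

If this prints 0, the path passed to `--instance_data_dir` is wrong or empty.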

System Info

GreenTeaBD commented 1 year ago

Same thing with Ubuntu 22.10 (not WSL): CUDA 11.6, Python 3.10, PyTorch 1.12.1, torchvision 0.13.1, torchaudio 0.12.1, cudatoolkit 11.6, triton, xformers.

Running with:

export MODEL_NAME="CompVis/stable-diffusion-v1-4"
export INSTANCE_DIR="training"
export CLASS_DIR="classes"
export OUTPUT_DIR="model"

accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --class_data_dir=$CLASS_DIR \
  --output_dir=$OUTPUT_DIR \
  --with_prior_preservation --prior_loss_weight=1.0 \
  --instance_prompt="skscody" \
  --class_prompt="a photo of person" \
  --seed=1337 \
  --resolution=512 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=1 --gradient_checkpointing \
  --learning_rate=5e-6 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --num_class_images=200 \
  --sample_batch_size=1 \
  --max_train_steps=1000 \
  --mixed_precision=fp16
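Before launching, it may be worth confirming that the directories the flags point to actually contain files (a hedged sanity check, assuming the paths are relative to the directory you launch from):

```shell
# Count files in each data dir; an empty instance dir is what
# produces the ZeroDivisionError during latent caching.
for d in training classes; do
  if [ -d "$d" ]; then
    echo "$d: $(find "$d" -maxdepth 1 -type f | wc -l) files"
  else
    echo "$d: MISSING"
  fi
done
```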

Edit: Same thing in the Colab, so either I'm breaking something that should be obvious or something's broken.

0xPetra commented 1 year ago

https://github.com/ShivamShrirao/diffusers/issues/159#issuecomment-1344621003

Check that your images are actually being uploaded. I think they have to be 512x512.
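If the images aren't square, a center crop before resizing to 512x512 is a common fix. A sketch of just the crop geometry (the function name is mine; with Pillow you would pass the returned box to `Image.crop` and then `resize((512, 512))`):

```python
def center_crop_box(width: int, height: int) -> tuple:
    """Return a (left, upper, right, lower) square center-crop box,
    e.g. to pass to PIL's Image.crop before resizing to 512x512."""
    side = min(width, height)
    left = (width - side) // 2
    upper = (height - side) // 2
    return (left, upper, left + side, upper + side)

print(center_crop_box(800, 600))  # → (100, 0, 700, 600)
```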