Describe the bug
With the same configuration, training on an 8 GB GPU works fine, but an error occurs when running the script to fine-tune the text encoder together with the UNet.
The config file is as follows:
compute_environment: LOCAL_MACHINE
deepspeed_config:
  gradient_accumulation_steps: 1
  gradient_clipping: 1.0
  offload_optimizer_device: cpu
  offload_param_device: cpu
  zero3_init_flag: true
  zero_stage: 2
distributed_type: DEEPSPEED
downcast_bf16: 'no'
dynamo_config:
  dynamo_backend: FX2TRT
machine_rank: 0
main_training_function: main
mixed_precision: fp16
num_machines: 1
num_processes: 4
rdzv_backend: static
same_network: true
tpu_env: []
tpu_use_cluster: false
tpu_use_sudo: false
use_cpu: false
The error message is as follows:
You can't use same Accelerator() instance with multiple models when using DeepSpeed
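For context, this error is raised because Accelerate's DeepSpeed integration accepts only a single model per `Accelerator.prepare()` call, and passing `--train_text_encoder` makes the DreamBooth script send both the UNet and the text encoder through `prepare()` together. A minimal sketch of the failing pattern follows; `FakeModel` and `prepare_with_deepspeed` are illustrative stand-ins, not Accelerate's real API:

```python
# Sketch of the single-model check that trips when --train_text_encoder
# is used with DeepSpeed (names are illustrative, not accelerate's API).

class FakeModel:
    """Stand-in for a torch.nn.Module such as the UNet or text encoder."""
    pass

def prepare_with_deepspeed(*args):
    # The DeepSpeed path wraps exactly one model into a DeepSpeed engine,
    # so more than one model in a single prepare() call is rejected.
    models = [a for a in args if isinstance(a, FakeModel)]
    if len(models) > 1:
        raise ValueError(
            "You can't use same `Accelerator()` instance with multiple "
            "models when using DeepSpeed"
        )
    return args

unet, text_encoder = FakeModel(), FakeModel()
prepare_with_deepspeed(unet)                # fine: one model
try:
    prepare_with_deepspeed(unet, text_encoder)  # the --train_text_encoder case
except ValueError as e:
    print(e)
```

Under this reading, fine-tuning the UNet alone works because only one model reaches `prepare()`, which matches the observation that the same config trains fine without `--train_text_encoder`.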
Reproduction
accelerate launch --config_file /mnt/data/huggingface/accelerate/default_config1.yaml train_dreambooth.py --pretrained_model_name_or_path=$MODEL_NAME --train_text_encoder --instance_data_dir=$INSTANCE_DIR --class_data_dir=$CLASS_DIR --output_dir=$OUTPUT_DIR --with_prior_preservation --prior_loss_weight=1.0 --instance_prompt="a photo of sks dog" --class_prompt="a photo of dog" --resolution=512 --train_batch_size=1 --use_8bit_adam --gradient_checkpointing --learning_rate=2e-6 --lr_scheduler="constant" --lr_warmup_steps=0 --num_class_images=200 --max_train_steps=20
Logs
System Info
diffusers version: 0.15.0.dev0