Closed HemalPatil closed 3 months ago
You need to fix this. Refer here: https://github.com/huggingface/diffusers/issues/6552.
Hi @HemalPatil. Did you get to try out the changes from https://github.com/huggingface/diffusers/issues/6552 in the concerned script?
Well, given that I'm a noob in AI and haven't found the time lately, no, I haven't tried it out. Maybe over this weekend.
@HemalPatil, to make it work you need to add --mixed_precision="fp16" to the arguments of your script (train_text_to_image_lora.py). This was suggested in the following comment: https://github.com/huggingface/diffusers/issues/6363#issuecomment-1870761866
Example:
accelerate launch --mixed_precision="fp16" train_text_to_image_lora.py \
--mixed_precision="fp16" \
--pretrained_model_name_or_path=$MODEL_NAME \
--dataset_name=$DATASET_NAME --caption_column="text" \
--resolution=512 --random_flip \
--train_batch_size=1 \
--num_train_epochs=100 --checkpointing_steps=5000 \
--learning_rate=1e-04 --lr_scheduler="constant" --lr_warmup_steps=0 \
--seed=42 \
--output_dir=${OUTPUT_DIR} \
--validation_prompt="a menacing skull with sunglasses." --report_to="wandb"
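For reference, the training script consumes this flag through argparse. Below is a minimal, self-contained sketch of how such a --mixed_precision option is typically defined (the argument names and choices are assumptions mirroring common accelerate precision modes, not copied verbatim from train_text_to_image_lora.py):

```python
import argparse

# Sketch of how a training script might declare the mixed-precision flag.
parser = argparse.ArgumentParser(description="mixed-precision flag sketch")
parser.add_argument(
    "--mixed_precision",
    type=str,
    default=None,
    choices=["no", "fp16", "bf16"],  # common accelerate precision modes (assumption)
    help="Precision mode forwarded to the training loop",
)

# Simulate the command line from the example above.
args = parser.parse_args(["--mixed_precision", "fp16"])
print(args.mixed_precision)  # fp16
```

If the flag is omitted, the script falls back to whatever precision the accelerate configuration specifies, which is why passing it explicitly at launch resolves the dtype mismatch here.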
cc @sayakpaul
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.
@jcRisch Many thanks, this works for me for this issue!
Describe the bug
I tried following the LoRA training example from the Hugging Face tutorials using an editable install. It failed.
Reproduction
Default accelerate config:
Launch script:
Logs
System Info
diffusers version: 0.27.0.dev0
Platform: Linux-6.5.0-25-generic-x86_64-with-glibc2.35
OS: Ubuntu 22.04
Python version: 3.10.12
PyTorch version (GPU?): 2.2.1+cu121 (True)
Huggingface_hub version: 0.21.4
Transformers version: 4.38.2
Accelerate version: 0.28.0
xFormers version: not installed
Using GPU in script?: Nvidia RTX 3060 Laptop 6GB (GA106M)
Using distributed or parallel set-up in script?: No
Who can help?
@sayakpaul