tatsu-lab/stanford_alpaca

Code and documentation to train Stanford's Alpaca models, and generate the data.
https://crfm.stanford.edu/2023/03/13/alpaca.html
Apache License 2.0

SFT Mistral #317

feiying12343 opened this issue 3 months ago (status: Open)

feiying12343 commented 3 months ago

torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
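
Note that ChildFailedError is torchrun's generic wrapper around any worker crash; the underlying exception is in the failing rank's traceback, printed earlier in the log. One way to keep per-rank output from scrolling away is torchrun's log-capture flags. A minimal sketch, assuming the same launch setup (the ./torchrun_logs path is an arbitrary choice):

# Capture each rank's stdout/stderr to files and also mirror them to the
# console (--tee 3 tees both streams; one log file per rank under --log_dir).
torchrun --nproc_per_node=4 --master_port=8085 \
    --log_dir ./torchrun_logs --tee 3 \
    train.py <same training arguments as in the command below>

The log file for the crashed rank should then contain the real stack trace rather than only the ChildFailedError summary.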

feiying12343 commented 3 months ago

torchrun --nproc_per_node=4 --master_port=8085 train.py \
    --model_name_or_path ../models_hub/Mistral-7B-v0.3 \
    --data_path ./alpaca_data.json \
    --bf16 True \
    --output_dir output \
    --num_train_epochs 3 \
    --per_device_train_batch_size 4 \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 8 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 2000 \
    --save_total_limit 1 \
    --learning_rate 2e-5 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --fsdp "full_shard auto_wrap" \
    --fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \
    --tf32 True
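
A plausible root cause, judging from the command itself: the model being fine-tuned is Mistral-7B-v0.3, but the FSDP auto-wrap policy is told to wrap 'LlamaDecoderLayer', a class that does not occur in Mistral models. When the Trainer cannot resolve that class name inside the model, the workers abort and torchrun surfaces the crash as ChildFailedError. In transformers, Mistral's decoder block is MistralDecoderLayer, so a sketch of the corrected launch (only the wrap class changes):

# Identical to the command above except for the FSDP wrap class, which
# must name Mistral's decoder block rather than LLaMA's.
torchrun --nproc_per_node=4 --master_port=8085 train.py \
    --model_name_or_path ../models_hub/Mistral-7B-v0.3 \
    --data_path ./alpaca_data.json \
    --bf16 True \
    --output_dir output \
    --num_train_epochs 3 \
    --per_device_train_batch_size 4 \
    --per_device_eval_batch_size 4 \
    --gradient_accumulation_steps 8 \
    --evaluation_strategy "no" \
    --save_strategy "steps" \
    --save_steps 2000 \
    --save_total_limit 1 \
    --learning_rate 2e-5 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --fsdp "full_shard auto_wrap" \
    --fsdp_transformer_layer_cls_to_wrap 'MistralDecoderLayer' \
    --tf32 True

An easy way to confirm the class name for a given checkpoint is to load the model and inspect model.model.layers[0].__class__.__name__. If the error persists with the corrected class, the per-rank logs described above should show what is actually failing.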