lm-sys / FastChat

An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
Apache License 2.0

Fine Tuning trust_remote_code=True #1964

Open Minniesse opened 1 year ago

Minniesse commented 1 year ago

```shell
wandb disabled
torchrun --nproc_per_node=4 --master_port=20001 train.py \
    --model_name_or_path mosaicml/mpt-7b-chat \
    --data_path FastChat/data/dummy_conversation.json \
    --fp16 True \
    --output_dir output_mpt \
    --num_train_epochs 3 \
    --per_device_train_batch_size 2 \
    --per_device_eval_batch_size 2 \
    --gradient_accumulation_steps 1 \
    --evaluation_strategy "steps" \
    --eval_steps 2 \
    --save_strategy "steps" \
    --save_steps 1200 \
    --learning_rate 2e-5 \
    --weight_decay 0. \
    --warmup_ratio 0.03 \
    --lr_scheduler_type "cosine" \
    --logging_steps 1 \
    --tf32 False \
    --model_max_length 2048 \
    --gradient_checkpointing True \
    --lazy_preprocess True
```

```text
ValueError: Loading mosaicml/mpt-7b-chat requires you to execute the configuration file in that repo on your local machine. Make sure you have read the code there to avoid malicious use, then set the option trust_remote_code=True to remove this error.
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 1226441) of binary:
```

How can I fix it? @merrymercy

fozziethebeat commented 1 year ago

I made this change locally a little while ago when trying to do the same thing. I think fastchat/train/train.py needs one of its argument sets expanded, most likely TrainingArguments.

If I get around to it, I'll tidy up a branch I have that fixes this and a few other issues I ran into while fine-tuning.
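In the meantime, a minimal sketch of what that expansion could look like, assuming train.py follows the usual HfArgumentParser dataclass pattern. All field and function names here are illustrative, and the flag is shown on a ModelArguments dataclass rather than TrainingArguments:

```python
# Hypothetical sketch only: expose a trust_remote_code flag through one of
# train.py's argument dataclasses and forward it to from_pretrained.
from dataclasses import dataclass, field
from typing import Optional

import transformers


@dataclass
class ModelArguments:
    model_name_or_path: Optional[str] = field(default="facebook/opt-125m")
    # New flag: allow models like MPT that ship custom modeling code on the Hub.
    trust_remote_code: bool = field(
        default=False,
        metadata={"help": "Execute custom model code from the Hub repo."},
    )


def load_model(model_args: ModelArguments) -> transformers.PreTrainedModel:
    # Forwarding the flag lets transformers run MPT's custom
    # configuration/modeling files instead of raising the ValueError above.
    return transformers.AutoModelForCausalLM.from_pretrained(
        model_args.model_name_or_path,
        trust_remote_code=model_args.trust_remote_code,
    )
```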

aceerez commented 1 year ago

Hi, is there a workaround for this error until it gets fixed? Where exactly should I add trust_remote_code=True?

fozziethebeat commented 1 year ago

This goes in the PreTrainedModel.from_pretrained call somewhere in fastchat/train/train_lora.py. It's a required flag for Hugging Face to load MPT's custom code.
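As a stopgap, the edit is just adding that keyword to the loading calls. A minimal sketch, assuming the usual Auto-class loading (the exact variable names in train_lora.py may differ):

```python
# Minimal sketch of the workaround: pass trust_remote_code=True to the
# from_pretrained call(s). Only set this after reading the model repo's
# code, since it will be executed on your machine.
import transformers

model = transformers.AutoModelForCausalLM.from_pretrained(
    "mosaicml/mpt-7b-chat",
    trust_remote_code=True,  # lets transformers load MPT's custom code
)
tokenizer = transformers.AutoTokenizer.from_pretrained(
    "mosaicml/mpt-7b-chat",
    trust_remote_code=True,
)
```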

winterpi commented 1 year ago

I've updated fschat to the latest 0.2.29, then modified two lines of the installed package's model/model_adapter.py, as sketched below:
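A sketch of what that two-line change presumably looks like: trust_remote_code=True added to both from_pretrained calls in the adapter's load path. The surrounding code and variable names vary between fschat versions and are illustrative here:

```python
# Presumed reconstruction of the two-line change in fastchat/model/model_adapter.py;
# the exact context may differ by version.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "mosaicml/mpt-7b-chat"  # illustrative

tokenizer = AutoTokenizer.from_pretrained(
    model_path,
    trust_remote_code=True,  # added
)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    low_cpu_mem_usage=True,
    trust_remote_code=True,  # added
)
```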

It's OK now.