Description
Support Qwen2 models. Only docs are updated, since the pipeline requirements are the same as Qwen1.5.
Pipeline Tests
- Full-Finetune
- 2 LoRA
Known Issue
Note: This isn't a bug in LMFlow, but we will add a logger warning ASAP to alert users when they might trigger it.
If you run LoRA (or any other PEFT tuning that uses the `peft` library) first and save the model to a directory, say `A`, and then run another finetuning job that specifies the same `output_dir` `A`, the pipeline will fail to update the model card, since `Qwen2ForCausalLM` doesn't have the attribute `.create_or_update_model_card()`. This does not affect model saving.
Bug logic:
1. Finetuning with `PeftTrainer` and saving produces a model card with `library_name = 'peft'`.
2. A subsequent finetuning run that reuses the same `output_dir` then hits https://github.com/huggingface/transformers/blob/bdf36dcd48106a4a0278ed7f3cc26cd65ab7b066/src/transformers/trainer.py#L4114: `os.path.exists(model_card_filepath)` is `True`, so `is_peft_library` is `True`.
3. The trainer therefore calls `.create_or_update_model_card()`, which `Qwen2ForCausalLM` doesn't have.

We strongly recommend using a different `output_dir` for every finetuning run to avoid unexpected issues like the one above.