Closed · ymourenya closed this issue 7 months ago
lr=2e-4
lora_rank=8
lora_alpha=32
lora_trainable="q_proj,v_proj,k_proj,o_proj,gate_proj,down_proj,up_proj"
modules_to_save="embed_tokens,lm_head"
lora_dropout=0.05
pretrained_model=./llama2
chinese_tokenizer_path=./llama2
dataset_dir=./data
data_cache=1
per_device_train_batch_size=16
gradient_accumulation_steps=8
block_size=512
output_dir=output_dir
deepspeed_config_file=ds_zero2_no_offload.json

torchrun --nnodes 1 --nproc_per_node 2 run_clm_pt_with_peft.py \
    --deepspeed ${deepspeed_config_file} \
    --model_name_or_path ${pretrained_model} \
    --tokenizer_name_or_path ${chinese_tokenizer_path} \
    --dataset_dir ${dataset_dir} \
    --data_cache_dir ${data_cache} \
    --validation_split_percentage 0.001 \
    --per_device_train_batch_size ${per_device_train_batch_size} \
    --do_train \
    --seed $RANDOM \
    --fp16 \
    --num_train_epochs 1 \
    --lr_scheduler_type cosine \
    --learning_rate ${lr} \
    --warmup_ratio 0.05 \
    --weight_decay 0.01 \
    --logging_strategy steps \
    --logging_steps 10 \
    --save_strategy epoch \
    --save_total_limit 1 \
    --gradient_accumulation_steps ${gradient_accumulation_steps} \
    --preprocessing_num_workers 8 \
    --block_size ${block_size} \
    --output_dir ${output_dir} \
    --overwrite_output_dir \
    --ddp_timeout 30000 \
    --logging_first_step True \
    --lora_rank ${lora_rank} \
    --lora_alpha ${lora_alpha} \
    --trainable ${lora_trainable} \
    --lora_dropout ${lora_dropout} \
    --modules_to_save ${modules_to_save} \
    --torch_dtype float16 \
    --load_in_kbits 16 \
    --save_safetensors True \
    --gradient_checkpointing \
    --ddp_find_unused_parameters False
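For scale, here is a back-of-envelope estimate of how large the adapter from this configuration should be (a minimal sketch assuming stock LLaMA-2-7B shapes; the hidden, intermediate, and vocab sizes below are assumptions and change if the tokenizer extends the vocabulary). It shows the saved adapter should be on the order of hundreds of MiB, so the 48 B file reported below means essentially nothing was written:

# Rough size estimate for the LoRA adapter this script should save.
# Assumed LLaMA-2-7B shapes; adjust vocab if you use an extended tokenizer.
r = 8
hidden, inter, vocab, layers = 4096, 11008, 32000, 32

attn = 4 * r * (hidden + hidden)      # lora_A + lora_B for q/k/v/o_proj
mlp = 3 * r * (hidden + inter)        # gate/up/down_proj (count is symmetric)
lora_params = layers * (attn + mlp)   # ~20M parameters

full_copies = 2 * vocab * hidden      # embed_tokens + lm_head via modules_to_save

total_bytes = (lora_params + full_copies) * 2   # fp16: 2 bytes per parameter
print(f"expected adapter size ~ {total_bytes / 2**20:.0f} MiB")  # ~538 MiB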
Hi, I haven't changed the code anywhere. Is something wrong? Why is the LoRA module produced by pretraining only 48 B?
Hi, could you tell me what is causing this?
Are the weights under pt_lora_model normal?
Same here.
> Why is the LoRA module produced by pretraining only 48 B?

Friend, here is the pitfall: just comment out the following lines in the training script and it will save correctly:
# old_state_dict = model.state_dict
# model.state_dict = (
#     lambda self, *_, **__: get_peft_model_state_dict(self, old_state_dict())
# ).__get__(model, type(model))
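(With recent PEFT versions, save_pretrained already extracts the adapter weights itself; the patched state_dict then appears to filter them a second time and returns an empty dict, which would explain the tiny file.) To confirm the fix worked, a quick check like the following can help (paths and the file name are examples; with --save_safetensors True the adapter is written as adapter_model.safetensors):

# Sanity-check a saved LoRA directory (paths here are hypothetical examples).
import os
from safetensors.torch import load_file

adapter_dir = "output_dir/pt_lora_model"
weights_path = os.path.join(adapter_dir, "adapter_model.safetensors")

print(f"{os.path.getsize(weights_path) / 2**20:.1f} MiB on disk")

state = load_file(weights_path)           # dict of tensor name -> tensor
print(f"{len(state)} tensors saved")      # 0 tensors means the bug is still there
for name in list(state)[:5]:              # expect lora_A / lora_B pairs
    print(name, tuple(state[name].shape))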
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your consideration.
Closing the issue, since no updates observed. Feel free to re-open if you need any further assistance.
Check the following items before submitting
Issue type
Model training and fine-tuning
Base model
Chinese-LLaMA-2-16K (7B/13B)
Operating system
None
Describe the issue in detail
Hi, why is the LoRA produced by pretraining only 48 B? Was it not saved? I used your pretraining code; that size can't be right.
Dependencies (required for code-related issues)
Logs or screenshots