hiyouga / LLaMA-Factory

Unified Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
https://arxiv.org/abs/2403.13372
Apache License 2.0

Error when loading ChatGLM2 after full-parameter fine-tuning #1541

Closed · FoolMark closed this issue 1 year ago

FoolMark commented 1 year ago

[screenshot of the error]

OUTPUT_DIR=/data/ChatGLM2-6B-CMB-v5
CUDA_VISIBLE_DEVICES=0 python \
src/cli_demo.py \
--model_name_or_path $OUTPUT_DIR \
--template $TEMPLATE \
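For reference, a complete invocation might look like the sketch below. It assumes the `chatglm2` template (the template name registered in LLaMA-Factory for ChatGLM2 models) matches the one used during fine-tuning; the original command above is truncated after `--template $TEMPLATE \`, so this is an illustration, not the reporter's exact command:

```shell
# Sketch only: TEMPLATE is unset in the original report; chatglm2 is an
# assumption based on the model family being fine-tuned.
TEMPLATE=chatglm2
OUTPUT_DIR=/data/ChatGLM2-6B-CMB-v5

CUDA_VISIBLE_DEVICES=0 python src/cli_demo.py \
    --model_name_or_path $OUTPUT_DIR \
    --template $TEMPLATE
```

The template passed at inference time must match the one used for training, otherwise the chat markup fed to the model will differ from what it was fine-tuned on.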
hiyouga commented 1 year ago

https://github.com/hiyouga/LLaMA-Factory/issues/1307#issuecomment-1786558186