xusenlinzy / api-for-open-llm

An OpenAI-style API for open large language models: use LLMs just like ChatGPT! Supports LLaMA, LLaMA-2, BLOOM, Falcon, Baichuan, Qwen, Xverse, SqlCoder, CodeLLaMA, ChatGLM, ChatGLM2, ChatGLM3, etc. A unified backend API for open-source large language models.
Apache License 2.0

Failed to load a QLoRA fine-tuned ChatGLM3 model via ADAPTER_MODEL_PATH #200

Closed Yuanye-F closed 6 months ago

Yuanye-F commented 6 months ago

The following items must be checked before submission

Type of problem

Model inference and deployment

Operating system

Linux

Detailed description of the problem

The .env file is as follows:

# Server port
PORT=8051

# Model name
MODEL_NAME=chatglm3
# Set MODEL_PATH to the directory containing our chatglm3 weights
MODEL_PATH=/Algorithm/LLM/ChatGLM3/weights/chatglm3-6b
ADAPTER_MODEL_PATH=/Algorithm/LLM/LLaMA-Factory/saves/ChatGLM3-6B-Chat/lora/self_cognition
# PROMPT_NAME=chatglm3

# device related
# GPU parallelization strategy
# DEVICE_MAP=auto
# Number of GPUs
NUM_GPUs=1
# GPU index
GPUS='1'

# vllm related
# Enable half precision to speed up inference and reduce GPU memory usage
DTYPE=half

# api related
# API prefix
API_PREFIX=/v1

# API_KEY; any string works here
OPENAI_API_KEY='EMPTY'
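For reference, these settings are plain environment variables; a minimal sketch of how a server might consume them (the `load_settings` helper and its defaults are hypothetical, not the project's actual parsing code):

```python
def load_settings(env):
    # Hypothetical helper mirroring how the .env values above might be
    # consumed; the real project's parsing logic may differ.
    return {
        "port": int(env.get("PORT", "8000")),
        "model_name": env.get("MODEL_NAME", ""),
        "model_path": env.get("MODEL_PATH", ""),
        # An unset or empty ADAPTER_MODEL_PATH means "no adapter to apply".
        "adapter_model_path": env.get("ADAPTER_MODEL_PATH") or None,
        "dtype": env.get("DTYPE", "half"),
    }

settings = load_settings({
    "PORT": "8051",
    "MODEL_NAME": "chatglm3",
    "MODEL_PATH": "/Algorithm/LLM/ChatGLM3/weights/chatglm3-6b",
    "ADAPTER_MODEL_PATH": "/Algorithm/LLM/LLaMA-Factory/saves/"
                          "ChatGLM3-6B-Chat/lora/self_cognition",
})
```

When `adapter_model_path` is set, the loader applies the LoRA/QLoRA weights on top of the base model at MODEL_PATH, which is the code path where the tokenizer error in the log below is raised.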

Dependencies

peft                          0.6.2
sentence-transformers         2.2.2
torch                         2.0.1
torchvision                   0.15.2
transformers                  4.33.2
transformers-stream-generator 0.0.4

Runtime logs or screenshots

Traceback (most recent call last):
  File "/Algorithm/LLM/Baichuan2/api-for-open-llm/server.py", line 2, in <module>
    from api.models import app, EMBEDDED_MODEL, GENERATE_ENGINE
  File "/Algorithm/LLM/Baichuan2/api-for-open-llm/api/models.py", line 142, in <module>
    GENERATE_ENGINE = create_generate_model()
  File "/Algorithm/LLM/Baichuan2/api-for-open-llm/api/models.py", line 48, in create_generate_model
    model, tokenizer = load_model(
  File "/Algorithm/LLM/Baichuan2/api-for-open-llm/api/adapter/model.py", line 316, in load_model
    model, tokenizer = adapter.load_model(
  File "/Algorithm/LLM/Baichuan2/api-for-open-llm/api/adapter/model.py", line 69, in load_model
    tokenizer = self.tokenizer_class.from_pretrained(
  File "/home/zp/.conda/envs/baichuan2/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 723, in from_pretrained
    return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "/home/zp/.conda/envs/baichuan2/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1854, in from_pretrained
    return cls._from_pretrained(
  File "/home/zp/.conda/envs/baichuan2/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2017, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "/home/zp/.cache/huggingface/modules/transformers_modules/self_cognition_gy_train_2023-12-13-10-24-44/tokenization_chatglm.py", line 93, in __init__
    super().__init__(padding_side=padding_side, clean_up_tokenization_spaces=clean_up_tokenization_spaces, **kwargs)
  File "/home/zp/.conda/envs/baichuan2/lib/python3.10/site-packages/transformers/tokenization_utils.py", line 347, in __init__
    super().__init__(**kwargs)
  File "/home/zp/.conda/envs/baichuan2/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 1561, in __init__
    super().__init__(**kwargs)
  File "/home/zp/.conda/envs/baichuan2/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 847, in __init__
    setattr(self, key, value)
AttributeError: can't set attribute 'eos_token'
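For context, this `AttributeError` is Python's standard error for assigning to a property that defines no setter: `tokenization_utils_base` assigns leftover init kwargs with `setattr` (line 847 in the traceback), and a likely mechanism here is that the `eos_token` kwarg forwarded by the checkpoint's bundled `tokenization_chatglm.py` hits an attribute that is exposed as a read-only property in this combination of tokenizer code and transformers version. A minimal self-contained illustration (the `TokenizerLike` class is hypothetical):

```python
class TokenizerLike:
    """Hypothetical stand-in for a tokenizer class hierarchy."""

    def __init__(self, **kwargs):
        # Mirrors transformers' tokenization_utils_base, which assigns
        # leftover kwargs with setattr (line 847 in the traceback above).
        for key, value in kwargs.items():
            setattr(self, key, value)

    @property
    def eos_token(self):
        # Read-only property: no @eos_token.setter is defined, so any
        # assignment to self.eos_token raises AttributeError.
        return "</s>"

try:
    TokenizerLike(eos_token="</s>")
except AttributeError as exc:
    # On Python 3.10 (as in the traceback) the message reads
    # "can't set attribute 'eos_token'".
    print(exc)
```

This is why aligning the transformers version with the tokenizer code shipped alongside the checkpoint (or updating that tokenizer file) resolves the failure.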
xusenlinzy commented 6 months ago

It's probably a transformers version issue.

Yuanye-F commented 6 months ago

https://github.com/hiyouga/LLaMA-Factory/issues/1307