hiyouga / LLaMA-Factory

Unified Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
https://arxiv.org/abs/2403.13372
Apache License 2.0

After deploying a reward model with vLLM, how do I get the returned score from a request? #4834

Closed yaopanyaopan closed 4 months ago

yaopanyaopan commented 4 months ago

I deployed a reward model trained from Qwen2 with: python3.9 -m vllm.entrypoints.openai.api_server --dtype auto --api-key $api_key --model $model_name_or_path --tensor-parallel-size 8. When I call the endpoint, the response contains generated text. What parameters do I need to set so that the request returns only the reward score?

hiyouga commented 4 months ago

vLLM does not support reward models.

fuqiang-benz commented 3 months ago

vLLM does not support reward models.

Hi, I trained with the RM stage on the official dpo_zh_demo dataset. How do I get the trained model to output a score at inference time, and what should the inference input look like? Could you give an example? Many thanks!

hiyouga commented 3 months ago

llamafactory-cli api --model_name_or_path xxx --template xx --stage rm
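For example, with placeholder values (the model path and template name below are illustrative, substitute your own):

llamafactory-cli api --model_name_or_path ./my-qwen2-rm --template qwen --stage rm

This starts an OpenAI-style API server; per the later comments in this thread, the reward score is served by the /v1/score/evaluation route visible under /docs.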

fuqiang-benz commented 3 months ago

llamafactory-cli api --model_name_or_path xxx --template xx --stage rm

Hi, I brought the API up with this command. Below is the startup log:

07/18/2024 01:33:01 - INFO - llamafactory.model.model_utils.attention - Using vanilla attention implementation.
07/18/2024 01:33:01 - INFO - llamafactory.model.model_utils.valuehead - Provided path (/home/LLM/llm_models/Qwen-14B-Chat) does not contain value head weights: /home/LLM/llm_models/Qwen-14B-Chat does not appear to have a file named value_head.bin. Checkout 'https://huggingface.co//home/LLM/llm_models/Qwen-14B-Chat/tree/None' for available files..
07/18/2024 01:33:01 - INFO - llamafactory.model.model_utils.valuehead - Ignore the above message if you are not resuming the training of a value head model.
07/18/2024 01:33:02 - INFO - llamafactory.model.loader - all params: 14,167,296,001
Visit http://localhost:8121/docs for API document.
INFO:     Started server process [3262387]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:8121 (Press CTRL+C to quit)

What is going on here? Could you please help take a look? Below is the error from calling the endpoint:

    raise self._make_status_error_from_response(err.response) from None
openai.APIStatusError: Error code: 405 - {'detail': 'Not allowed'}

Here is the calling code:

import os

from openai import OpenAI
from transformers.utils.versions import require_version

require_version("openai>=1.5.0", "To fix: pip install openai>=1.5.0")

if __name__ == "__main__":
    # change to your custom port
    url = "http://xxxx:{}/v1".format(os.environ.get("API_PORT", 8761))
    print("url", url)
    port = 8761
    client = OpenAI(
        api_key="0",
        base_url="http://xxxx:{}/v1/score/evaluation".format(os.environ.get("API_PORT", 8761)),
    )
    messages = []
    messages.append({"role": "user", "content": "hello, where is USA"})
    result = client.chat.completions.create(messages=messages, model="test")
    print(result.choices[0].message)
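
One likely cause of the 405: the OpenAI client appends the route itself (e.g. /chat/completions) to base_url, so pointing base_url at /v1/score/evaluation sends the request to a path the server never registered. Below is a minimal sketch that POSTs to the score endpoint directly with requests instead; the "messages"-as-plain-strings format follows the schema shown on the server's /docs page, while the exact response shape (a "scores" list) is an assumption to verify against /docs.

import requests

# Sketch: call LLaMA-Factory's score endpoint directly instead of going
# through the OpenAI client. Host and port are placeholders.
API_URL = "http://localhost:8761/v1/score/evaluation"

payload = {
    "model": "test",
    "messages": ["hello, where is USA"],  # plain strings, not role/content dicts
}
resp = requests.post(API_URL, json=payload, headers={"Authorization": "Bearer 0"})
resp.raise_for_status()
print(resp.json())  # assumed to contain the reward scores, e.g. {"scores": [...]}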
fuqiang-benz commented 3 months ago

@hiyouga Hi, could you help take a look at this?

rover5056 commented 2 weeks ago

@hiyouga Hi, could you help take a look at this?

Did you ever figure this out? I'm running into the same problem: I can start the server, but I don't know how to make the request...

hiyouga commented 2 weeks ago

@rover5056 see http://localhost:8000/docs

rover5056 commented 2 weeks ago

@rover5056 see http://localhost:8000/docs

@hiyouga Hi, I start the server with this command and the endpoints respond normally: API_PORT=8000 CUDA_VISIBLE_DEVICES=0 llamafactory-cli api examples/inference/qwen2_vl.yaml --stage rm

But here is a very confusing point: the documentation under /docs says each entry of messages must be a string, but what exactly should that string look like? A JSON dump of a chat.completions-style message?

[screenshot of the /docs page showing the request schema for the score endpoint]

Especially for a VL model: should the image be passed in as base64, or in some other format? I've tried for a long time and keep getting 422 errors. Thanks a lot in advance for your reply!! Previously, with chat_completion, I sent a list of dicts like this:

response = client.chat.completions.create(
    model=model_name,
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "请用详细准确的描述这张图片"},
            {
                "type": "image_url",
                "image_url": {"url": f"data:image/jpeg;base64,{encoded_image}"},
            },
        ],
    }],
)
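
A note for anyone landing here: judging from the /docs schema above, the score endpoint takes plain strings rather than chat-style role/content dicts, which by itself would explain a 422 when a list of dicts is sent. A text-only guess at a valid request follows; whether and how images can be passed to this endpoint is not documented in this thread, so the sketch deliberately omits them.

import requests

# Text-only guess: each entry of "messages" is one plain string per sample,
# not a chat-style dict; sending dicts may be what triggers the 422.
payload = {"model": "qwen2_vl", "messages": ["请用详细准确的描述这张图片"]}
resp = requests.post("http://localhost:8000/v1/score/evaluation", json=payload)
print(resp.status_code, resp.json())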