hiyouga / LLaMA-Factory

Unified Efficient Fine-Tuning of 100+ LLMs (ACL 2024)
https://arxiv.org/abs/2403.13372
Apache License 2.0
34.27k stars · 4.22k forks

qwen2vl 7b , how to call api #5416

Closed · xddun closed this issue 2 months ago

xddun commented 2 months ago

System Info

Running in Docker:

- transformers 4.45.0.dev0
- torch 2.4.0
- llamafactory 0.9.0 (installed at /app)

Reproduction

vim examples/inference/sft_xd_seal.yaml

model_name_or_path: /xiedong/Qwen2-VL-7B-Instruct
adapter_name_or_path: output/saves/qwen2_vl-7b/lora/sft_xd
template: qwen2_vl
finetuning_type: lora

run:

llamafactory-cli api examples/inference/sft_xd_seal.yaml

I don't know how to call this API. My fine-tuned dataset looks like this (a seal-recognition task: the system prompt asks the model to read the main text of a red company seal in the image and return it as a JSON string).

[
  {
    "messages": [
      {
        "content": "你是一个擅长识别印章上文字的助手,输出json字符串给用户。",
        "role": "system"
      },
      {
        "content": "<image>识别图片里红色印章上的公司名称或单位名称(印章主文字)。",
        "role": "user"
      },
      {
        "content": "{\"印章主文字\": \"饮酒太原近似收益有限公司\"}",
        "role": "assistant"
      }
    ],
    "images": [
      "/xiedong/yinzhang/save_dst/010155.jpg"
    ]
  }
]

Expected behavior

I would like an example of calling the API that passes in an image together with text and gets a response back from the multimodal model.


xddun commented 2 months ago

The model itself works correctly:

(screenshot omitted)

xddun commented 2 months ago

The docs page served by the API (at <host>:<port>/docs) shows the following request schema, but I still don't know how to pass in an image.

Previously I could experiment with a plain language model by sending text, but this is a multimodal model and I want to pass in an image as well.

ChatGPT suggested using tools, but I could not find a tutorial explaining how to build these tools.

{ "model": "string", "messages": [ { "role": "user", "content": "string", "tool_calls": [ { "id": "string", "type": "function", "function": { "name": "string", "arguments": "string" } } ] } ], "tools": [ { "type": "function", "function": { "name": "string", "description": "string", "parameters": {} } } ], "do_sample": true, "temperature": 0, "top_p": 0, "n": 1, "max_tokens": 0, "stop": "string", "stream": false }

hiyouga commented 2 months ago

For API calls, please refer to the OpenAI documentation; we use the same protocol.

curl https://api.openai.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {
        "role": "user",
        "content": [
          {
            "type": "text",
            "text": "What'\''s in this image?"
          },
          {
            "type": "image_url",
            "image_url": {
              "url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
            }
          }
        ]
      }
    ],
    "max_tokens": 300
  }'
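
For a local LLaMA-Factory server, the same kind of request can be made from Python with the openai client. The sketch below is illustrative rather than an official example: the base URL and port, API key, served model name, and the base64 upload of a local image are assumptions, and the prompts are taken from the fine-tuning data above.

import base64

from openai import OpenAI

# Assumed local deployment; adjust host/port to wherever `llamafactory-cli api` is serving.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="0")

def encode_image(path: str) -> str:
    # Read a local image and return it as a base64 data URL.
    with open(path, "rb") as f:
        data = base64.b64encode(f.read()).decode("utf-8")
    return f"data:image/jpeg;base64,{data}"

messages = [
    # System prompt from the fine-tuning data: an assistant that reads seal text and returns a JSON string.
    {"role": "system", "content": "你是一个擅长识别印章上文字的助手,输出json字符串给用户。"},
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "识别图片里红色印章上的公司名称或单位名称(印章主文字)。"},
            # Image passed as a base64 data URL; a plain http(s) URL should also work.
            {"type": "image_url", "image_url": {"url": encode_image("/xiedong/yinzhang/save_dst/010155.jpg")}},
        ],
    },
]

response = client.chat.completions.create(
    model="qwen2_vl-7b",  # illustrative name; the local server serves the model from the yaml config
    messages=messages,
    max_tokens=300,
)
print(response.choices[0].message.content)

Note that the <image> placeholder used in the training data is not written into the text here; the image goes in as an image_url content part, and the server is expected to handle the placeholder itself.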
HuichiZhou commented 2 weeks ago

> For API calls, please refer to the OpenAI documentation; we use the same protocol. (curl example quoted above)

Could you please give me a Python script that supports Qwen video inference?
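
Not an official answer, but one rough way to approximate video inference through the same OpenAI-style endpoint is to sample frames with OpenCV and send them as multiple image_url parts. Everything below is an assumption for illustration (server address, model name, video path, frame count), and this is multi-image prompting rather than native video support:

import base64

import cv2
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="0")

def sample_frames(video_path: str, num_frames: int = 8) -> list:
    # Uniformly sample frames from the video and return them as base64 data URLs.
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    urls = []
    for i in range(num_frames):
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(i * total / max(num_frames, 1)))
        ok, frame = cap.read()
        if not ok:
            break
        ok, buf = cv2.imencode(".jpg", frame)
        if ok:
            urls.append("data:image/jpeg;base64," + base64.b64encode(buf.tobytes()).decode("utf-8"))
    cap.release()
    return urls

frames = sample_frames("/path/to/video.mp4")  # illustrative path
content = [{"type": "text", "text": "Describe what happens in this video."}]
content += [{"type": "image_url", "image_url": {"url": u}} for u in frames]

response = client.chat.completions.create(
    model="qwen2_vl-7b",  # illustrative name
    messages=[{"role": "user", "content": content}],
    max_tokens=300,
)
print(response.choices[0].message.content)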