QwenLM / Qwen-VL

The official repo of Qwen-VL (通义千问-VL) chat & pretrained large vision language model proposed by Alibaba Cloud.

[BUG] After starting the API, how do I construct a request with an image and get the model's output? #393

Open ybshaw opened 4 months ago

ybshaw commented 4 months ago

Is there an existing issue / discussion for this?

Is there an existing answer for this in the FAQ?

Current Behavior

After starting the API service with `python openai_api`:

1. How should I construct a request that sends an image to the target URL?
2. Is the target URL `/v1/chat/completions`? That is the only POST route I can find in the code.

For example, with the Flask framework, once the API is up you can send a request with `requests`:

import requests
data = {
    "text": "描述这张图片",
    "image_context": {
        "image": "/xxx/xxx/example.jpg"
    }
}
response = requests.post("http://127.0.0.1:5000/chat", json=data)
response_json = response.json()
chat_response = response_json.get("response", "")
print(chat_response)

How should the equivalent request be constructed when the server uses the FastAPI framework?

Also: I notice that current large-model servers all seem to use FastAPI as the web framework rather than the traditional Flask. Is there a particular reason for this?
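
For reference, here is a minimal sketch of what such a request might look like, assuming the server exposes an OpenAI-style chat schema at `/v1/chat/completions`, listens on port 8000, and accepts an image reference inside the message content. The model name and the content layout (a list of image/text items) are assumptions and should be checked against the request models defined in openai_api.py:

import requests

# Assumed endpoint started by openai_api; adjust host/port to your launch flags.
url = "http://127.0.0.1:8000/v1/chat/completions"

payload = {
    "model": "qwen-vl-chat",          # model name assumed for illustration
    "messages": [
        {
            "role": "user",
            # One common convention: a list mixing image and text items;
            # verify against the schema in openai_api.py.
            "content": [
                {"image": "/xxx/xxx/example.jpg"},
                {"text": "描述这张图片"},
            ],
        }
    ],
}

resp = requests.post(url, json=payload, timeout=60)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])

If the server instead expects a plain string prompt, the image is often referenced with Qwen-VL's `<img>...</img>` tag convention inside the text; again, the exact format should be confirmed from the source of openai_api.py.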

Expected Behavior

No response

Steps To Reproduce

No response

Environment

- OS:
- Python:
- Transformers:
- PyTorch:
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`):

Anything else?

No response

linzm1007 commented 2 months ago

Has this been solved?