will-wiki opened this issue 1 month ago
@yl-jiang Tried that, same error. And those two IP settings mean the same thing anyway, don't they?
openai_api_base = "http://127.0.0.1:6050/v1"
model="/data/CodeSpace/models/Qwen2-VL-7B-Instruct"
@Uhao-P Changed it to openai_api_base = "http://127.0.0.1:6050/v1", same error. Whether it's localhost, 0.0.0.0, or 127.0.0.1 should make no difference here; the server does receive the requests.
client.chat.completions.create(
model="/data/CodeSpace/models/Qwen2-VL-7B-Instruct")
Isn't this field supposed to take the model name? After changing it, I get the error below:
openai.NotFoundError: Error code: 404 - {'object': 'error', 'message': 'The model /data/CodeSpace/models/Qwen2-VL-7B-Instruct does not exist.', 'type': 'NotFoundError', 'param': None, 'code': 404}
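A 404 like this means the model name in the request does not exactly match what the server registered (by default the path passed to --model, or the alias passed via --served-model-name). One way to see the exact IDs the server accepts, assuming the same base URL as above:

```python
from openai import OpenAI

client = OpenAI(api_key="EMPTY", base_url="http://127.0.0.1:6050/v1")

# Print the model IDs this server actually serves; pass one of these
# verbatim as the model= argument of chat.completions.create().
for m in client.models.list():
    print(m.id)
```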
Hey all - this has been fixed on the main branch of vllm, and we're going to make a release some time in October.
For now the options you have are:
- transformers via pip install git+https://github.com/huggingface/transformers@21fac7abba2a37fae86106f87fcf9974fd1e3830
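A quick way to confirm the pinned commit is actually the one being imported (a sketch; the exact dev version string depends on the commit):

```python
import transformers

# A source install from that commit should report a ".dev" version;
# if this prints a stable release, the pip install did not take effect
# in the environment vLLM runs in.
print(transformers.__version__)
```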
A question: when serving a VLM with vLLM, inference fails with openai.BadRequestError: Error code: 400 - {'object': 'error', 'message': "Cannot connect to host modelscope.oss-cn-beijing.aliyuncs.com:443 ssl:True [SSLCertVerificationError: (1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1129)')]", 'type': 'BadRequestError', 'param': None, 'code': 400}. How should this be configured?
vLLM deployment method:
Request code:
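The SSL failure looks like it happens while the server itself tries to download the image referenced in the request from modelscope's OSS host. One way to sidestep the server-side HTTPS fetch entirely is to embed the image as a base64 data URL in the request; a sketch, assuming a local file test.jpg and the base URL and model path used earlier in this thread:

```python
import base64

from openai import OpenAI

client = OpenAI(api_key="EMPTY", base_url="http://127.0.0.1:6050/v1")

# Encode a local image as a data URL so the server never has to fetch
# anything over HTTPS itself.
with open("test.jpg", "rb") as f:
    b64 = base64.b64encode(f.read()).decode("utf-8")

resp = client.chat.completions.create(
    model="/data/CodeSpace/models/Qwen2-VL-7B-Instruct",
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            {"type": "text", "text": "Describe this image."},
        ],
    }],
)
print(resp.choices[0].message.content)
```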