-
### Your current environment
```text
The output of `python collect_env.py`
```
### 🐛 Describe the bug
python -m vllm.entrypoints.openai.api_server --model /root/autodl-tmp/models/Qwen1.5-14B-Ch…
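For reference, once that server is up it exposes an OpenAI-compatible API; below is a minimal sketch of querying it with the `openai` Python client, assuming the default host/port (localhost:8000) and that the model name matches the `--model` path (truncated above).
```python
# Minimal sketch (not from the original report): querying the vLLM
# OpenAI-compatible server started above. Assumes the default host/port
# (localhost:8000); the model name must match the --model path passed to
# the server, which is truncated in the report, so a placeholder is used.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="EMPTY",  # vLLM accepts any key unless --api-key is configured
)

response = client.chat.completions.create(
    model="<the --model path used when launching the server>",  # placeholder
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```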
-
Hello,
Is there a way to use the Qwen API (https://github.com/QwenLM/Qwen/blob/main/openai_api.py) with the cpp model?
-
If I want to test the qwen model with the API, can I just use the GPTAPI class and replace the model URL with the qwen one?
-
# ===== Conversational AI settings =====
# Bot type to use; currently supported: chatgptapi, glm, gemini, langchain, qwen, doubao, moonshot, yi, llama,
bot: chatgptapi
The reference call from the official site is as follows:
from openai import OpenAI
client = OpenAI(
api_…
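The snippet above is cut off at `api_…`; a possible completion is sketched below, where the base URL, environment-variable name, and model id are assumptions drawn from DashScope's OpenAI-compatible mode rather than from the original post.
```python
# Possible completion of the truncated official example (sketch only).
# The base_url, env var name, and model id below are assumptions, not
# values taken from the original snippet.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("DASHSCOPE_API_KEY"),  # assumed environment variable
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",  # assumed endpoint
)

completion = client.chat.completions.create(
    model="qwen-turbo",  # assumed model identifier
    messages=[{"role": "user", "content": "Hello"}],
)
print(completion.choices[0].message.content)
```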
-
Hi all, this issue will track the feature requests you've made to TensorRT-LLM & provide a place to see what TRT-LLM is currently working on.
Last update: `Jan 14th, 2024`
🚀 = in development
#…
-
### Before submitting your bug report
- [X] I believe this is a bug. I'll try to join the [Continue Discord](https://discord.gg/NWtdYexhMs) for questions
- [X] I'm not able to find an [open issue]…
-
May I ask: are there any plans to support calling the 讯飞 (iFLYTEK) or Kimi APIs in the future?
-
Hi there,
I am attempting to use the qwen2-72b-instruct API for a text continuation task; however, the output lacks punctuation even though I include the directive "you should add necessary p…
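One hedged way to phrase such a request is sketched below: the punctuation directive goes in the system message and the text to continue in the user turn. The endpoint, key variable, and prompt wording are assumptions, not the original poster's setup.
```python
# Sketch only: placing the punctuation directive in the system message.
# The endpoint, env var name, and prompt wording are assumptions.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("DASHSCOPE_API_KEY"),  # assumed environment variable
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",  # assumed endpoint
)

response = client.chat.completions.create(
    model="qwen2-72b-instruct",
    messages=[
        {
            "role": "system",
            "content": (
                "Continue the user's text. Always add the necessary punctuation "
                "(commas, periods, question marks) in the continuation."
            ),
        },
        {"role": "user", "content": "<text to be continued>"},
    ],
)
print(response.choices[0].message.content)
```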
-
More docs:
qwen2-vl: https://github.com/modelscope/ms-swift/blob/main/docs/source/Multi-Modal/qwen2-vl%E6%9C%80%E4%BD%B3%E5%AE%9E%E8%B7%B5.md
qwen1.5: https://github.com/modelscope/ms-swift/blob…
-
I am getting this error:
ValueError: Attempted to load model 'llava_hf', but no model for this name found! Supported model names: llava, qwen_vl, fuyu, batch_gpt4, gpt4v, instructblip, minicpm_v, c…
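A hedged sketch of rerunning with one of the supported names from the error (e.g. `qwen_vl`); the task name and other flags are assumptions based on typical lmms-eval usage, so adjust them to your setup.
```bash
# Sketch: rerun with one of the supported model names listed in the error
# (e.g. qwen_vl). The task name and other flags are assumptions based on
# typical lmms-eval usage; adjust them to your setup.
python -m lmms_eval --model qwen_vl --tasks mme --batch_size 1
```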