Open · ImagineL opened this issue 8 months ago
Hi @ImagineL ,
How are you serving your qwen_turbo model?
In general, our plan is to support any model, as long as it can be wrapped into an OpenAI-compatible endpoint. Several tools offer a good wrapper that exposes local models behind an OpenAI-compatible endpoint:
- FastChat
- vLLM
- llmstudio.ai
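For a remote API like qwen_turbo, the core of such a wrapper is just translating between the two request/response shapes. As a minimal sketch (not AutoGen's actual adapter — the DashScope-side field names here are assumptions based on the linked docs and should be verified against the official API reference):

```python
# Sketch of the translation layer an OpenAI-compatible wrapper around a
# DashScope-style service would need. Field names on the DashScope side
# (input.messages, parameters, output.choices) are assumptions from the
# linked documentation, not a verified implementation.

def openai_to_dashscope(request: dict) -> dict:
    """Map an OpenAI /v1/chat/completions request to a DashScope-style payload."""
    return {
        "model": request.get("model", "qwen-turbo"),
        "input": {"messages": request["messages"]},
        "parameters": {
            k: v
            for k, v in request.items()
            if k in ("temperature", "top_p", "max_tokens") and v is not None
        },
    }

def dashscope_to_openai(response: dict, model: str) -> dict:
    """Map a DashScope-style response back to the OpenAI chat-completion shape."""
    choice = response["output"]["choices"][0]
    return {
        "object": "chat.completion",
        "model": model,
        "choices": [
            {
                "index": 0,
                "message": choice["message"],
                "finish_reason": choice.get("finish_reason", "stop"),
            }
        ],
        "usage": response.get("usage", {}),
    }
```

A small HTTP server (e.g. FastAPI) exposing `/v1/chat/completions` and applying these two functions around a call to the DashScope endpoint would make qwen_turbo look like an OpenAI model to AutoGen.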
Thank you for your response. qwen_turbo is an online API service and does not require local model deployment. For more details, see the official documentation: https://help.aliyun.com/zh/dashscope/developer-reference/api-details?spm=a2c4g.11186623.0.i6#341800c0f8w0r.
@victordibia Hello VictorDibia, I'm not sure I expressed myself clearly. qwen_turbo is a large model that exposes a RESTful API server to users, and it also offers a Python SDK for that service, so users do not need to deploy the model locally on their machines. For more details, please refer to the official documentation: https://help.aliyun.com/zh/dashscope/developer-reference/api-details?spm=a2c4g.11186623.0.i6#341800c0f8w0r. I look forward to your reply. Thank you, VictorDibia.
@ImagineL you can either wrap qwen_turbo as an OpenAI-compatible model, or follow this guide to add a custom model client: https://microsoft.github.io/autogen/blog/2024/01/26/Custom-Models. Let me know if it works for you. Thanks.
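For reference, the custom-model route from the linked blog post amounts to implementing a small client class with `create`, `message_retrieval`, `cost`, and `get_usage`. Here is a hedged sketch: the class name `QwenTurboClient` and the `call_fn` hook are illustrative inventions (the real DashScope call would go where `call_fn` is invoked), so treat this as a skeleton under those assumptions, not a finished client:

```python
# Sketch of a custom model client for qwen_turbo, following the method
# protocol described in the linked AutoGen blog post
# (create / message_retrieval / cost / get_usage).
# The network call to DashScope is abstracted behind `call_fn` (a
# hypothetical hook) so this skeleton stays self-contained.
from types import SimpleNamespace

class QwenTurboClient:
    def __init__(self, config, call_fn=None):
        self.model = config.get("model", "qwen-turbo")
        # call_fn(messages) -> str; in real use, call the dashscope SDK here.
        self.call_fn = call_fn or (lambda messages: "")

    def create(self, params):
        # Produce a response object shaped like an OpenAI-style completion.
        text = self.call_fn(params["messages"])
        message = SimpleNamespace(role="assistant", content=text, function_call=None)
        choice = SimpleNamespace(message=message, finish_reason="stop")
        return SimpleNamespace(choices=[choice], model=self.model, cost=0.0)

    def message_retrieval(self, response):
        # Return the list of generated strings for AutoGen to consume.
        return [c.message.content for c in response.choices]

    def cost(self, response):
        return response.cost

    @staticmethod
    def get_usage(response):
        # Usage-tracking keys per the blog post; token counts are left at 0
        # here because this sketch does not parse the provider's usage data.
        return {"prompt_tokens": 0, "completion_tokens": 0,
                "total_tokens": 0, "cost": response.cost,
                "model": response.model}
```

Per the blog post, such a class is then registered on the agent (e.g. via `register_model_client`) so AutoGen routes completions through it.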
Hello, I am currently using the qwen_turbo large model, which provides its own online API service. However, I am unable to integrate this capability into AutoGen. Do you have plans to support qwen_turbo or similar online API services for large models?