microsoft / autogen

A programming framework for agentic AI 🤖
https://microsoft.github.io/autogen/

Autogen local llm---->Qwen #296

Closed: ScottXiao233 closed this issue 1 month ago

ScottXiao233 commented 11 months ago

I followed the article https://microsoft.github.io/autogen/blog/2023/07/14/Local-LLMs#interact-with-model-using-oaicompletion, using Qwen-7b as the base model. For the openai_api part, I used the original code from Qwen (https://github.com/QwenLM/Qwen/blob/main/openai_api.py). Here is my test code:

from autogen import oai

# create a text completion request
response = oai.Completion.create(
    config_list=[
        {
            "model": "qwen-7b",
            "api_base": "http://localhost:8000/v1",
            "api_type": "open_ai",
            "api_key": "NULL", # just a placeholder
        }
    ],
    prompt="who are you?",
)
print(response)

and here is the error output:

[autogen.oai.completion: 10-19 15:07:27] {234} INFO - retrying in 10 seconds...
Traceback (most recent call last):
  File "D:\Anaconda3\envs\autogen\Lib\site-packages\openai\api_requestor.py", line 413, in handle_error_response
    error_data = resp["error"]
                 ~~~~^^^^^^^^^
KeyError: 'error'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "e:\Code\source\autogen\autogen\autogen\oai\completion.py", line 220, in _get_response
    response = openai_completion.create(request_timeout=request_timeout, **config)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Anaconda3\envs\autogen\Lib\site-packages\openai\api_resources\completion.py", line 25, in create
    return super().create(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Anaconda3\envs\autogen\Lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 155, in create
    response, _, api_key = requestor.request(
                           ^^^^^^^^^^^^^^^^^^
  File "D:\Anaconda3\envs\autogen\Lib\site-packages\openai\api_requestor.py", line 299, in request
    resp, got_stream = self._interpret_response(result, stream)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Anaconda3\envs\autogen\Lib\site-packages\openai\api_requestor.py", line 710, in _interpret_response
    self._interpret_response_line(
  File "D:\Anaconda3\envs\autogen\Lib\site-packages\openai\api_requestor.py", line 775, in _interpret_response_line
    raise self.handle_error_response(
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Anaconda3\envs\autogen\Lib\site-packages\openai\api_requestor.py", line 415, in handle_error_response
    raise error.APIError(
openai.error.APIError: Invalid response object from API: '{"detail":"Not Found"}' (HTTP response code was 404)
Traceback (most recent call last):
  File "D:\Anaconda3\envs\autogen\Lib\site-packages\openai\api_requestor.py", line 413, in handle_error_response
    error_data = resp["error"]
                 ~~~~^^^^^^^^^
KeyError: 'error'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "e:\Code\source\autogen\autogen\autogen\oai\completion.py", line 220, in _get_response
    response = openai_completion.create(request_timeout=request_timeout, **config)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Anaconda3\envs\autogen\Lib\site-packages\openai\api_resources\completion.py", line 25, in create
    return super().create(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Anaconda3\envs\autogen\Lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 155, in create
    response, _, api_key = requestor.request(
                           ^^^^^^^^^^^^^^^^^^
  File "D:\Anaconda3\envs\autogen\Lib\site-packages\openai\api_requestor.py", line 299, in request
    resp, got_stream = self._interpret_response(result, stream)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Anaconda3\envs\autogen\Lib\site-packages\openai\api_requestor.py", line 710, in _interpret_response
    self._interpret_response_line(
  File "D:\Anaconda3\envs\autogen\Lib\site-packages\openai\api_requestor.py", line 775, in _interpret_response_line
    raise self.handle_error_response(
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\Anaconda3\envs\autogen\Lib\site-packages\openai\api_requestor.py", line 415, in handle_error_response
    raise error.APIError(
openai.error.APIError: Invalid response object from API: '{"detail":"Not Found"}' (HTTP response code was 404)

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "e:\Code\source\autogen\autogen\test_qwen.py", line 4, in <module>
    response = oai.Completion.create(
               ^^^^^^^^^^^^^^^^^^^^^^
  File "e:\Code\source\autogen\autogen\autogen\oai\completion.py", line 799, in create
    response = cls.create(
               ^^^^^^^^^^^
  File "e:\Code\source\autogen\autogen\autogen\oai\completion.py", line 830, in create
    return cls._get_response(params, raise_on_ratelimit_or_timeout=raise_on_ratelimit_or_timeout)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "e:\Code\source\autogen\autogen\autogen\oai\completion.py", line 235, in _get_response
    sleep(retry_wait_time)
KeyboardInterrupt

How can I solve this problem?

My environment is all on the latest versions; Python 3.11.4.
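
A quick way to narrow this down is to probe the server directly and see which routes it actually serves: the 404 body `{"detail":"Not Found"}` suggests the path being posted to simply doesn't exist on Qwen's openai_api.py server. Below is a minimal sketch with `requests`; the paths are just the usual OpenAI-style candidates, not something autogen itself requires:

```python
# Minimal probe of the local OpenAI-style server (a sketch; the paths are
# the usual OpenAI-style candidates and may not all exist on this server).
import requests

BASE = "http://localhost:8000/v1"
payload = {
    "model": "qwen-7b",
    "prompt": "who are you?",                                   # text completion
    "messages": [{"role": "user", "content": "who are you?"}],  # chat completion
}

print("/models", requests.get(BASE + "/models", timeout=5).status_code)
for path in ("/completions", "/chat/completions"):
    r = requests.post(BASE + path, json=payload, timeout=60)
    # 404 means the route is missing entirely; a 4xx validation error
    # means the route exists but the payload shape is wrong.
    print(path, r.status_code, r.text[:200])
```

A 404 on `/completions` together with a success (or a validation error) on `/chat/completions` would mean the server only implements the chat API, which would match the failure above.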

tommy3266 commented 11 months ago

On Linux with Python 3.10.12, I fixed it.

HuntZhaozq commented 11 months ago

> On Linux with Python 3.10.12, I fixed it.

I also hit this problem, but I cannot fix it. My environment is Python 3.10.13 on Linux.

HuntZhaozq commented 11 months ago

By the way, how is the performance of the local LLM? Does it run successfully?

ScottXiao233 commented 11 months ago

> By the way, how is the performance of the local LLM? Does it run successfully?

It seems like autogen didn't hit the right API endpoint. I also tried other endpoints such as http://localhost:8000/v1/chat/completion, but that didn't work either.
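
If the server only implements the chat API, note that the OpenAI-style route is `/v1/chat/completions` (with the plural), and the client-side change is to call the chat API rather than the text-completion API. A sketch under that assumption:

```python
from autogen import oai

# A sketch assuming the local server implements the chat route
# /v1/chat/completions (note the plural) but not /v1/completions.
response = oai.ChatCompletion.create(
    config_list=[
        {
            "model": "qwen-7b",
            "api_base": "http://localhost:8000/v1",
            "api_type": "open_ai",
            "api_key": "NULL",  # placeholder; the local server ignores it
        }
    ],
    messages=[{"role": "user", "content": "who are you?"}],
)
print(response)
```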

ScottXiao233 commented 11 months ago

Did you follow Microsoft's tutorial? The tutorial uses fastchat to serve the openai_api; to save effort, I directly used the openai_api.py from the official Qwen repo and then followed the official oai example, and this is what happened. Could my shortcut be the cause of this error? QAQ Or is it just a Linux vs. Windows issue?

HuntZhaozq commented 11 months ago

Yes, I also used Qwen's generated API; that may be the cause. By the way, have you gotten multi-agents running with a local model yet?

ScottXiao233 commented 11 months ago

No, it's giving me a headache QAQ

HuntZhaozq commented 11 months ago

Same here, sigh. So no luck with other multi-agent projects either?

ScottXiao233 commented 11 months ago

No, I've only started working with this stuff recently.

HuntZhaozq commented 11 months ago

Same here. I just tested Qwen deployed via fastchat and it works! The results just aren't great... See how it performs on your end.

13331112522 commented 11 months ago

Has anyone tried running autogen by calling qwen-turbo or qwen-plus directly, without a local model?

taylor-ennen commented 11 months ago

I'm using LM Studio to serve the inference server with a similar config:

config = [
    {
        "api_type": "open_ai",
        "api_base": "http://localhost:1234/v1",
        "api_key": "NULL"
    }
]

And I'm getting a similar error at the end of my traceback:

  File "C:\Python311\Lib\site-packages\openai\api_requestor.py", line 346, in handle_error_response
    error_code=error_data.get("code"),
               ^^^^^^^^^^^^^^

Although this error is raised inside the OpenAI library, there seems to be a common problem with pointing the open_ai API type at a mock OpenAI server. I have not found a workaround, but I thought this context was worth adding to the problem raised in this issue.
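
For context on why the failure surfaces as `KeyError: 'error'`: the legacy `openai<1.0` client assumes error bodies have OpenAI's shape, so a server that returns any other JSON shape trips the key lookup before the real HTTP error is reported. A condensed illustration (simplified from the tracebacks above, not the library's full code):

```python
# Condensed illustration of the failure mode seen in
# openai/api_requestor.py's handle_error_response.
resp = {"detail": "Not Found"}   # what the local server actually returned
try:
    error_data = resp["error"]   # OpenAI-shaped bodies carry an "error" key
except KeyError:
    # the client then wraps this in:
    #   openai.error.APIError: Invalid response object from API: ...
    print("KeyError: 'error', as in the tracebacks above")
```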

lishiming9 commented 11 months ago

I have the same problem using a local LLM.

env: Python 3.11.4, macOS 13.5

[autogen.oai.completion: 10-20 17:02:55] {223} INFO - retrying in 10 seconds...
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/openai/api_requestor.py", line 403, in handle_error_response
    error_data = resp["error"]


KeyError: 'error'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/autogen/oai/completion.py", line 207, in _get_response
    response = openai_completion.create(**config)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/openai/api_resources/chat_completion.py", line 25, in create
    return super().create(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
    response, _, api_key = requestor.request(
                           ^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/openai/api_requestor.py", line 298, in request
...
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/openai/api_requestor.py", line 405, in handle_error_response
    raise error.APIError(
openai.error.APIError: Invalid response object from API: '{"detail":"Invalid request"}' (HTTP response code was 400)

robzsaunders commented 11 months ago

> I'm using LM Studio to serve the inference server with a similar config... And I'm getting a similar error at the end of my traceback.

This looks like the same problem from over here: #279

GXKIM commented 11 months ago

Launch it with the model worker in fastchat and the local model works fine.
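
For anyone trying to reproduce that route: the blog post linked in the first comment starts `fastchat.serve.controller`, `fastchat.serve.model_worker`, and `fastchat.serve.openai_api_server`, then points autogen at the resulting endpoint. A minimal two-agent sketch under those assumptions (the model name is a placeholder and must match whatever the model worker was started with):

```python
from autogen import AssistantAgent, UserProxyAgent

# Assumes fastchat's openai_api_server is listening on localhost:8000 and
# the model worker was launched with a Qwen chat model of this name.
config_list = [
    {
        "model": "Qwen-7B-Chat",
        "api_base": "http://localhost:8000/v1",
        "api_type": "open_ai",
        "api_key": "NULL",  # placeholder for local serving
    }
]

assistant = AssistantAgent("assistant", llm_config={"config_list": config_list})
user_proxy = UserProxyAgent(
    "user_proxy",
    human_input_mode="NEVER",
    code_execution_config={"work_dir": "coding"},
)
user_proxy.initiate_chat(assistant, message="Say hi and then reply TERMINATE.")
```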

HuntZhaozq commented 11 months ago

> Launch it with the model worker in fastchat and the local model works fine.

Friend, how are the results on your end?

GXKIM commented 11 months ago

Neither baichuan2-13b nor qwen-14b works very well.

neocao123 commented 11 months ago

I tried qwen-14b-4bit here and it doesn't seem good enough.

Also, does anyone have recommendations for a local LLM with decent results? English ones are fine too.

ImagineL commented 11 months ago

> Has anyone tried running autogen by calling qwen-turbo or qwen-plus directly, without a local model?

I've tried. It didn't work.

GXKIM commented 11 months ago

I've tried it; it works fine, but the results are poor.

ImagineL commented 11 months ago

> I've tried it; it works fine, but the results are poor.

Are you deploying the large model locally, or using the DashScope (灵积) API?

GXKIM commented 11 months ago

I've tried both.

ImagineL commented 11 months ago

@GXKIM Same here, it failed. Looking at the source code, the binding to openai is extremely tight, so it's hard to support Tongyi Qianwen (Qwen) directly.

bigjonyz commented 10 months ago

I can now run autogen with dolphin-2.1-mistral, zephyr-beta, and code-llama through LM Studio, but the replies are not very stable, and the agents don't seem to execute according to the assigned script. I used to hit all kinds of errors; after the LM Studio upgrade, the ChatML server preset seems more stable, though I haven't tested it in detail yet.

wangjeaf commented 4 months ago

> @GXKIM Same here, it failed. Looking at the source code, the binding to openai is extremely tight, so it's hard to support Tongyi Qianwen (Qwen) directly.

Friend, did you ever get it working? On my side, qwen-max also fails with GroupChat.

thinkall commented 3 months ago

> Friend, did you ever get it working? On my side, qwen-max also fails with GroupChat.

You can use a tool to wrap Qwen's interface as an OpenAI-API-compatible endpoint; then it will work.
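
As a rough illustration of that wrapping idea (the `generate()` function below is a hypothetical stand-in for whatever actually invokes Qwen; tools like FastChat or LiteLLM do this properly), an OpenAI-compatible shim is just an HTTP server that exposes `/v1/chat/completions` with OpenAI's request and response shapes:

```python
# Hypothetical sketch of wrapping a local model behind an OpenAI-compatible
# /v1/chat/completions route. generate() is a placeholder, not a real API.
import time
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class ChatRequest(BaseModel):
    model: str
    messages: list[dict]

def generate(messages: list[dict]) -> str:
    # placeholder for the real Qwen inference call
    return "hello from the local model"

@app.post("/v1/chat/completions")
def chat_completions(req: ChatRequest):
    # return a response shaped like OpenAI's chat completion object
    return {
        "id": "chatcmpl-local",
        "object": "chat.completion",
        "created": int(time.time()),
        "model": req.model,
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": generate(req.messages)},
            "finish_reason": "stop",
        }],
        "usage": {"prompt_tokens": 0, "completion_tokens": 0, "total_tokens": 0},
    }
```

Run it with `uvicorn wrapper:app --port 8000` (assuming the file is saved as `wrapper.py`) and point `api_base` at `http://localhost:8000/v1`.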

LIUTAOGE commented 3 months ago

The Alibaba DashScope (灵积) API is also OpenAI-compatible, so no wrapper tool is needed; see https://help.aliyun.com/zh/dashscope/developer-reference/compatibility-of-openai-with-dashscope/.
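
A sketch of that compatible mode (the base URL and model name should be double-checked against the linked docs, and it uses the `openai>=1.0` client rather than the older one seen elsewhere in this thread):

```python
# Sketch using DashScope's OpenAI-compatible mode; verify base_url and
# model name against the linked Aliyun documentation.
import os
from openai import OpenAI  # openai>=1.0 style client

client = OpenAI(
    api_key=os.environ["DASHSCOPE_API_KEY"],
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1",
)
resp = client.chat.completions.create(
    model="qwen-turbo",
    messages=[{"role": "user", "content": "who are you?"}],
)
print(resp.choices[0].message.content)
```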