QwenLM / Qwen

The official repo of Qwen (通义千问) chat & pretrained large language model proposed by Alibaba Cloud.
Apache License 2.0

[BUG] Langchain Function Call Error #893

Closed Tejaswgupta closed 8 months ago

Tejaswgupta commented 8 months ago

Is there an existing issue / discussion for this?

Is there an existing answer for this in the FAQ?

Current Behavior

Code:

from langchain.chat_models import ChatOpenAI
from langchain.agents import load_tools, initialize_agent, AgentType

llm = ChatOpenAI(
    model_name="Qwen",
    openai_api_base='http://20.124.240.6:8083/v1',
    openai_api_key="EMPTY",
    streaming=False,
)
tools = load_tools(
    ["arxiv"],
)
agent_chain = initialize_agent(
    tools,
    llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)
# TODO: The performance is okay with Chinese prompts, but not so good when it comes to English.
agent_chain.run("查一下论文 1605.08386 的信息")

Output:

> Entering new AgentExecutor chain...
Traceback (most recent call last):
  File "/Users/tejasw/Downloads/RAG_with_langchain/oo.py", line 20, in <module>
    agent_chain.run("查一下论文 1605.08386 的信息")
  File "/Users/tejasw/.pyenv/versions/3.11.2/lib/python3.11/site-packages/langchain/chains/base.py", line 507, in run
    return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/tejasw/.pyenv/versions/3.11.2/lib/python3.11/site-packages/langchain/chains/base.py", line 312, in __call__
    raise e
  File "/Users/tejasw/.pyenv/versions/3.11.2/lib/python3.11/site-packages/langchain/chains/base.py", line 306, in __call__
    self._call(inputs, run_manager=run_manager)
  File "/Users/tejasw/.pyenv/versions/3.11.2/lib/python3.11/site-packages/langchain/agents/agent.py", line 1312, in _call
    next_step_output = self._take_next_step(
                       ^^^^^^^^^^^^^^^^^^^^^
  File "/Users/tejasw/.pyenv/versions/3.11.2/lib/python3.11/site-packages/langchain/agents/agent.py", line 1038, in _take_next_step
    [
  File "/Users/tejasw/.pyenv/versions/3.11.2/lib/python3.11/site-packages/langchain/agents/agent.py", line 1038, in <listcomp>
    [
  File "/Users/tejasw/.pyenv/versions/3.11.2/lib/python3.11/site-packages/langchain/agents/agent.py", line 1066, in _iter_next_step
    output = self.agent.plan(
             ^^^^^^^^^^^^^^^^
  File "/Users/tejasw/.pyenv/versions/3.11.2/lib/python3.11/site-packages/langchain/agents/agent.py", line 635, in plan
    full_output = self.llm_chain.predict(callbacks=callbacks, **full_inputs)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/tejasw/.pyenv/versions/3.11.2/lib/python3.11/site-packages/langchain/chains/llm.py", line 293, in predict
    return self(kwargs, callbacks=callbacks)[self.output_key]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/tejasw/.pyenv/versions/3.11.2/lib/python3.11/site-packages/langchain/chains/base.py", line 312, in __call__
    raise e
  File "/Users/tejasw/.pyenv/versions/3.11.2/lib/python3.11/site-packages/langchain/chains/base.py", line 306, in __call__
    self._call(inputs, run_manager=run_manager)
  File "/Users/tejasw/.pyenv/versions/3.11.2/lib/python3.11/site-packages/langchain/chains/llm.py", line 103, in _call
    response = self.generate([inputs], run_manager=run_manager)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/tejasw/.pyenv/versions/3.11.2/lib/python3.11/site-packages/langchain/chains/llm.py", line 115, in generate
    return self.llm.generate_prompt(
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/tejasw/.pyenv/versions/3.11.2/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 496, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/tejasw/.pyenv/versions/3.11.2/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 388, in generate
    llm_output = self._combine_llm_outputs([res.llm_output for res in results])
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/tejasw/.pyenv/versions/3.11.2/lib/python3.11/site-packages/langchain_community/chat_models/openai.py", line 370, in _combine_llm_outputs
    for k, v in token_usage.items():
                ^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'items'
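
The failure is in LangChain's token-usage bookkeeping: when the served endpoint's response omits the `usage` field, `llm_output["token_usage"]` is `None`, and `_combine_llm_outputs` crashes calling `.items()` on it. A minimal standalone sketch of a None-tolerant merge (the function name is illustrative; it mirrors the upstream logic in `langchain_community/chat_models/openai.py`, where a real fix would live):

```python
def combine_llm_outputs(llm_outputs):
    """Merge token-usage dicts from several generations, tolerating None."""
    overall = {}
    for output in llm_outputs:
        if output is None:
            continue
        token_usage = output.get("token_usage")
        if not token_usage:  # server sent no usage stats -> skip, don't crash
            continue
        for k, v in token_usage.items():
            overall[k] = overall.get(k, 0) + v
    return {"token_usage": overall}
```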

期望行为 | Expected Behavior

The agent should call the tool and return its output.

Steps To Reproduce

Environment

- OS: M1 , OSX 14.2
- Python: 3.11.2
- Transformers: 4.36.2
- PyTorch: 2.1.0 
- CUDA (`python -c 'import torch; print(torch.version.cuda)'`): NA
- Langchain: 0.0.353

Anything else?

NA

jklj077 commented 8 months ago

Reproduction information is missing: where does the API come from? This repo's openai_api.py, vLLM's OpenAI API, or FastChat's OpenAI API?

If you are using this repo's openai_api.py, openai-python needs to be downgraded below 1.0; please see the README. Otherwise, function call is not supported; please refer to examples/react_demo.py to construct the template, or see https://github.com/QwenLM/Qwen/issues/726.
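
For reference, the approach in examples/react_demo.py boils down to rendering the tool descriptions into a ReAct-style prompt yourself instead of relying on OpenAI function-call fields. A rough standalone sketch (the template and field names below are illustrative, not the exact ones in the repo):

```python
# Illustrative tool description line: each tool is a dict with
# "name", "description", and "parameters" keys (assumed schema).
TOOL_DESC = "{name}: {description} Parameters: {parameters}"

PROMPT_REACT = """Answer the following question as best you can. You have access to the following tools:

{tool_descs}

Use the following format:

Question: the input question
Thought: reasoning about what to do next
Action: the tool to use, one of [{tool_names}]
Action Input: the input to the tool
Observation: the result of the tool
... (Thought/Action/Action Input/Observation can repeat)
Thought: I now know the final answer
Final Answer: the answer to the question

Question: {query}"""

def build_prompt(query, tools):
    """Render a ReAct prompt embedding the tool list for the model."""
    tool_descs = "\n".join(TOOL_DESC.format(**t) for t in tools)
    tool_names = ", ".join(t["name"] for t in tools)
    return PROMPT_REACT.format(tool_descs=tool_descs,
                               tool_names=tool_names,
                               query=query)
```

The model's completion is then parsed for `Action:` / `Action Input:` lines, the tool is executed, and its result is appended after `Observation:` before calling the model again.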

Tejaswgupta commented 8 months ago

@jklj077 the API comes from openai_api.py of this repo. I tried using it with vLLM, but the API keeps processing the request indefinitely. Are there any plans to support openai>1.0?

jklj077 commented 8 months ago

@Tejaswgupta There is a PR for this issue, but we haven't had time to review it yet. You can give it a go.