QwenLM / Qwen-Agent

Agent framework and applications built upon Qwen2, featuring Function Calling, Code Interpreter, RAG, and Chrome extension.
https://pypi.org/project/qwen-agent/

function_calling does not invoke the tool #215

Open Mosquito0352 opened 1 week ago

Mosquito0352 commented 1 week ago

[{'role': 'assistant', 'content': "I'm sorry, but as an AI language model, I don't have real-time access to current weather information. However, you can easily check the weather in San Francisco by searching online for a reliable weather website or app, such as Weather.gov or AccuWeather, or by asking a virtual assistant like Siri, Google Assistant, or Alexa for the current conditions. They should be able to give you an up-to-date report."}]

JianxinMa commented 1 week ago

Please provide the code required to reproduce this case. Additionally, are you using Qwen-Agent, Langchain, or the OpenAI API directly?

Mosquito0352 commented 1 week ago

Reference: https://platform.openai.com/docs/guides/function-calling

import json
import os

from qwen_agent.llm import get_chat_model


# Example dummy function hard coded to return the same weather
# In production, this could be your backend API or an external API
def get_current_weather(location, unit='fahrenheit'):
    """Get the current weather in a given location"""
    if 'tokyo' in location.lower():
        return json.dumps({'location': 'Tokyo', 'temperature': '10', 'unit': 'celsius'})
    elif 'san francisco' in location.lower():
        return json.dumps({'location': 'San Francisco', 'temperature': '72', 'unit': 'fahrenheit'})
    elif 'paris' in location.lower():
        return json.dumps({'location': 'Paris', 'temperature': '22', 'unit': 'celsius'})
    else:
        return json.dumps({'location': location, 'temperature': 'unknown'})


def test1():
    llm = get_chat_model({
        # Use a local Ollama service via its OpenAI-compatible endpoint:
        'model': 'qwen1_5_7b',
        'model_server': 'http://127.0.0.1:11434/v1',
        'api_key': 'EMPTY',

        # Use the model service provided by Together.AI:
        # 'model': 'Qwen/Qwen1.5-14B-Chat',
        # 'model_server': 'https://api.together.xyz',  # api_base
        # 'api_key': os.getenv('TOGETHER_API_KEY'),

        # Use your own model service compatible with OpenAI API:
        # 'model': 'Qwen/Qwen1.5-72B-Chat',
        # 'model_server': 'http://localhost:8000/v1',  # api_base
        # 'api_key': 'EMPTY',
    })

    # Step 1: send the conversation and available functions to the model
    messages = [{'role': 'user', 'content': "What's the weather like in San Francisco?"}]
    functions = [{
        'name': 'get_current_weather',
        'description': 'Get the current weather in a given location',
        'parameters': {
            'type': 'object',
            'properties': {
                'location': {
                    'type': 'string',
                    'description': 'The city and state, e.g. San Francisco, CA',
                },
                'unit': {
                    'type': 'string',
                    'enum': ['celsius', 'fahrenheit']
                },
            },
            'required': ['location'],
        },
    }]

    print('# Assistant Response 1:')
    responses = []
    for responses in llm.chat(messages=messages, functions=functions, stream=True):
        print(responses)

    messages.extend(responses)  # extend conversation with assistant's reply

    # Step 2: check if the model wanted to call a function
    last_response = messages[-1]
    if last_response.get('function_call', None):

        # Step 3: call the function
        # Note: the JSON response may not always be valid; be sure to handle errors
        available_functions = {
            'get_current_weather': get_current_weather,
        }  # only one function in this example, but you can have multiple
        function_name = last_response['function_call']['name']
        function_to_call = available_functions[function_name]
        function_args = json.loads(last_response['function_call']['arguments'])
        function_response = function_to_call(
            location=function_args.get('location'),
            unit=function_args.get('unit'),
        )
        print('# Function Response:')
        print(function_response)

        # Step 4: send the info for each function call and function response to the model
        messages.append({
            'role': 'function',
            'name': function_name,
            'content': function_response,
        })  # extend conversation with function response

        print('# Assistant Response 2:')
        for responses in llm.chat(
                messages=messages,
                functions=functions,
                stream=True,
        ):  # get a new response from the model where it can see the function response
            print(responses)


if __name__ == '__main__':
    test1()

JianxinMa commented 1 week ago

Using the qwen1.5-7b-chat model provided by DashScope,

        'model': 'qwen1.5-7b-chat',
        'model_server': 'https://dashscope.aliyuncs.com/compatible-mode/v1',
        'api_key': os.getenv('DASHSCOPE_API_KEY'),

it works fine.

I suspect that the model you are using with Ollama might be problematic. Are you using the official checkpoint from Ollama, or is it a custom one that you have created?
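One quick way to double-check is to list the tags the local Ollama server actually exposes and compare them with the 'model' value passed to get_chat_model. A minimal sketch, assuming Ollama is running on its default port and using its native /api/tags endpoint:

    import requests

    # List the model tags known to the local Ollama server (default port 11434).
    resp = requests.get('http://127.0.0.1:11434/api/tags')
    resp.raise_for_status()
    print([m['name'] for m in resp.json().get('models', [])])
    # The 'model' value in the get_chat_model config should match one of these tags.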

Mosquito0352 commented 1 week ago

The model was created locally with Ollama. If I want to deploy it locally, where should I download the model from?

JianxinMa commented 1 week ago

The model was created locally with Ollama. If I want to deploy it locally, where should I download the model from?

See the Qwen2 README: https://github.com/QwenLM/Qwen2?tab=readme-ov-file#ollama

Run ollama serve, then ollama run qwen:7b (for Qwen1.5) or ollama run qwen2:7b (for Qwen2); this will automatically download the models that Ollama provides.

I am currently checking whether the model on Ollama works correctly (but my network connection is poor, so the download has not succeeded yet).
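Once the official checkpoint has been pulled, the earlier test script should be able to point at it directly. A minimal config sketch, assuming Ollama's default OpenAI-compatible endpoint and the qwen:7b tag mentioned above:

    from qwen_agent.llm import get_chat_model

    # Sketch: same setup as test1() above, but using the official Ollama tag
    # (assumes `ollama run qwen:7b` has already downloaded the model).
    llm = get_chat_model({
        'model': 'qwen:7b',
        'model_server': 'http://127.0.0.1:11434/v1',  # Ollama's OpenAI-compatible endpoint
        'api_key': 'EMPTY',
    })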

Mosquito0352 commented 1 week ago

Thanks for the reply. I will test it on my side.

deku0818 commented 6 days ago

Please provide the code required to reproduce this case. Additionally, are you using Qwen-Agent, Langchain, or the OpenAI API directly?

Does Qwen2 support the same tool-calling approach as the OpenAI API?
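For reference, when a Qwen2 model is served behind an OpenAI-compatible endpoint, a request in the OpenAI tools format would look roughly like the sketch below. This is an assumption to verify rather than confirmed behavior: whether the tools field is actually honored depends on the serving stack (Ollama, vLLM, etc.), and the base_url and model tag here are illustrative placeholders.

    from openai import OpenAI

    # Hypothetical sketch: OpenAI-style tool calling against an OpenAI-compatible
    # server hosting a Qwen2 model. The base_url and model tag are placeholders.
    client = OpenAI(base_url='http://127.0.0.1:11434/v1', api_key='EMPTY')

    tools = [{
        'type': 'function',
        'function': {
            'name': 'get_current_weather',
            'description': 'Get the current weather in a given location',
            'parameters': {
                'type': 'object',
                'properties': {
                    'location': {'type': 'string', 'description': 'The city and state, e.g. San Francisco, CA'},
                    'unit': {'type': 'string', 'enum': ['celsius', 'fahrenheit']},
                },
                'required': ['location'],
            },
        },
    }]

    completion = client.chat.completions.create(
        model='qwen2:7b',  # placeholder tag; use whatever the server exposes
        messages=[{'role': 'user', 'content': "What's the weather like in San Francisco?"}],
        tools=tools,
    )
    print(completion.choices[0].message.tool_calls)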