Closed jqlts1 closed 1 month ago
Please share your request parameters. From the error message, the problem seems to occur while parsing the image: the image content is never written.
I'm not sure exactly how the request is made; I'm just using Dify's tools and the Lobe Chat plugin. This error appears with any Gemini model, but plain chat (no tools) works fine.
I followed this example: https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models
When I run it, the function call never happens.
import json
from openai import OpenAI
from tenacity import retry, wait_random_exponential, stop_after_attempt

GPT_MODEL = "gemini-1.5-pro-latest"

client = OpenAI(
    api_key="sk-xxxxxxxx",
    base_url="<your API base URL>",
)

@retry(wait=wait_random_exponential(multiplier=1, max=40), stop=stop_after_attempt(3))
def chat_completion_request(messages, tools=None, tool_choice=None, model=GPT_MODEL):
    try:
        response = client.chat.completions.create(
            model=model,
            messages=messages,
            tools=tools,
            tool_choice=tool_choice,
        )
        return response
    except Exception as e:
        print("Unable to generate ChatCompletion response")
        print(f"Exception: {e}")
        return e
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                    "format": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                        "description": "The temperature unit to use. Infer this from the users location.",
                    },
                },
                "required": ["location", "format"],
            },
        },
    },
    {
        "type": "function",
        "function": {
            "name": "get_n_day_weather_forecast",
            "description": "Get an N-day weather forecast",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                    "format": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                        "description": "The temperature unit to use. Infer this from the users location.",
                    },
                    "num_days": {
                        "type": "integer",
                        "description": "The number of days to forecast",
                    },
                },
                "required": ["location", "format", "num_days"],
            },
        },
    },
]
## Test cases
### Test case 1
messages = []
messages.append(
    {
        "role": "system",
        "content": "Don't make assumptions about what values to plug into functions. Ask for clarification if a user request is ambiguous.",
    }
)
messages.append({"role": "user", "content": "What's the weather like today"})
chat_response = chat_completion_request(messages, tools=tools)
assistant_message = chat_response.choices[0].message
messages.append(assistant_message)
assistant_message
### Test case 2
messages.append({"role": "user", "content": "I'm in Glasgow, Scotland."})
chat_response = chat_completion_request(messages, tools=tools)
assistant_message = chat_response.choices[0].message
messages.append(assistant_message)
assistant_message
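For reference, a quick way to tell whether the model actually returned a tool call is to look at `tool_calls` on the assistant message. The sketch below works on the raw JSON response shape for illustration (the `extract_tool_calls` helper and the `call_abc123` id are hypothetical, not part of the cookbook example):

```python
# Hypothetical helper: pull the tool calls (if any) out of a
# chat-completion-style response dict.
def extract_tool_calls(response: dict) -> list:
    message = response["choices"][0]["message"]
    return message.get("tool_calls") or []

# Illustrative response shape for a successful function call.
response = {
    "choices": [{
        "message": {
            "role": "assistant",
            "content": None,
            "tool_calls": [{
                "id": "call_abc123",
                "type": "function",
                "function": {
                    "name": "get_current_weather",
                    "arguments": '{"location": "Glasgow, Scotland", "format": "celsius"}',
                },
            }],
        }
    }]
}

calls = extract_tool_calls(response)
print(calls[0]["function"]["name"])  # get_current_weather
```

If `tool_calls` is empty or missing, the model answered with plain text instead of calling a tool, which is the symptom reported here.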
By the way, the same request also fails on minimax (at test case 2); minimax passes test case 1.
Fixed. Once the dev build finishes, please try again. The conversion code previously checked for role == function, but that role is deprecated; it should check for role == tool.
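For context, the change described above corresponds to the two message shapes in the OpenAI chat API: the deprecated `function` role versus the current `tool` role, which references the assistant's tool call by id. A sketch with illustrative values (the `call_abc123` id is made up):

```python
# Deprecated shape: function results were sent back with role "function"
# and identified by function name.
legacy_result = {
    "role": "function",
    "name": "get_current_weather",
    "content": '{"temperature": 12, "unit": "celsius"}',
}

# Current shape: results use role "tool" and reference the id from the
# assistant message's tool_calls entry instead of (or in addition to) a name.
current_result = {
    "role": "tool",
    "tool_call_id": "call_abc123",  # illustrative id
    "content": '{"temperature": 12, "unit": "celsius"}',
}

print(legacy_result["role"], "->", current_result["role"])
```

A gateway converting between OpenAI-style requests and upstream providers has to recognize both shapes, which is why matching only on `role == "function"` broke newer clients.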
How do I test this? The Docker image doesn't seem to have been updated.
docker pull ghcr.io/martialbe/one-api:dev
The dev tag is built from the latest code on the current main branch.
A new problem seems to have appeared. The tool call itself now works, but here is the new error:
>>>>>>>> USING AUTO REPLY...
Traceback (most recent call last):
File "/Users/zhangte/Documents/iChainPi/code/0_inbox/crew_ai_test/test1.py", line 58, in <module>
chat_result = user_proxy.initiate_chat(
File "/Users/zhangte/miniconda3/envs/crewai/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py", line 1000, in initiate_chat
self.send(msg2send, recipient, request_reply=True, silent=silent)
File "/Users/zhangte/miniconda3/envs/crewai/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py", line 645, in send
recipient.receive(message, self, request_reply, silent)
File "/Users/zhangte/miniconda3/envs/crewai/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py", line 808, in receive
reply = self.generate_reply(messages=self.chat_messages[sender], sender=sender)
File "/Users/zhangte/miniconda3/envs/crewai/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py", line 1949, in generate_reply
final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"])
File "/Users/zhangte/miniconda3/envs/crewai/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py", line 1315, in generate_oai_reply
extracted_response = self._generate_oai_reply_from_client(
File "/Users/zhangte/miniconda3/envs/crewai/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py", line 1334, in _generate_oai_reply_from_client
response = llm_client.create(
File "/Users/zhangte/miniconda3/envs/crewai/lib/python3.10/site-packages/autogen/oai/client.py", line 638, in create
response = client.create(params)
File "/Users/zhangte/miniconda3/envs/crewai/lib/python3.10/site-packages/autogen/oai/client.py", line 285, in create
response = completions.create(**params)
File "/Users/zhangte/miniconda3/envs/crewai/lib/python3.10/site-packages/openai/_utils/_utils.py", line 277, in wrapper
return func(*args, **kwargs)
File "/Users/zhangte/miniconda3/envs/crewai/lib/python3.10/site-packages/openai/resources/chat/completions.py", line 590, in create
return self._post(
File "/Users/zhangte/miniconda3/envs/crewai/lib/python3.10/site-packages/openai/_base_client.py", line 1240, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
File "/Users/zhangte/miniconda3/envs/crewai/lib/python3.10/site-packages/openai/_base_client.py", line 921, in request
return self._request(
File "/Users/zhangte/miniconda3/envs/crewai/lib/python3.10/site-packages/openai/_base_client.py", line 1005, in _request
return self._retry_request(
File "/Users/zhangte/miniconda3/envs/crewai/lib/python3.10/site-packages/openai/_base_client.py", line 1053, in _retry_request
return self._request(
File "/Users/zhangte/miniconda3/envs/crewai/lib/python3.10/site-packages/openai/_base_client.py", line 1005, in _request
return self._retry_request(
File "/Users/zhangte/miniconda3/envs/crewai/lib/python3.10/site-packages/openai/_base_client.py", line 1053, in _retry_request
return self._request(
File "/Users/zhangte/miniconda3/envs/crewai/lib/python3.10/site-packages/openai/_base_client.py", line 1020, in _request
raise self._make_status_error_from_response(err.response) from None
openai.InternalServerError: Error code: 500 - {'error': {'message': 'Panic detected, error: runtime error: invalid memory address or nil pointer dereference. Please submit a issue here: https://github.com/MartialBE/one-api', 'type': 'one_api_panic'}}
I'm using autogen. I also tested the OpenAI cookbook code above, and it likewise returns a reply without recognizing the tool. Here is the autogen code:
import datetime
from autogen import ConversableAgent, register_function

def get_datetime() -> str:
    """Return the current time as a string."""
    return str(datetime.datetime.now())

api_key = "sk-xxxx"
base_url = "https://api_base"

# First, define the assistant agent that suggests tool calls.
assistant = ConversableAgent(
    name="Assistant",
    system_message="You are a helpful AI assistant. "
    "Use a tool when you hit a problem you cannot solve yourself. "
    "Return 'TERMINATE' when the task is done.",
    llm_config={
        "config_list": [
            {
                "model": "gemini-1.5-flash-latest",
                "api_key": api_key,
                "base_url": base_url,
            }
        ]
    },
)

# Define the user proxy agent
user_proxy = ConversableAgent(
    name="User",
    llm_config=False,
    is_termination_msg=lambda msg: msg.get("content") is not None
    and "TERMINATE" in msg["content"],
    human_input_mode="NEVER",
)

# Register the tool
register_function(
    get_datetime,
    caller=assistant,  # The assistant agent can suggest calls to the tool.
    executor=user_proxy,  # The user proxy agent can execute the tool calls.
    name="get_datetime",  # By default, the function name is used as the tool name.
    description="A tool for looking up the current time.",  # A description of the tool.
)

# Ask a question
chat_result = user_proxy.initiate_chat(
    assistant, message="What is today's date?", max_turns=2
)
print(chat_result)
The same problem also seems to occur when calling through dify.
By the way, the problem above does not occur with minimax, so it looks like Gemini still has an issue.
Try again. Gemini requires the function name to be sent along with the result, but autogen's requests do not carry the function name. dify uses `tool` in the request but `function call` when returning the function content. Both cases are now handled.
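A minimal sketch of the workaround described above (a hypothetical helper, not one-api's actual code): when a `tool` message arrives without a `name`, recover it from the matching entry in the preceding assistant message's `tool_calls`, since Gemini rejects function responses that omit the function name:

```python
def fill_tool_names(messages):
    """Hypothetical helper: copy the function name recorded in the
    assistant's tool_calls into any tool-result message that lacks one."""
    id_to_name = {}
    fixed = []
    for msg in messages:
        if msg.get("role") == "assistant":
            # Remember which tool_call id maps to which function name.
            for call in msg.get("tool_calls") or []:
                id_to_name[call["id"]] = call["function"]["name"]
        if msg.get("role") == "tool" and "name" not in msg:
            # Fill the missing name from the assistant's earlier tool_calls.
            msg = {**msg, "name": id_to_name.get(msg.get("tool_call_id"), "")}
        fixed.append(msg)
    return fixed

history = [
    {"role": "assistant", "tool_calls": [
        {"id": "call_1", "type": "function",
         "function": {"name": "get_datetime", "arguments": "{}"}}]},
    {"role": "tool", "tool_call_id": "call_1", "content": "2024-05-17 10:00:00"},
]
print(fill_tool_names(history)[1]["name"])  # get_datetime
```

This kind of lookup is only possible when the assistant's tool_calls are present in the forwarded history, which is why a client that strips them (as autogen apparently did here) breaks the Gemini conversion.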
You can try it now. I tested gemini, baidu, minimax, zhipu, and xunfei with both your code and dify.
baidu and xunfei are as weak as ever... especially xunfei, which seems unable to understand the English function descriptions and calls tools more or less at random.
Routine checks
Problem description: after a Gemini function call returns, an error occurs; the response format doesn't seem to be aligned with the interface?
Related screenshot: ![961715926896_ pic](https://github.com/MartialBE/one-api/assets/17434370/78a75ef8-bf94-458d-b981-9078002ce085)