Closed: mikulskibartosz closed this issue 11 months ago
Have you verified that you've received access to the new API yet? I ran this example earlier and it worked as expected for me.
I'm experiencing the same issue with OpenAI 1.1.1 and Python 3.10.12.
A workaround is to change the `None` values in the `response_message` to acceptable ones:

```python
if response_message.content is None:
    response_message.content = ""
if response_message.function_call is None:
    del response_message.function_call
```

Add this after the line `response_message = response.choices[0].message` (line 44).
EDIT: The proper fix is to upgrade pydantic with `pip install --upgrade pydantic`.
@RobertCraigie, I think it's unrelated to having access to the new API or the model, because it fails during the second call (`second_response = openai.chat.completions.create(`). The first call worked fine, and I received the function name with its arguments.
However, if it is an API access issue, the error message is misleading.
@unconv `del response_message.function_call` removes the function call with its arguments from the conversation, so it won't be available to the AI model while it generates subsequent messages. It may affect the results, because the model can access the function's response (which you provide) but no longer knows the query (it's removed from the chat).
@mikulskibartosz It won't affect the results, since it is deleted only if it is `None` (see the `if` statement it's wrapped in). The problem is that the library is sending `content` and `function_call` as `None`, but the API doesn't allow that. Setting `content` to an empty string when it is `None` and removing `function_call` altogether when it is `None` solves the issue.
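For what it's worth, those two checks can be folded into one small helper. This is only a sketch of the workaround described in this thread; `sanitize_assistant_message` is a hypothetical name, not part of the SDK:

```python
def sanitize_assistant_message(message) -> dict:
    """Make an assistant message safe to send back to the API."""
    msg = dict(message)  # ChatCompletionMessage (pydantic model) or plain dict -> mutable dict
    if msg.get("content") is None:
        msg["content"] = ""  # the API requires 'content' to be a string
    if msg.get("function_call") is None:
        msg.pop("function_call", None)  # drop the null field entirely
    return msg
```

You would then append `sanitize_assistant_message(response.choices[0].message)` to `messages` instead of the raw message.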
EDIT: There is no need to remove `function_call`. I only needed to at first because I used the following:

```python
response_message = dict(response.choices[0].message)
if response_message["content"] is None:
    response_message["content"] = ""
```

Without casting to a `dict`, it is sufficient to set `content` to an empty string when it is `None`.
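A self-contained sketch of that in-place approach; a plain dataclass stands in for the SDK's `ChatCompletionMessage` here (my own stand-in, purely so the snippet runs on its own):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Message:
    """Stand-in for ChatCompletionMessage (which is really a pydantic model)."""
    role: str
    content: Optional[str] = None

msg = Message(role="assistant")  # content defaults to None, as in the failing case
if msg.content is None:
    msg.content = ""  # mutate the attribute in place; no dict() cast needed
```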
@unconv The patch is working, although I did need to remove the function call as well, since it was `None` (replacing it with `""` did not work). To make the example work:

```python
response_message = response.choices[0].message
tool_calls = response_message.tool_calls
response_message = dict(response.choices[0].message)
if response_message["content"] is None:
    response_message["content"] = ""
if response_message["function_call"] is None:
    del response_message["function_call"]
```
@bgonzalezfractal Thank you. I solved this problem with your solution.
@unconv - thank you! I was getting stumped on the options.
Commented code (`func = search_api`):

```python
import json

import openai
from duckpy import Client

duckduckgo_client = Client()

# Search API Function - duckpy
def search_api(input):
    # print("duck-duck-go is searching for:", input)
    content = duckduckgo_client.search(input)
    # print("duck-duck-go", content)
    return str(content)

# Functions Dict
func_dict = {
    "search_api": search_api,  # Search Function
}

# OpenAI - Check/Run Functions (tools)
# User Input from Front-End
def run_conversation(user_input):  # OpenAI - Tools/Functions
    # print("incoming_msg", user_input)
    messages = [{"role": "user", "content": user_input}]
    tools = [
        {
            "type": "function",
            "function": {
                "name": "get_online_data",
                "description": "Get real-time data from the web",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "input": {
                            "type": "string",
                            "description": "What is the latest stock price of Apple?",
                        },
                    },
                    "required": ["input"],
                },
            },
        }
    ]
    response = openai.chat.completions.create(
        model="gpt-3.5-turbo-1106",
        messages=messages,
        tools=tools,
        tool_choice="auto",
    )
    response_message = response.choices[0].message
    # print("1st response_message", response_message)
    total_tokens = response.usage.total_tokens
    # If there are no tool calls, the AI response is complete; concatenate the content and total tokens
    if response.choices[0].message.tool_calls is None:
        # Format the output as desired; here it's concatenated as a string
        return response_message.content + " Total tokens used: " + str(total_tokens)
    tool_calls = response_message.tool_calls
    response_message = dict(response.choices[0].message)  # convert to dict to make it mutable
    if response_message["function_call"] is None:  # if there are no function calls, remove the key
        del response_message["function_call"]  # remove the key
    # print("response_message after key removal", response_message)
    # response_message after key removal {'content': None, 'role': 'assistant', 'tool_calls': [ChatCompletionMessageToolCall(id='call_HWJSKeYTBb8TEbYe96RhZOgU', function=Function(arguments='{"input":"latest news on maui"}', name='get_online_data'), type='function')]}
    # Step 2: check if the model wanted to call a function
    if tool_calls:  # if there are function calls
        # Step 3: call the function
        available_functions = {
            "get_online_data": search_api,
        }  # only one function in this example, but you can have multiple
        messages.append(response_message)  # extend conversation with assistant's reply
        # Step 4: send the info for each function call and function response to the model
        for tool_call in tool_calls:
            function_name = tool_call.function.name  # e.g. 'get_online_data'
            function_to_call = available_functions[function_name]  # look up in 'available_functions' (could be more than one function)
            function_args = json.loads(tool_call.function.arguments)  # e.g. '{"input":"latest news on israel"}'
            function_response = function_to_call(**function_args)  # now call the function(s) with the arguments
            messages.append(
                {
                    "tool_call_id": tool_call.id,
                    "role": "tool",
                    "name": function_name,
                    "content": function_response,
                }
            )
        second_response = openai.chat.completions.create(
            model="gpt-3.5-turbo-1106",
            messages=messages,
        )
        print("second_response", second_response)
        # second_response ChatCompletion(id='chatcmpl-8IzEv0nQ0CBj8hkYZEd60GgiAT3m3', choices=[Choice(finish_reason='stop', index=0, message=ChatCompletionMessage(content='The latest news in Dallas includes breaking stories and investigative journalism on various topics such as crime, education, environment, healthcare, politics, and sports. There are also updates on local news, weather, safety, and events in the Dallas area. Additionally, there are news reports about the Dallas Cowboys and other local sports teams. To read more about the latest news in Dallas, you can visit websites such as Dallas News, Dallas Observer, NBC 5 Dallas-Fort Worth, FOX 4 News Dallas-Fort Worth, and others.', role='assistant', function_call=None, tool_calls=None))], created=1699536001, model='gpt-3.5-turbo-1106', object='chat.completion', system_fingerprint='fp_eeff13170a', usage=CompletionUsage(completion_tokens=106, prompt_tokens=2464, total_tokens=2570))
        return second_response
        # Optional: format the second response from the model
        # formatted_answer = second_response.choices[0].message.content + " Total tokens used: " + str(second_response.usage.total_tokens)
```
Is this report for the example defined here? https://platform.openai.com/docs/guides/function-calling/parallel-function-calling
Or is it somewhere else? That example runs successfully without modification for me.
Yes. It was this example
@mikulskibartosz @unconv @cjpark-data @miwiley
Hi!
We're looking into this issue, but have not been able to reproduce it on our end. From your reports, we suspect that this is due to the first API request missing `content` in its response message, which would be a bug. This causes the second API request to fail, since you are just adding the message from the API response of the first request. However, we have not been able to observe this behavior in our API either. Given that we are using the exact same snippet, this is very odd.
Is this something that occurs consistently on your end? If possible, we'd love to see the full output of the FIRST request as well. We appreciate your help in figuring this out!
@enochcheung @mikulskibartosz @cjpark-data @miwiley
I wasn't able to reproduce the problem when running in a virtual environment, so I checked the difference between `pip freeze` in the virtual environment and outside it.
It turns out `pydantic==1.10.12` was the issue.
Running `pip install --upgrade pydantic` solves the problem.
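That environment comparison can be sketched like this (the file names are my own; in practice, run the second freeze from inside the activated virtualenv):

```shell
# Package list from the system interpreter...
python3 -m pip freeze > system-packages.txt
# ...and from the virtual environment (activate it first in practice).
python3 -m pip freeze > venv-packages.txt
# Lines marked < / > are where the environments disagree (e.g. the pydantic pin).
diff system-packages.txt venv-packages.txt || true
```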
Interesting! Thank you for digging into this, this is super useful for us. We'll look into ways to try to be more resilient here.
The failure, the success, and the pydantic upgrade, @unconv @enochcheung:

```
../env/lib/python3.9/site-packages
Name: openai Version: 1.2.0
Name: anyio Version: 3.7.1
Name: distro Version: 1.8.0
Name: httpx Version: 0.25.0
Name: pydantic Version: 1.10.12  <--- BEFORE UPGRADE
Name: tqdm Version: 4.66.1
Name: typing_extensions Version: 4.8.0
```
```python
import openai
from openai import OpenAI
import logging
import json
from duckpy import Client
import re
from bs4 import BeautifulSoup
```
FAIL:

```python
response_message = response.choices[0].message  # <---------
tool_calls = response_message.tool_calls
print("Tools being called", response_message.tool_calls)
```
SUCCESS:

```python
response_message = dict(response.choices[0].message)  # <--- convert to dict
if response_message["function_call"] is None:  # <--- check
    del response_message["function_call"]  # <--- del
```
```
ask_gpt called latest news in maui  <-- my route/module call
User Input: latest news in maui  <-- incoming
Initial Messages: [{'role': 'user', 'content': 'latest news in maui'}]

[FIRST RESPONSE]
Initial API response ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_UW87QII3EMQZMSwY26pQvK8L', function=Function(arguments='{"input":"latest news in Maui"}', name='get_online_data'), type='function')])
Total tokens used 103

[FUNCTION CALL]
Tools being called [ChatCompletionMessageToolCall(id='call_UW87QII3EMQZMSwY26pQvK8L', function=Function(arguments='{"input":"latest news in Maui"}', name='get_online_data'), type='function')]

[ERROR]
Error code: 400 - {'error': {'message': "'content' is a required property - 'messages.1'", 'type': 'invalid_request_error', 'param': None, 'code': None}}
```
```
ask_gpt called latest news in maui  <-- my route/module call
User Input: latest news in maui  <-- incoming
Initial Messages: [{'role': 'user', 'content': 'latest news in maui'}]

[FIRST RESPONSE]
Initial API response ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_PspTCyFADZx5Iiw3Qvr2Nxwo', function=Function(arguments='{"input":"latest news in Maui"}', name='get_online_data'), type='function')])
Total tokens used 103

[FUNCTION CALL]
Tools being called [ChatCompletionMessageToolCall(id='call_PspTCyFADZx5Iiw3Qvr2Nxwo', function=Function(arguments='{"input":"latest news in Maui"}', name='get_online_data'), type='function')]

[SECOND RESPONSE]
second_response ChatCompletion(id='chatcmpl-8JI6I5zb7GHBjnQzjYPWb7j7MjKcI', choices=[Choice(finish_reason='stop', index=0, message=ChatCompletionMessage(content="The latest news in Maui includes updates on the recovery efforts following the devastating wildfires, the announcement of a $150 million fund to help wildfire victims' families and survivors, and ongoing efforts to support affected residents. There are also reports on the impact of the wildfires on the community and the progress in the search for missing individuals. Additionally, there are updates on various relief efforts and initiatives aimed at supporting those affected by the wildfires.", role='assistant', function_call=None, tool_calls=None))], created=1699608502, model='gpt-3.5-turbo-1106', object='chat.completion', system_fingerprint='fp_eeff13170a', usage=CompletionUsage(completion_tokens=84, prompt_tokens=2884, total_tokens=2968))
```
After `pip3 install --upgrade pydantic` (Successfully installed pydantic-2.4.2 pydantic-core-2.10.1), I removed the `dict`/`del` workaround and reverted to the example method:

```python
response_message = response.choices[0].message
tool_calls = response_message.tool_calls
```
```
[LOGS]
ask_gpt called latest news in maui
User Input: latest news in maui
Initial Messages: [{'role': 'user', 'content': 'latest news in maui'}]

[FIRST RESPONSE]
Initial API response ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_oC6CEoCRsAzaqroMgD8cLIQl', function=Function(arguments='{"input":"latest news in Maui"}', name='get_online_data'), type='function')])
Total tokens used 103

[FUNCTION CALL]
Tools being called [ChatCompletionMessageToolCall(id='call_oC6CEoCRsAzaqroMgD8cLIQl', function=Function(arguments='{"input":"latest news in Maui"}', name='get_online_data'), type='function')]

[SECOND RESPONSE]
second_response ChatCompletion(id='chatcmpl-8JIPORfrJXSmH3oHfTmFTasCF6AP4', choices=[Choice(finish_reason='stop', index=0, message=ChatCompletionMessage(content='Some of the latest news in Maui include updates on the recovery efforts following the wildfires, the creation of a fund to help wildfire victims and survivors, contracts awarded for temporary elementary schools, and ongoing updates about the impact and aftermath of the wildfires. For more detailed information, you can visit websites such as Maui Now, KHON2, and the Maui News.', role='assistant', function_call=None, tool_calls=None))], created=1699609686, model='gpt-3.5-turbo-1106', object='chat.completion', system_fingerprint='fp_eeff13170a', usage=CompletionUsage(completion_tokens=71, prompt_tokens=2908, total_tokens=2979))
```
Thanks for the details and for figuring out that the cause was Pydantic v1! I've identified the difference in behaviour internally and should have a fix out later today.
This will be fixed in the next release! https://github.com/openai/openai-python/pull/776/files
> @enochcheung @mikulskibartosz @cjpark-data @miwiley
> I wasn't able to reproduce the problem when running in a virtual environment, so I checked the difference between `pip freeze` in the virtual environment and outside it. It turns out `pydantic==1.10.12` was the issue. Running `pip install --upgrade pydantic` solves the problem.
This is exactly the solution for me! 🚀 🔥
Fixed in https://github.com/openai/openai-python/releases/tag/v1.2.3
(upgrading pydantic also works)
> Fixed in https://github.com/openai/openai-python/releases/tag/v1.2.3 (upgrading pydantic also works)
Outstanding! Cheers.
Expected behavior
The "Example with one function called in parallel" code from the documentation should correctly show how to use the function call feature.
Actual behavior
I get

```
BadRequestError: Error code: 400 - {'error': {'message': "'content' is a required property - 'messages.1'", 'type': 'invalid_request_error', 'param': None, 'code': None}}
```

while running the code. Stack trace:
Versions
OpenAI SDK: 1.1.1
Python: 3.10.12