Closed wmtdru8xip closed 3 weeks ago
@anmolsingh95 @JelleZijlstra
OpenAI's documentation at https://platform.openai.com/docs/api-reference/chat/create and https://platform.openai.com/docs/guides/function-calling provides much valuable information.
Another minor issue: any valid JSON Schema should be accepted as tools[].function.parameters, not exclusively object schemas, which are currently the only type Poe's API supports. For example, this tool definition uses a string schema:
tools_dict_list = [
    {
        "type": "function",
        "function": {
            "name": "evaluate_expression",
            "description": "Evaluate numeric expression",
            # A non-object schema: valid JSON Schema, but rejected by Poe's API
            "parameters": {
                "type": "string",
                "description": "The expression",
            },
        },
    },
]
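To illustrate what accepting non-object schemas would mean in practice: with an object schema the model's tool-call arguments decode to a JSON object, while with a string schema they could simply be a bare JSON string, with no wrapper object needed. The payloads below are hypothetical examples of each shape:

```python
import json

# With an object schema, function.arguments arrives as a JSON object:
obj_args = json.loads('{"expression": "142 * 4 + 294"}')
expression_from_obj = obj_args["expression"]

# With a string schema (as in the tool above), the arguments could
# simply be a bare JSON string, no wrapper object required:
expression_from_str = json.loads('"142 * 4 + 294"')
```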
Hello @wmtdru8xip
Thanks for the detailed feedback post. My read of OpenAI's documentation was that "tool_choice" defaults to "auto", but I just pushed a change that is explicit about it.
Re:
"Using the underlying OpenAI API directly, the model operates in a sequential and context-aware manner, calling functions as needed and presenting accurate data to the user",
Could you please provide the equivalent code that uses OpenAI's library?
import json

from asteval import Interpreter
from openai import OpenAI

client = OpenAI()
aeval = Interpreter()

magic = ["142 * 4 + 294", "viudz117trlwubo", "ct89zhrrv6x0xmc"]


def get_magic_data_index(index, expression=None):
    return json.dumps({"magic": magic[index]})


def evaluate_expression(expression, index=None):
    # Tool message content must be a string, so stringify the result
    return str(aeval(expression))


def run_conversation(prompt):
    # Tool definitions
    tools = [
        {
            "type": "function",
            "function": {
                "name": "get_magic_data_index",
                "description": "Get the Magic Data numbered by an index",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "index": {
                            "type": "number",
                            "description": "Index of the Magic Data to fetch",
                        }
                    },
                    "required": ["index"],
                },
            },
        },
        {
            "type": "function",
            "function": {
                "name": "evaluate_expression",
                "description": "Evaluate numeric expression",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "expression": {
                            "type": "string",
                            "description": "The expression",
                        }
                    },
                    "required": ["expression"],
                },
            },
        },
    ]
    # Message array
    messages = [{"role": "user", "content": prompt}]
    # Loop until the message has stopped naturally
    while True:
        # Begin or continue message generation
        response = client.chat.completions.create(
            model="gpt-4-1106-preview",
            messages=messages,
            tools=tools,
            tool_choice="auto",
        )
        response_message = response.choices[0].message
        messages.append(response_message)
        # Stop when the model has completed its message
        if response.choices[0].finish_reason == "stop":
            break
        tool_calls = response_message.tool_calls
        # When tools are called
        if tool_calls:
            available_functions = {
                "get_magic_data_index": get_magic_data_index,
                "evaluate_expression": evaluate_expression,
            }
            for tool_call in tool_calls:
                # Get tool results
                function_name = tool_call.function.name
                function_to_call = available_functions[function_name]
                function_args = json.loads(tool_call.function.arguments)
                function_response = function_to_call(
                    index=function_args.get("index"),
                    expression=function_args.get("expression"),
                )
                # Send results back to the model
                messages.append(
                    {
                        "tool_call_id": tool_call.id,
                        "role": "tool",
                        "name": function_name,
                        "content": function_response,
                    }
                )
    # The completed conversation
    return messages


prompt = """
1. Please tell me the Magic Data #0.
2. It is going to be a mathematical expression. Tell me how much it is.
3. If it is greater than 1000, tell me the Magic Data #1.
4. Otherwise, tell me the Magic Data #2.
"""
print(run_conversation(prompt))
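For reference, the tool sequence the model is expected to produce for that prompt can be traced locally, with no API calls. This is a sketch of the expected control flow only; Python's eval stands in for asteval so the snippet stays dependency-free:

```python
import json

magic = ["142 * 4 + 294", "viudz117trlwubo", "ct89zhrrv6x0xmc"]

def get_magic_data_index(index):
    return json.dumps({"magic": magic[index]})

# Step 1: fetch Magic Data #0
expr = json.loads(get_magic_data_index(0))["magic"]

# Step 2: evaluate the expression (eval stands in for asteval here)
value = eval(expr)  # 142 * 4 + 294 == 862

# Steps 3-4: branch on the result
final = json.loads(get_magic_data_index(1 if value > 1000 else 2))["magic"]
```

Since 862 is not greater than 1000, the model should finish by presenting Magic Data #2 ("ct89zhrrv6x0xmc"), which requires a second get_magic_data_index call late in the conversation.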
I just experimented with Poe's API, and now it seems that the model can indeed choose the functions to call; however, subsequent calls are still blocked.
@anmolsingh95 @JelleZijlstra
Hi @wmtdru8xip. In the code example you gave, it seems like you are managing the execution of functions yourself rather than having it handled by the OpenAI API. In fastapi_poe, stream_request offers a default implementation that is supposed to be a reasonable and simple default. If you want more control, you can use stream_request_base, where you are responsible for parsing the OpenAI response and calling the functions manually (as your code example does).
You can basically copy/paste this code and modify it according to your needs: https://github.com/poe-platform/fastapi_poe/blob/main/src/fastapi_poe/client.py#L310
Please let me know if you think I'm missing something.
Hi @wmtdru8xip. This has potentially been fixed in the latest release (0.0.36). Can you please give it a shot now: https://creator.poe.com/docs/using-openai-function-calling
Issue Summary: The current implementation of function calling within Poe's API appears to be disorganized and overly restrictive. The expectation from a developer's perspective is for the model to have the discretion to determine the sequence and timing of function calls in response to the task it is performing. This dynamic approach is well-supported by OpenAI's API, setting a precedent for how such interactions should typically function.
Upon inspecting the API behavior more closely, I've observed the following issues:
Preemptive Function Calls: The API module appears to force the language model to call all provided functions at the onset of processing each message. This rigid approach negates the model's ability to decide when each function should be called based on the context of the conversation.
Restriction of Later Calls: Moreover, once the initial call is made, the model is restricted from invoking any function calls at a later stage in the message processing. This limitation is counterintuitive to the expected behavior of a conversational model that may require additional information as the dialogue progresses.
Demonstration of the Fault: Below is a simple example that illustrates the issue:
Using the underlying OpenAI API directly, the model operates in a sequential and context-aware manner, calling functions as needed and presenting accurate data to the user:

- It calls get_magic_data_index when needed and presents the retrieved expression to the user.
- It evaluates the expression with evaluate_expression.
- It calls get_magic_data_index again to retrieve and present the corresponding magic data.

Conversely, the Poe API induces a sequence of calls regardless of necessity, leading to the model fabricating data to fulfill the premature function call and ultimately resulting in an API crash when an additional call is attempted:

- It calls get_magic_data_index and evaluate_expression regardless of context.
- Subsequent calls to get_magic_data_index result in an API crash, halting the process.

Suggested Improvements: Given these findings, I would recommend the following enhancements to align Poe's API functionality with the expected dynamic behavior:
Revise Function Invocation Logic: Reconfigure the API to allow the model to invoke functions dynamically, as the conversation context requires. This could be achieved by adjusting the server-side implementation to call OpenAI's API with the tool_choice parameter set to "auto" rather than deliberately forcing a list of all available functions upon the model.

Enhance Error Handling: Implement more robust error handling to prevent crashes when the model attempts to call functions based on the conversational flow.
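For the first suggestion, the two relevant shapes of the tool_choice parameter in OpenAI's chat completions API are shown below (the function name is taken from the example in this thread):

```python
# Lets the model decide whether, when, and which tools to call
# (the default behavior requested above):
auto_choice = "auto"

# Forces the model to call one specific function on this request,
# which is the behavior to avoid applying to every function at once:
forced_choice = {
    "type": "function",
    "function": {"name": "get_magic_data_index"},
}

# Either value is passed as the tool_choice argument of
# client.chat.completions.create(...).
```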