Closed: rkistner closed this 1 year ago
Thank you for the first PR!
This work is great!
As you mentioned, I totally agree that changing `functions` to `tools` is needed.
I'm trying this and it works well for most conversations, but I often get the error below when I try to control multiple devices, like turning off the lights in the dressroom, kitchen, and bedroom:
> Sorry, I had a problem talking to OpenAI: The server had an error processing your request. Sorry about that! You can retry your request, or contact us through our help center at help.openai.com if you keep seeing this error. (Please include the request ID 42671951df3e4e55xxxxxxxxx in your email.)
```json
{
  "error": {
    "message": "The server had an error processing your request. Sorry about that! You can retry your request, or contact us through our help center at help.openai.com if you keep seeing this error. (Please include the request ID 42671951df3e4e55xxxxxxxxx in your email.)",
    "type": "server_error",
    "param": null,
    "code": null
  }
}
```
I'm not quite sure if this is temporary or a problem with the `gpt-3.5-turbo-1106` model. Let me try this for a couple more days.
One concern is OpenAI-compatible servers like LocalAI (see https://github.com/jekalmin/extended_openai_conversation/issues/17). They will eventually be compatible with OpenAI (I believe), but since `tools` just came out, they will probably need some time to catch up (I'm not sure). If that's the case, should we add an option to choose which to use (`functions` or `tools`) in the meantime?
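If such an option were added, it could be as small as building different request kwargs from the same function specs. A minimal sketch, assuming a boolean config flag; the helper names (`to_tool_spec`, `build_request_kwargs`) are illustrative, not from this PR:

```python
def to_tool_spec(function_spec: dict) -> dict:
    """Wrap a legacy function spec in the "tools" shape OpenAI expects."""
    return {"type": "function", "function": function_spec}


def build_request_kwargs(functions: list, use_tools: bool) -> dict:
    """Build the extra kwargs for the chat completion request.

    use_tools=False keeps the legacy "functions" path for
    OpenAI-compatible servers that don't support tools yet.
    """
    if use_tools:
        return {
            "tools": [to_tool_spec(f) for f in functions],
            "tool_choice": "auto",
        }
    return {"functions": functions, "function_call": "auto"}
```

The same function list then serves both code paths, so nothing else in the integration has to change based on the flag.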
The other concern is parallel function calling (`execute_service`) vs `execute_services`. I'm still not certain how different the result would be; I worry it might only consume more tokens if the result is almost the same. I'm looking forward to seeing such a case.
Other than those two concerns, this is really great! I'm looking forward to hearing your thoughts on them.
> One concern is OpenAI-compatible servers like LocalAI (see https://github.com/jekalmin/extended_openai_conversation/issues/17).
That's a valid concern.
> I often get an error below when I want to control multiple devices
I also had the same "The server had an error processing your request" error. It appears to be more common when the assistant makes multiple tool requests in the same message, which kind of defeats the purpose of tools if it can't do that.
> Parallel function calling (`execute_service`) vs `execute_services`
Yeah, it's mostly the same. I think the main difference right now is that it can combine multiple different function calls into one, instead of only executing services. I didn't check the number of tokens used, but it should be similar.
I also initially thought tools vs functions might help with the "infinite function loop" issue. It turns out it helps with validation (it checks that requests and responses match), but I think the main issue was that the `{'role': 'function'}` message was not included in the history sent. This PR contains a bit of a refactoring of message handling, but a simpler fix for just that is here.
Overall, tools might not be ready for proper use yet. Feel free to leave this PR as a starting point if you want to go that route later. Fixing the messages history and error handling for service calls would probably have a bigger impact.
> One concern is OpenAI-compatible servers like LocalAI (see https://github.com/jekalmin/extended_openai_conversation/issues/17).
Maybe bumping the version up to `0.1.x` will be enough: keep the `0.0.x` versions for functions and `0.1.x` for tools.
> I also had the same "The server had an error processing your request" error
I will keep an eye on tools and see if this is resolved soon, or look for a way to solve it.
> I think the main difference right now is that it can combine multiple different function calls into one
Yeah, if I were changing `functions` to `tools`, I would have done the same, changing `execute_services` to `execute_service` to see if it works better. I like the way you fixed it, reusing `execute_service_single` for controlling multiple entities. As you said, this is a really good start!
> I also initially thought tools vs functions may help with the "infinite function loop" issue
Hope this is fixed.
> I think the main issue was that the `{'role': 'function'}` message was not included in the history sent
As you said, I just realized that the function-calling example in the documentation adds the message below to the history. Hopefully this fixes the "infinite function loop" issue.
```python
{
    "tool_call_id": tool_call.id,
    "role": "tool",
    "name": function_name,
    "content": function_response,
}
```
As soon as the "The server had an error processing your request" error is fixed, I will merge this and release a `0.1.x` version with a little change to the message history. Thanks for your work!
If you don't mind, I want to merge this into a `0.1.x` branch and release a `0.1.0-beta1` version, so I and others can try it conveniently via HACS.
Sure, you're welcome to do that.
Function calls are deprecated; "tools" are now recommended. It's basically the same thing, but it allows multiple tool calls in a single message.
This now changes the default function to take a single service as an argument, preferring to split requests into multiple separate tool calls instead. The old form is still supported for existing templates.
I've found tool calls have far fewer issues with repeating the same call multiple times. This may have been related to the message history previously not including the correct sequence of function call and function output, but I didn't confirm it (tool calls are stricter in requiring exact request -> response sequences).
Additionally, I've changed the output to handle additional validation errors and return the error text to OpenAI.
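Returning the error text can be as simple as wrapping the tool handler and serializing the exception instead of letting it stall the conversation. A minimal sketch of that idea, not the PR's actual implementation; `run_tool_safely` is a hypothetical name, and in Home Assistant the real exceptions would be things like `HomeAssistantError` rather than a bare `Exception`:

```python
import json


def run_tool_safely(handler, arguments):
    """Run a tool handler and return its result as JSON; on failure,
    return the error text so the model can see what went wrong and
    retry with corrected arguments."""
    try:
        return json.dumps(handler(arguments))
    except Exception as err:
        return json.dumps({"error": type(err).__name__, "message": str(err)})
```

The returned string goes into the `content` of the `role: "tool"` message either way, so the conversation always gets a well-formed tool response.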
I also ran into some cases where a single input message resulted in multiple outputs. Something like this:
I have not been able to reproduce the above consistently, but if it does happen, the output for multiple messages will now be returned to the user instead of leaving gaps in the conversation.
See: