Came here to create the exact same issue 😄
Here's the API doc: https://platform.openai.com/docs/guides/gpt/function-calling
Yes, we have started testing. One advantage I can see is the structured JSON output from OpenAI after it processes the function's response. It could help reduce SuperAGI's base prompt.
{
  "id": "chatcmpl-123",
  ...
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "The weather in Boston is currently sunny with a temperature of 22 degrees Celsius."
    },
    "finish_reason": "stop"
  }]
}
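For reference, the request side looks roughly like this with the pre-1.0 openai Python SDK; the get_current_weather schema below is just an illustrative placeholder, not something that exists in SuperAGI:

import openai

# Illustrative function schema; the model decides whether to call it.
functions = [{
    "name": "get_current_weather",
    "description": "Get the current weather in a given location",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {"type": "string", "description": "City name, e.g. Boston"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["location"],
    },
}]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",
    messages=[{"role": "user", "content": "What's the weather in Boston?"}],
    functions=functions,
    function_call="auto",  # let the model choose whether to call a function
)
message = response["choices"][0]["message"]
if message.get("function_call"):
    # The model wants us to run the function; arguments arrive as a JSON string.
    print(message["function_call"]["name"], message["function_call"]["arguments"])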
At the same time, this is OpenAI-specific, and SuperAGI has to support open LLMs too. But yes, we are figuring out the architecture to integrate functions and make them available. You can also try and take a stab at implementing this.
@neelayan7 I'd like to give it a go if you don't mind
Absolutely. Looking forward!
Okay, it's still a work in progress; I'm encountering some obstacles and could use some help -> https://github.com/iskandarreza/SuperAGI/tree/openai-api-use-function-call
Basically, I'm not sure how to accurately count the function tokens. Yes, it appears that the functions defined in the functions array count towards token usage.
superagi-celery-1 | [2023-06-15 20:27:53,579: WARNING/ForkPoolWorker-7] ==================function_tokens======================
superagi-celery-1 | [2023-06-15 20:27:53,581: WARNING/ForkPoolWorker-7] 198
superagi-celery-1 | [2023-06-15 20:27:54,594: INFO/ForkPoolWorker-7] error_code=context_length_exceeded error_message="This model's maximum context length is 4097 tokens. However, you requested 4156 tokens (136 in the messages, 125 in the functions, and 3895 in the completion). Please reduce the length of the messages, functions, or completion." error_param=messages error_type=invalid_request_error message='OpenAI API error received' stream_error=False
That's with two test functions that add decimal or hexadecimal numbers together, just to see what happens. Note the breakdown in the error: 136 (messages) + 125 (functions) + 3895 (completion) = 4156, over the 4097 limit, so the functions are clearly billed separately from the messages. I did a lazy thing and counted the tokens with
import json

# Naive estimate: serialize the whole functions array and count the text tokens
function_tokens = TokenCounter.count_text_tokens(json.dumps(test_functions))
print('==================function_tokens======================')
print(function_tokens)
As you can see, it's not an accurate way to count; it overestimates (198 estimated vs. the 125 the API reported, which isn't necessarily a bad thing), but I feel like there's a better way to do it. Seeking input from others.
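One way I could sanity-check any estimator: make a minimal real call and read back the prompt_tokens the API reports, using a messages-only baseline to isolate the functions' cost. A rough debugging sketch, reusing test_functions and TokenCounter from my branch above (the prompt_tokens helper is just something I made up for this, and it assumes the pre-1.0 openai SDK):

import json
import openai

def prompt_tokens(messages, functions=None):
    # Make a minimal real call and read back how many prompt
    # tokens the API actually charged for it.
    kwargs = {"model": "gpt-3.5-turbo-0613", "messages": messages, "max_tokens": 1}
    if functions:
        kwargs["functions"] = functions
    return openai.ChatCompletion.create(**kwargs)["usage"]["prompt_tokens"]

messages = [{"role": "user", "content": "ping"}]
# prompt_tokens always includes the messages themselves, so the functions'
# true cost is the difference between the two measurements.
baseline = prompt_tokens(messages)
with_funcs = prompt_tokens(messages, test_functions)
print("actual function tokens:", with_funcs - baseline)
print("naive estimate:", TokenCounter.count_text_tokens(json.dumps(test_functions)))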
CC: @neelayan7
Any timeline for this 👀
I think the code below can calculate the real token count with function calling, though it seems like an ad hoc solution. I'd like to give it a try; I can't wait to use function calling in this app.
refs: https://gist.github.com/CGamesPlay/dd4f108f27e2eec145eedf5c717318f5
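If I read that gist right, the trick is that OpenAI appears to inject the functions into the prompt as a TypeScript-style declaration rather than as raw JSON, so you tokenize that rendered form with tiktoken instead of json.dumps. A rough sketch of the idea; the exact formatting and any per-function overheads are reverse-engineered guesses, not documented behavior:

import tiktoken

def estimate_function_tokens(functions, model="gpt-3.5-turbo-0613"):
    # Guess: render each function roughly the way the model seems to see it
    # (a TypeScript-like namespace), then count tokens with tiktoken.
    enc = tiktoken.encoding_for_model(model)
    text = "namespace functions {\n"
    for fn in functions:
        text += f"// {fn.get('description', '')}\n"
        params = fn.get("parameters", {}).get("properties", {})
        args = ", ".join(
            f"{name}?: {spec.get('type', 'any')}" for name, spec in params.items()
        )
        text += f"type {fn['name']} = (_: {{{args}}}) => any;\n"
    text += "} // namespace functions"
    return len(enc.encode(text))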
FYI: Function calling is now available on gpt-4-0613 and gpt-3.5-turbo-0613, which should make tools a lot more reliable:
https://openai.com/blog/function-calling-and-other-api-updates