rgbkrk opened 8 months ago
Thanks for raising this @rgbkrk, and for the detailed example! We made this design choice thinking it might be better this way, especially in those scenarios where you might want to `exec()` the code. Though you are right that it would be nice to also have the outputs OAI-compatible. Will try to get to it tomorrow!
Greetings @ShishirPatil! Any update on this?
I'm also curious about this design decision, if you care to explain the thought process. I know I'm not too keen on letting an LLM run arbitrary code via `exec()`, and the `function_call`-like response makes it easy to control which functions are being called, validate arguments, etc. without adding too much coding overhead. I'm just wondering if there's some aspect of this I'm missing, or if there's another approach you'd suggest.
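To make concrete what I mean by controlling and validating calls without `exec()`, here's a minimal sketch that dispatches an OpenAI-style `function_call` payload against an explicit allow-list (the tool name and its stub implementation are made up for illustration):

```python
import json

# Hypothetical registry of allowed tools -- the only functions the model
# is permitted to trigger. The weather stub just echoes its arguments.
TOOLS = {
    "get_current_weather": lambda location, unit="celsius": f"22 {unit} in {location}",
}

def dispatch(function_call: dict) -> str:
    """Run an OpenAI-style function_call dict without exec()."""
    name = function_call["name"]
    if name not in TOOLS:  # reject anything outside the allow-list
        raise ValueError(f"unknown function: {name}")
    args = json.loads(function_call["arguments"])  # arguments arrive as a JSON string
    return TOOLS[name](**args)

result = dispatch({
    "name": "get_current_weather",
    "arguments": '{"location": "Berlin"}',
})
```

Anything the model names that isn't in the registry is rejected before any code runs, which is exactly the control that a raw string destined for `exec()` doesn't give you.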
Hello @ShishirPatil, any news about this update? Thanks!
Importantly for me, I want the functions to be able to run in any language, because there are times when I do server-side Python, Rust, or JavaScript. The agents are basically the same across environments, and I give them the same description of tools. The implementation changes based on the environment, but the manifest remains the same.
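For example, a single language-agnostic manifest (sketched here in the OpenAI functions JSON-schema style; the tool itself is hypothetical) that each runtime implements in its own way:

```python
# One tool manifest shared across Python, Rust, and JavaScript agents.
# Only the schema is shared; each environment binds the name to its own
# implementation.
manifest = [{
    "name": "get_current_weather",
    "description": "Fetch current weather for a location.",
    "parameters": {
        "type": "object",
        "properties": {
            "location": {"type": "string"},
            "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
        },
        "required": ["location"],
    },
}]
```

A structured `function_call` response keys cleanly into a manifest like this; a blob of Python in `content` only makes sense to a Python runtime.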
Any updates on this? Or maybe recommendations for alternative approaches?
I definitely appreciate this project for what it offers, but the `function_call`-like response just seems like the more obvious approach to take, so I'm guessing there's a project somewhere that is comparable to Gorilla but does it this way.
Alternatively, I would appreciate hearing the reasoning for the approach this project has taken instead. I'd assume there's a technical reason for it, so maybe an explanation would change my mind on this approach altogether.
For the moment, I recommend using the functionary model: https://github.com/MeetKai/functionary
You can run it on Linux GPUs with vLLM, and on Macs with the llama.cpp inference setup.
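A rough sketch of what querying it through vLLM's OpenAI-compatible endpoint looks like; the model id, local URL, and tool schema below are placeholders I haven't verified, so treat this as shape rather than gospel:

```python
import json
import urllib.request

# Hypothetical request against a locally served functionary model.
payload = {
    "model": "meetkai/functionary-small-v2.2",  # placeholder model id
    "messages": [{"role": "user", "content": "What's the weather in Berlin?"}],
    "functions": [{
        "name": "get_current_weather",
        "parameters": {
            "type": "object",
            "properties": {"location": {"type": "string"}},
            "required": ["location"],
        },
    }],
}

request = urllib.request.Request(
    "http://localhost:8000/v1/chat/completions",  # assumed local server address
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# response = urllib.request.urlopen(request)  # uncomment with a server running
```

The point is that the response comes back with a structured `function_call`, so the same client code works against OpenAI or the local model.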
Is the feature request related to a problem?
With OpenAI, the function calling messages look like this:
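For illustration, an assistant reply in the OpenAI functions format carries the call as structured fields rather than prose (the values here are made up):

```python
# Sketch of an OpenAI-style assistant message with a function_call.
# Note that "arguments" is itself a JSON-encoded string.
message = {
    "role": "assistant",
    "content": None,
    "function_call": {
        "name": "get_current_weather",
        "arguments": '{"location": "Boston", "unit": "celsius"}',
    },
}
```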
Gorilla puts the function call in `content` to be parsed instead.

Describe the solution you'd like
Instead of getting back an assistant message with content I have to parse, I'd love to get the `function_call` object back with `name` and `arguments` to work with.

Additional context