Closed: AmgadHasan closed this issue 5 months ago
can you link docs and a more complete example?
@jxnl Sure. Just as a heads up, there are two versions of the Gemini API: the Google AI dev API and the one on GCP.
Here are the docs for function calling using the Google AI dev API:
https://ai.google.dev/tutorials/function_calling_python_quickstart
A quick example (with the values for `a` and `b` taken from the response shown below):

```python
import google.generativeai as genai
import google.ai.generativelanguage as glm

genai.configure(api_key=GOOGLE_API_KEY)  # assumes GOOGLE_API_KEY is set

calculator = glm.Tool(
    function_declarations=[
        glm.FunctionDeclaration(
            name='add',
            description="Returns the sum of two numbers.",
            parameters=glm.Schema(
                type=glm.Type.OBJECT,
                properties={
                    'a': glm.Schema(type=glm.Type.NUMBER),
                    'b': glm.Schema(type=glm.Type.NUMBER)
                },
                required=['a', 'b']
            )
        ),
        glm.FunctionDeclaration(
            name='multiply',
            description="Returns the product of two numbers.",
            parameters=glm.Schema(
                type=glm.Type.OBJECT,
                properties={
                    'a': glm.Schema(type=glm.Type.NUMBER),
                    'b': glm.Schema(type=glm.Type.NUMBER)
                },
                required=['a', 'b']
            )
        )
    ]
)

model = genai.GenerativeModel('gemini-pro', tools=[calculator])
chat = model.start_chat()

a, b = 2312371, 234234
response = chat.send_message(f"What's {a} X {b} ?")
```
Inspecting the response shows the model chose the `multiply` function:

```
>>> response.candidates
[index: 0
content {
  parts {
    function_call {
      name: "multiply"
      args {
        fields {
          key: "b"
          value {
            number_value: 234234
          }
        }
        fields {
          key: "a"
          value {
            number_value: 2312371
          }
        }
      }
    }
  }
  role: "model"
}
finish_reason: STOP
]
```
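Executing the call is left to you: you map the returned `function_call` back onto local Python functions. A minimal sketch of such a dispatcher; the dict literal below just mimics the shape of the protobuf response above (name plus args), whereas in real code you would read it out of `response.candidates[0].content.parts[0].function_call`:

```python
# Local implementations of the declared tools.
def add(a: float, b: float) -> float:
    """Returns the sum of two numbers."""
    return a + b

def multiply(a: float, b: float) -> float:
    """Returns the product of two numbers."""
    return a * b

# Registry mapping declared tool names to the local functions.
TOOLS = {"add": add, "multiply": multiply}

def dispatch(function_call: dict) -> float:
    """Look up the named tool and invoke it with the model-supplied args."""
    fn = TOOLS[function_call["name"]]
    return fn(**function_call["args"])

# Shape mimics the protobuf response above.
result = dispatch({"name": "multiply", "args": {"a": 2312371, "b": 234234}})
print(result)
```

In a real loop you would then send `result` back to the model as a function response message.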
lol WHY WOULD THEY DO THIS
I think they want indie developers to hack with their Google AI dev API while also offering GCP to their enterprise customers.
Just like OpenAI and Azure.
@jxnl
So is there any hope for this?
Def hope!
The nuance is that the API is slightly different: `.start_chat()` and `.send_message()`.
So I want to think about how to give a good matching experience.
Yes, it's definitely annoying. You have to specify the tools when you create the model object, not when you actually send the prompt.
Like, why even do this? LLMs are stateless; there's no point in defining the functions when creating the model object, unless they're doing something completely different from OpenAI.
Okay, so after taking a second deep dive into their not-so-great docs, I think there's a different way of doing this:

```python
from google.ai import generativelanguage as glm

# Create a low-level client
client = glm.GenerativeServiceClient(
    client_options={'api_key': GOOGLE_API_KEY}
)

# Create function tools
my_tool = glm.Tool(
    function_declarations=[
        glm.FunctionDeclaration(
            name='add',
            description="Returns the sum of two numbers.",
            parameters=glm.Schema(
                type=glm.Type.OBJECT,
                properties={
                    'a': glm.Schema(type=glm.Type.NUMBER),
                    'b': glm.Schema(type=glm.Type.NUMBER)
                },
                required=['a', 'b']
            )
        ),
        glm.FunctionDeclaration(
            name='multiply',
            description="Returns the product of two numbers.",
            parameters=glm.Schema(
                type=glm.Type.OBJECT,
                properties={
                    'a': glm.Schema(type=glm.Type.NUMBER),
                    'b': glm.Schema(type=glm.Type.NUMBER)
                },
                required=['a', 'b']
            )
        )
    ]
)

# Tools are passed per request here, not bound to a model object
request = {
    "model": 'models/gemini-1.0-pro-001',
    "contents": [{"parts": [{"text": "Send an email to my friend Oliver wishing them a happy birthday"}], "role": "user"}],
    "tools": [my_tool],
}
response = client.generate_content(request=request)
```
This is somewhat similar to the conventional way of doing function calling using OpenAI's client. @jxnl
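Since the request dict above maps fairly directly onto OpenAI's tool format, a translation layer could be quite small. A hypothetical, untested sketch of converting an OpenAI-style `tools` list into the Gemini dict shape; it assumes (untested) that the dict-based Gemini request accepts plain JSON-Schema-like dicts for `parameters`, whereas the proto-based path above would need a mapping to `glm.Type.*` enums:

```python
# Hypothetical translator: OpenAI-style tool specs -> Gemini Tool dict.
def openai_tools_to_gemini(tools: list) -> dict:
    """Collapse a list of OpenAI `tools` entries into one Gemini Tool dict."""
    declarations = []
    for tool in tools:
        fn = tool["function"]
        declarations.append({
            "name": fn["name"],
            "description": fn.get("description", ""),
            # Assumption: JSON Schema passes through unchanged.
            "parameters": fn["parameters"],
        })
    return {"function_declarations": declarations}

openai_tools = [{
    "type": "function",
    "function": {
        "name": "add",
        "description": "Returns the sum of two numbers.",
        "parameters": {
            "type": "object",
            "properties": {"a": {"type": "number"}, "b": {"type": "number"}},
            "required": ["a", "b"],
        },
    },
}]

gemini_tool = openai_tools_to_gemini(openai_tools)
```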
@AmgadHasan I maintain https://github.com/braintrustdata/braintrust-proxy, which allows you to access Gemini models through the OpenAI format. We haven't yet translated the Gemini tool-call syntax over, but based on your code snippets, my guess is that it's just sending JSON Schema and should be easy to do.
Want to collaborate on that? Then you could just set OPENAI_BASE_URL to "https://braintrustproxy.com/v1" and it'll work out of the box with instructor.
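On the client side, the proxy route would just be configuration. A sketch, assuming the proxy URL from this thread and a standard OpenAI-format client (the `instructor.patch` usage in the comment is illustrative, not verified against a specific instructor version):

```python
import os

# Point any OpenAI-format client at the proxy (URL from the comment above).
os.environ["OPENAI_BASE_URL"] = "https://braintrustproxy.com/v1"
os.environ["OPENAI_API_KEY"] = "<your-key>"  # placeholder, not a real key

# Then, illustratively, instructor would be used unchanged, e.g.:
#   import instructor, openai
#   client = instructor.patch(openai.OpenAI())
```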
what is the current state on this? happy to contribute!
No work done yet; would love a contrib!
Hi!
What's the update on this? :)
I won't be working on this anytime soon, so it'll likely have to come from another contributor, unless you go through litellm or the braintrust proxy.
Can we get Gemini to implement itself here? I'm only half-joking, BTW, lol. I read it's got a 1,000,000-token window, so it can probably read the whole project in one go, and it understands Elixir...
The new Gemini API introduced support for function calling: you define a set of functions with their expected arguments and pass them in the `tools` argument.
Can we add Gemini support to instructor so it can be used with Gemini Pro models and, later, Gemini Ultra?