guardrails-ai / guardrails

Adding guardrails to large language models.
https://www.guardrailsai.com/docs
Apache License 2.0

How to use Gemini model instead of OpenAI #985

Open Revanthraja opened 1 month ago

Revanthraja commented 1 month ago

Description [Add a description of the feature]

Why is this needed [If you have a concrete use case, add details here.]

Implementation details [If known, describe how this change should be implemented in the codebase]

End result [How should this feature be used?]

dtam commented 1 month ago

Hi @Revanthraja, you should be able to use Gemini by setting a GEMINI_API_KEY environment variable and specifying a supported model parameter:

import os

from guardrails import Guard

# Set your Gemini API key (left blank here as a placeholder)
os.environ['GEMINI_API_KEY'] = ""

guard = Guard()

# The "gemini/" prefix routes the call to Gemini
result = guard(
    messages=[{"role": "user", "content": "How many moons does Jupiter have?"}],
    model="gemini/gemini-pro",
)

print(result.validated_output)

Please see the documentation here for more details.
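For reference, a minimal sketch of attaching a validator to the same Gemini-backed Guard; it assumes the RegexMatch validator has been installed from the Guardrails Hub (e.g. guardrails hub install hub://guardrails/regex_match):

import os

from guardrails import Guard
from guardrails.hub import RegexMatch  # assumed installed from the Hub

os.environ['GEMINI_API_KEY'] = ""

# Fail loudly unless the answer contains at least one digit
guard = Guard().use(
    RegexMatch(regex=r"\d+", match_type="search", on_fail="exception")
)

result = guard(
    messages=[{"role": "user", "content": "How many moons does Jupiter have?"}],
    model="gemini/gemini-pro",
)

print(result.validated_output)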

rohitpk commented 1 month ago

Alternatively, if you are using Gemini models via the Vertex AI API, you can use LiteLLM.

Here is a snippet that worked for me:

import vertexai
from vertexai.generative_models import (
    GenerationConfig,
    GenerativeModel,
    Tool,
    grounding,
)

project_id = "your-project-id"  # replace with your GCP project ID
vertexai.init(project=project_id, location="us-central1")

model = GenerativeModel("gemini-1.5-flash-001")

# Use Google Search for grounding
tool = Tool.from_google_search_retrieval(
    grounding.GoogleSearchRetrieval(disable_attribution=False)
)

prompt = "When is the next total solar eclipse in the US?"
response = model.generate_content(
    prompt,
    tools=[tool],
    generation_config=GenerationConfig(
        temperature=0.0,
    ),
)

print(response)

The same parameters can be passed into the Guard call and it should work; see the sketch after the link below.

https://docs.litellm.ai/docs/providers/vertex
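
A minimal sketch of routing the same Vertex AI model through a Guard via LiteLLM. The vertex_project and vertex_location keyword arguments are LiteLLM's Vertex AI parameters, and the sketch assumes guard() forwards extra keyword arguments to the underlying LiteLLM call:

from guardrails import Guard

guard = Guard()

# The "vertex_ai/" prefix tells LiteLLM to route the call through Vertex AI
result = guard(
    messages=[{"role": "user", "content": "When is the next total solar eclipse in the US?"}],
    model="vertex_ai/gemini-1.5-flash-001",
    vertex_project="your-project-id",  # placeholder: your GCP project ID
    vertex_location="us-central1",
)

print(result.validated_output)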