Closed: tgyy1995 closed this issue 6 months ago.
I can use Gemini in this way:

from litellm import completion
import os

# auth: run 'gcloud auth application-default login' first
os.environ["VERTEX_PROJECT"] = "hardy-device-386718"
os.environ["VERTEX_LOCATION"] = "us-central1"

response = completion(
    model="chat-bison",
    messages=[{"content": "Hello, how are you?", "role": "user"}]
)
But I want to use Gemini through the proxy CLI instead, and I don't know how. For example: litellm --model huggingface/bigcode/starcoder

How should I set up VERTEX_PROJECT and VERTEX_LOCATION?
Do this in your terminal before running the LiteLLM command:

export VERTEX_PROJECT="hardy-project"
export VERTEX_LOCATION="us-west1"
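Putting the pieces together, a complete terminal session might look like the following sketch. The project ID is a placeholder, and the vertex_ai/gemini-pro model string is an assumption based on LiteLLM's Vertex provider docs, not something confirmed in this thread:

```shell
# Authenticate with Google Cloud first
gcloud auth application-default login

# Tell LiteLLM which Vertex project and region to use
export VERTEX_PROJECT="hardy-project"
export VERTEX_LOCATION="us-central1"

# Start the LiteLLM proxy serving Gemini (model string assumed from the docs)
litellm --model vertex_ai/gemini-pro
```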
Thank you.
@tgyy1995 added docs on how to do this too: https://docs.litellm.ai/docs/providers/vertex#gemini-pro
@tgyy1995 what are you using the LiteLLM proxy for?
Thank you for your work. I can now use Gemini. I use LiteLLM with AutoGen.
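For the AutoGen use case, a minimal sketch of the wiring might look like this. It assumes AutoGen is pointed at the proxy as an OpenAI-compatible endpoint; the port, model name, and key value below are illustrative assumptions, not details from this thread:

```python
# Sketch: an AutoGen-style config_list targeting a local LiteLLM proxy.
# Assumptions: the proxy listens on http://0.0.0.0:4000 (port is a guess at
# the default) and no API key is enforced on the proxy.
config_list = [
    {
        "model": "gemini-pro",              # model name the proxy is serving
        "base_url": "http://0.0.0.0:4000",  # local LiteLLM proxy endpoint
        "api_key": "sk-placeholder",        # ignored unless the proxy enforces keys
    }
]

# This list would then be passed into an AutoGen agent, e.g.:
# autogen.AssistantAgent("assistant", llm_config={"config_list": config_list})
```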
Hey @tgyy1995, do you have Vertex AI credentials stored on your server?