Use LiteLLM to run a local Gemini server posing as an OpenAI-compatible endpoint.
Note: export your API key first:

```bash
export GEMINI_API_KEY=AIxxxxxx-xxxxxxxxxxxxxxxxxxxxxxx
```

Then start the proxy:

```bash
litellm --model gemini/gemini-pro --port 8080 --debug
```
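As a quick sanity check, you can send a test request to the proxy. This is a minimal sketch, assuming the proxy exposes the standard OpenAI-compatible `/chat/completions` route on the port chosen above and accepts the key as a Bearer token:

```python
# Sanity check for the local LiteLLM proxy (assumes the standard
# OpenAI-compatible /chat/completions route on port 8080).
import requests

resp = requests.post(
    "http://0.0.0.0:8080/chat/completions",
    headers={"Authorization": "Bearer AIxxxxxx-xxxxxxxxxxxxxxxxxxxxxxx"},  # your GEMINI_API_KEY
    json={
        "model": "gemini-pro",
        "messages": [{"role": "user", "content": "Say hello in one word."}],
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

If this prints a reply, the proxy is translating OpenAI-style requests to Gemini correctly.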
Edit the `.rawdog/config.yaml` accordingly:

```yaml
llm_api_key: AIxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxx
llm_base_url: http://0.0.0.0:8080
llm_custom_provider: openai
llm_model: gemini-pro
```
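To see what this config amounts to on the wire, here is a hedged sketch of an OpenAI-style client pointed at the local proxy, reusing the same values as the config above (this mirrors the setup for illustration; it is not rawdog's actual internals):

```python
# Sketch: an OpenAI-compatible client talking to the local LiteLLM proxy.
# The values mirror config.yaml; the message content is just an example.
from openai import OpenAI

client = OpenAI(
    base_url="http://0.0.0.0:8080",                    # llm_base_url
    api_key="AIxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxxxxxx",   # llm_api_key (Gemini key)
)

reply = client.chat.completions.create(
    model="gemini-pro",  # llm_model
    messages=[{"role": "user", "content": "Hello from the proxy!"}],
)
print(reply.choices[0].message.content)
```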
I updated the README, so hopefully it will be easier for future users.
Hi @Abhay-404,
LiteLLM doesn't support Gemini Pro in `litellm.completion`, so you need to use proxy mode, which means running a server.
So, inspired by this repo, I also created another repo (https://github.com/ancs21/dragala) that supports the Google Gemini Pro model by default. Google provides the `gemini-pro` model for free, so you can try it. Thanks.