BerriAI / litellm

Python SDK, Proxy Server to call 100+ LLM APIs using the OpenAI format - [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, Replicate, Groq]
https://docs.litellm.ai/docs/

[Feature]: Access fine-tuned Gemini via the Google AI Studio adapter #2637

Open twardoch opened 5 months ago

twardoch commented 5 months ago

The Feature

Ensure that we can access our fine-tuned Gemini models via the Google AI Studio adapter. I haven't tested this yet.
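Untested; a minimal sketch of what the call might look like, assuming litellm forwards whatever follows the `gemini/` prefix to the Google AI Studio endpoint unchanged. The `tunedModels/...` identifier follows Google AI Studio's tuned-model naming; whether litellm routes it correctly is exactly what this issue asks to verify.

```python
import os
import litellm

# Placeholder key for the Google AI Studio provider.
os.environ["GEMINI_API_KEY"] = "your-api-key"

# Assumed model string: "gemini/" provider prefix plus the tuned-model
# name as reported by Google AI Studio (e.g. "tunedModels/<id>").
response = litellm.completion(
    model="gemini/tunedModels/your-tuned-model-id",
    messages=[{"role": "user", "content": "Hello from my fine-tuned Gemini"}],
)

# litellm returns an OpenAI-format response object.
print(response.choices[0].message.content)
```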

Motivation, pitch

You can fine-tune Google Gemini Pro 1.0 with your own data and then run inference through Google AI Studio, for FREE?

On 19 March 2024, Google added the ability to fine-tune their Gemini Pro 1.0 model, and as far as I can tell, you can do it and then do inference for free (for now, I guess, and with the caveat that they're using your data if you use the free offering).

This is an incredible offer to get into fine-tuning and especially experiment with RAG vs. fine-tuning vs. a combo of those, with very little effort.

https://developers.googleblog.com/2024/03/tune-gemini-pro-in-google-ai-studio-or-gemini-api.html?m=1

Twitter / LinkedIn details

@adamtwar

myudak commented 1 month ago

How do you use the fine-tuned model, though?

the-wdr commented 2 days ago

+1 There seems to be no way to access the fine-tuned models through the Gemini API.