Closed meetpateltech closed 1 month ago
It's already almost fully supported: as far as I can tell, it has the same context window as 1.5-pro, and it works on both Vertex AI and Google AI Studio.
Added the following to .env.example
# Gemini API (Google AI Studio)
# GOOGLE_MODELS=gemini-1.5-flash-latest
# Vertex AI
# GOOGLE_MODELS=gemini-1.5-flash-preview-0514
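For context, here is a minimal sketch of how a comma-separated `GOOGLE_MODELS` value could be parsed into a model list. The variable name comes from the example above; the parsing logic itself is an illustration, not LibreChat's actual implementation:

```javascript
// Illustrative only: parse a comma-separated GOOGLE_MODELS env var
// into a trimmed list of model names. LibreChat's real parsing may differ.
function parseModels(envValue) {
  if (!envValue) {
    return [];
  }
  return envValue
    .split(',')
    .map((name) => name.trim())
    .filter((name) => name.length > 0);
}

// Example: listing both the AI Studio and Vertex AI flash identifiers at once
const models = parseModels('gemini-1.5-flash-latest, gemini-1.5-flash-preview-0514');
console.log(models);
```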
Only thing I'm unsure about is the pricing.
Google seems to keep going back and forth on this to attract developers.
After this is merged, you shouldn't have any issues using flash models:
It's a multimodal model, so it should be added to the vision-capable model list.
It's already added via a partial match on gemini-1.5.
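A minimal sketch of what such a partial match could look like (the `visionModels` list and function name here are hypothetical illustrations, not LibreChat's actual code):

```javascript
// Hypothetical example of partial-match vision detection.
// The visionModels entries are illustrative substrings, not a real list.
const visionModels = ['gpt-4-vision', 'gemini-pro-vision', 'gemini-1.5'];

function isVisionModel(modelName) {
  // A model counts as vision-capable if any known vision entry
  // appears as a substring of its name.
  return visionModels.some((entry) => modelName.includes(entry));
}

console.log(isVisionModel('gemini-1.5-flash-latest')); // true: contains 'gemini-1.5'
console.log(isVisionModel('gemini-flash-1.5'));        // false: the reordered name misses the match
```

This also illustrates why an identifier with a different word order would slip past a substring check.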
That explains it, then: on OpenRouter it's gemini-flash-1.5 for some reason.
What features would you like to see added?
At Google I/O, a new model called Gemini 1.5 Flash (a lightweight model optimized for speed and efficiency) was introduced. I would like to add support for this model in LibreChat so we can easily use it through the Google AI Studio API.
More details
Gemini 1.5 Flash is available in public preview with a 1 million token context window in Google AI Studio and Vertex AI.
More detailed information is available here: https://deepmind.google/technologies/gemini/flash/
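To illustrate what using the model through the Google AI Studio API looks like at the REST level, here is a sketch that builds (but does not send) a request for the v1beta generateContent endpoint. The endpoint shape follows Google's public Generative Language REST API; the helper function name and the placeholder key are made up for this example:

```javascript
// Build (but do not send) a generateContent request for Gemini 1.5 Flash
// against the Google AI Studio (Generative Language) REST API.
const BASE_URL = 'https://generativelanguage.googleapis.com/v1beta';

function buildGenerateContentRequest(model, prompt, apiKey) {
  return {
    url: `${BASE_URL}/models/${model}:generateContent?key=${apiKey}`,
    body: {
      contents: [{ parts: [{ text: prompt }] }],
    },
  };
}

const request = buildGenerateContentRequest(
  'gemini-1.5-flash-latest',
  'Summarize this issue in one sentence.',
  'YOUR_API_KEY', // placeholder, not a real key
);
console.log(request.url);
```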
Which components are impacted by your request?
No response
Pictures
No response
Code of Conduct