run-llama / llama_index

LlamaIndex is a data framework for your LLM applications
https://docs.llamaindex.ai
MIT License

[Feature Request]: Google Gemini model support via Vertex AI #9607

Closed: nicoloboschi closed this issue 9 months ago

nicoloboschi commented 9 months ago

Feature Description

Currently, Gemini is also available via the Vertex AI API. I'd like to be able to switch from

from llama_index.llms import Vertex
Vertex(model="chat-bison")

to

from llama_index.llms import Vertex
Vertex(model="gemini-pro")

It's worth noting that this is already possible in LangChain 0.0.350:

from langchain_community.chat_models import ChatVertexAI
ChatVertexAI(model_name="gemini-pro")
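For illustration, a minimal sketch of the intended end-to-end usage (hypothetical until the feature lands; it assumes the wrapper keeps the standard LLM complete() interface and that authentication goes through Application Default Credentials, as with the existing Vertex models):

from llama_index.llms import Vertex

# Desired behavior: the same wrapper, only the model name changes
llm = Vertex(model="gemini-pro")
response = llm.complete("Say hello from Gemini on Vertex AI.")
print(response.text)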

Reason

The mid-term reason is to be able to switch existing Vertex AI applications to Gemini models. The short-term reason is that the Gemini API is blocked in many countries (the EU and UK, for example), while usage through Vertex AI is allowed.

Value of Feature

No response

dosubot[bot] commented 9 months ago

🤖

Hello @nicoloboschi! 🖐️

I'm Dosu, an AI here to assist you with your issues and queries about LlamaIndex until a human maintainer can step in. I can help you troubleshoot bugs, answer questions, and guide you on how to contribute to the project. Don't worry, I'm here to make your experience smoother!

Looking forward to our interaction!

Thank you for your feature request. I understand that you would like to add support for the Google Gemini model via Vertex AI to the LlamaIndex repository, similar to how it's implemented in LangChain 0.0.350. This would allow users to switch from existing VertexAI applications to Gemini models, especially in regions where the Gemini API is blocked.

From the context provided, it appears that the LlamaIndex repository already supports the Google Gemini model via Vertex AI. The GeminiMultiModal class in the LlamaIndex repository is designed to interface with Google's Gemini multi-modal models.

Here is the relevant code snippet that checks the model support:

# Check whether the Gemini Model is supported or not
if model_name not in GEMINI_MM_MODELS:
    raise ValueError(
        f"Invalid model {model_name}. "
        f"Available models are: {GEMINI_MM_MODELS}"
    )

And here is the code snippet that initializes the Gemini model:

# Instantiate the underlying google.generativeai model with the
# merged generation config and safety settings
self._model = genai.GenerativeModel(
    model_name=model_name,
    generation_config=final_gen_config,
    safety_settings=safety_settings,
)

So, you should be able to switch from existing VertexAI applications to Gemini models by using this class and its methods. However, if the Gemini API is blocked in certain regions, you might need to use a VPN or proxy to bypass the blockage, as this is a network issue rather than a software one.
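For reference, a minimal sketch of the GeminiMultiModal usage described above, assuming the 0.9-era import paths and a GOOGLE_API_KEY set in the environment. Note that this class calls the google.generativeai SDK (as the snippet above shows) rather than the Vertex AI API, so it does not by itself address the regional availability concern:

from llama_index import SimpleDirectoryReader
from llama_index.multi_modal_llms import GeminiMultiModal

# Load local images as ImageDocument objects (the path is hypothetical)
image_documents = SimpleDirectoryReader("./images").load_data()

gemini = GeminiMultiModal(model_name="models/gemini-pro-vision")
response = gemini.complete(
    prompt="Describe what you see in these images.",
    image_documents=image_documents,
)
print(response.text)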

If you're still having trouble implementing this feature, could you please provide more details about the issues you're facing? This will help me better understand the problem and provide a more accurate solution.

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

hatianzhang commented 9 months ago

@nicoloboschi that is a good point, feel free to make a PR to support it. I think you can first try directly using the current Vertex class to load the gemini-pro model to see whether it works, as in the sketch below.
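A quick way to run that experiment (a sketch; at the time of this issue the existing wrapper may simply reject the unknown model name, which is what the PR below addresses):

from llama_index.llms import Vertex

# Point the existing Vertex wrapper at the Gemini model name and see
# whether the call succeeds or raises on an unsupported model
llm = Vertex(model="gemini-pro")
print(llm.complete("Hello from Vertex AI"))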

nicoloboschi commented 9 months ago

Hi @hatianzhang, I opened a PR: https://github.com/run-llama/llama_index/pull/9624. Please review!