langchain-ai / langchainjs

🦜🔗 Build context-aware reasoning applications 🦜🔗
https://js.langchain.com/docs/
MIT License
11.9k stars 1.99k forks

Access Gemini using Google Vertex #3884

Closed Seigneurhol closed 5 months ago

Seigneurhol commented 6 months ago

Is it possible to access Gemini using Google Vertex? I saw it will be implemented (https://github.com/langchain-ai/langchainjs/issues/2562#issuecomment-1855013347) but I'm not sure it has been done.

dosubot[bot] commented 6 months ago

🤖

Hello @Seigneurhol! I'm Dosu, an assistant here to help you with bugs, answer your questions, and guide you in contributing to the langchainjs project. While we're waiting for a human maintainer, feel free to lean on me for assistance. Nice to meet you! :wave:

Yes, the feature to access Gemini using Google Vertex has been implemented in langchainjs. This is evident from the provided context where the Google Vertex AI vector store is being exported from the LangChain community package. Here is the supporting code:

export * from "@langchain/community/vectorstores/googlevertexai";

This line of code is importing all exports from the googlevertexai module in the @langchain/community/vectorstores package, which indicates that the Google Vertex AI functionality is available in the LangChain framework.

The Google Vertex AI vector store in the LangChain framework provides functionalities such as adding documents, adding vectors, deleting documents, performing a similarity search, and determining the public API endpoint. You can find more details about these functionalities in the googlevertexai.ts file.

However, I was unable to find the exact version of langchainjs where this feature was implemented. If you need this information, I would recommend checking the release notes or the commit history of the langchainjs repository. If you have any other questions or need further clarification, feel free to ask.


afirstenberg commented 6 months ago

I'm working on it! Expecting to have the first PR for it within a day or so. (And hopefully rapid updates afterwards.)

Seigneurhol commented 6 months ago

Thank you for your work! I'm looking forward to trying it :)

axe-me commented 6 months ago

I got errors when trying to use the Gemini model with the Vertex AI LLM model from LangChain:

 error: GaxiosError: Gemini cannot be accessed through Vertex Predict/RawPredict API. Please follow https://cloud.google.com/vertex-ai/docs/generative-ai/start/quickstarts/quickstart-multimodal for Gemini usage.
openai:serve:       at Gaxios._request (/Users/axe/Workspaces/mono/node_modules/.pnpm/gaxios@5.1.0/node_modules/gaxios/src/gaxios.ts:158:15)
openai:serve:       at processTicksAndRejections (node:internal/process/task_queues:95:5)
openai:serve:       at JWT.requestAsync (/Users/axe/Workspaces/mono/node_modules/.pnpm/google-auth-library@8.9.0/node_modules/google-auth-library/build/src/auth/oauth2client.js:382:18)
openai:serve:       at GAuthClient.request (file:///Users/axe/Workspaces/mono/node_modules/.pnpm/@langchain+community@0.0.16_google-auth-library@8.9.0_ws@8.14.2/node_modules/@langchain/community/dist/utils/googlevertexai-gauth.js:24:21)
openai:serve:       at RetryOperation._fn (/@fs/Users/axe/Workspaces/mono/node_modules/.pnpm/p-retry@4.6.2/node_modules/p-retry/index.js:50:12) {
openai:serve:     response: {
openai:serve:       config: [Object],
openai:serve:       data: [Object],
openai:serve:       headers: [Object],
openai:serve:       status: 400,
openai:serve:       statusText: 'Bad Request',
openai:serve:       request: [Object]
openai:serve:     },
openai:serve:     config: {
openai:serve:       url: 'https://us-central1-aiplatform.googleapis.com/v1/projects/upbeta/locations/us-central1/publishers/google/models/gemini-pro:predict',
openai:serve:       method: 'POST',
openai:serve:       data: [Object],
openai:serve:       responseType: 'json',
openai:serve:       headers: [Object],
openai:serve:       paramsSerializer: [Function: paramsSerializer],
openai:serve:       body: '{"instances":[{"context":"","examples":[],"messages":[{"author":"user","content":"please list top 50 universities in north american\\n\\n [formatInstructions]"}]}],"parameters":{"temperature":0,"topK":32,"topP":1,"maxOutputTokens":1024}}',
openai:serve:       validateStatus: [Function: validateStatus]
openai:serve:     },
openai:serve:     code: 400,
openai:serve:     errors: [ [Object] ],
openai:serve:     attemptNumber: 1,
openai:serve:     retriesLeft: 6
openai:serve:   },
openai:serve:   durationMs: 379
openai:serve: }

My service account has the owner role and all the APIs are enabled in GCP. The URL seems correct; not sure if this is something @afirstenberg's PR could fix. Thanks in advance if the PR fixes this.

afirstenberg commented 6 months ago

@axe-me - Yes, PR #3960 adds a new set of classes that can access Gemini through Vertex AI.

(The eventual goal will be to provide backwards compatibility with the current classes, so you can access both the PaLM and Gemini models using the same class, but I'm doing this in a phased approach.)

jarib commented 5 months ago

This is still a problem, since LangChain builds an incorrect URL to access the API for Gemini. This results in the error: GaxiosError: Gemini cannot be accessed through Vertex Predict/RawPredict API.

import { ChatGoogleVertexAI } from '@langchain/community/chat_models/googlevertexai'

const llm = new ChatGoogleVertexAI({ model: 'gemini-pro' })
console.log(await llm.connection.buildUrl())

// => …/locations/us-central1/publishers/google/models/gemini-pro:predict

The problem is the predict method at the end. It should be generateContent for Gemini. Similarly, for streaming, serverStreamingPredict must be changed to streamGenerateContent.
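The method-name difference can be sketched in a few lines. This is a hypothetical helper, not LangChain's actual implementation; the project, location, and model names are placeholders:

```typescript
// Sketch of the two Vertex AI URL shapes. PaLM-era models use
// :predict / :serverStreamingPredict, while Gemini models must use
// :generateContent / :streamGenerateContent instead.
function buildVertexUrl(
  project: string,
  location: string,
  model: string,
  streaming: boolean
): string {
  const isGemini = model.startsWith("gemini");
  const method = isGemini
    ? streaming
      ? "streamGenerateContent"
      : "generateContent"
    : streaming
      ? "serverStreamingPredict"
      : "predict";
  return (
    `https://${location}-aiplatform.googleapis.com/v1/projects/${project}` +
    `/locations/${location}/publishers/google/models/${model}:${method}`
  );
}

console.log(buildVertexUrl("my-project", "us-central1", "gemini-pro", false));
// ends with .../models/gemini-pro:generateContent
```

Calling the :predict form with a gemini-* model is exactly what produces the 400 error quoted above.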

I found some code that appears to handle this in LangChain, but it seems it's not used in the ChatGoogleVertexAI class.

It's hard to understand how it all fits together, perhaps @afirstenberg wants to have a look.

Tested with:

    "@langchain/community": "^0.0.27",
    "@langchain/core": "0.1.23"

afirstenberg commented 5 months ago

Sorry this is still confusing, @jarib. Things are still in motion. We are (slowly) moving all Google support to the @langchain/google-common package (with the @langchain/google-gauth or @langchain/google-webauth packages), and Gemini support for Vertex AI was the first available.

So you can get this with something like

import { ChatGoogle } from '@langchain/google-gauth'
const llm = new ChatGoogle();

which will use the gemini-pro model by default. If you have the GOOGLE_APPLICATION_CREDENTIALS environment variable set, or if you're using Google's Application Default Credentials, it will use Vertex AI.
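For reference, pointing the environment variable at a service-account key looks like this (the key path is a placeholder; alternatively, run `gcloud auth application-default login` to use Application Default Credentials):

```shell
# Tell Google's auth libraries which service-account key to use.
# The path below is an example, not one from this thread.
export GOOGLE_APPLICATION_CREDENTIALS="$HOME/keys/my-project-sa.json"
```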

It is on the (very long) short list to retrofit the old classes into the new structure so they'll work correctly, and to retrofit the old models into the new class. But we're not there yet.

(It is also on the very long short list to get full documentation out about how to use these classes. But it keeps being interrupted by Google launching new things.)

jarib commented 5 months ago

Understood, it's definitely a moving target 👍

ChatGoogle solved my case for now, but in my setup I had to use a dynamic import for the package to avoid this error:

The current file is a CommonJS module whose imports will produce 'require' calls; however, the referenced file is an ECMAScript module and cannot be imported with 'require'. Consider writing a dynamic 'import("@langchain/google-gauth")' call instead.
  To convert this file to an ECMAScript module, change its file extension to '.mts', or add the field `"type": "module"` to

Not sure why that was necessary with this package and not the other LangChain packages. In any case, this works for me to access Gemini through Vertex AI:

const { ChatGoogle } = await import('@langchain/google-gauth')
const llm = new ChatGoogle({model: 'gemini-pro'})

punteroo commented 4 months ago

Have there been any updates on this being fixed? Google seems to be making quite a mess of how to properly use Gemini haha.

jacoblee93 commented 4 months ago

Not yet - @afirstenberg is working on it though!