googleapis / nodejs-vertexai

Using a trained model #374

Closed ktalebian closed 3 weeks ago

ktalebian commented 4 months ago

Hello,

I have created a trained model in Vertex AI Studio. How do I use the @google-cloud/vertexai library to run predictions with that model?

jroell commented 3 months ago

@ktalebian

To use a trained model in Vertex AI Studio with the @google-cloud/vertexai library for predictive generation, you will need to follow these steps:

Step 1: Install the Required Libraries

First, ensure you have the @google-cloud/vertexai library installed:

npm install @google-cloud/vertexai

Step 2: Import and Initialize Vertex AI Client

Import the necessary modules from the @google-cloud/vertexai library and initialize the Vertex AI client with your project details.

Step 3: Load the Model

Load your trained model using the Vertex AI client. You will need the model ID and endpoint details.

Step 4: Make Predictions

Use the model to make predictions by passing the input data to the prediction method.

Here is an example of how you can achieve this:

import { VertexAI } from '@google-cloud/vertexai';

// Initialize the Vertex AI client
const vertexAI = new VertexAI({
  projectId: 'your-project-id',
  location: 'us-central1', // Change to your location
});

// Define the endpoint ID of your deployed model
const endpointId = 'your-endpoint-id';

// Function to make predictions
async function makePrediction(inputData) {
  // Load the endpoint
  const endpoint = vertexAI.endpoint({
    endpoint: endpointId,
  });

  // Define the request payload
  const predictRequest = {
    instances: [
      {
        // Your input data here
        input: inputData,
      },
    ],
  };

  // Make the prediction request
  const [response] = await endpoint.predict(predictRequest);

  // Process the response
  const predictions = response.predictions;
  console.log('Predictions:', predictions);
}

// Example input data for prediction
const inputData = {
  // Replace with your input data structure
  feature1: 'value1',
  feature2: 'value2',
};

// Make a prediction
makePrediction(inputData).catch(console.error);

Step-by-Step Breakdown:

  1. Install the Library: Ensure the @google-cloud/vertexai library is installed in your project.

  2. Initialize the Vertex AI Client: Initialize the client with your project ID and location.

  3. Load the Endpoint: Use the endpoint ID of your deployed model to create an endpoint reference.

  4. Prepare the Prediction Request: Format your input data according to the model's expected input structure.

  5. Make the Prediction: Call the predict method on the endpoint with the prepared input data and process the response.

ktalebian commented 3 months ago

@jroell thank you. Does @google-cloud/vertexai only support auth via a credentials file? Doesn't it support an API key?

hjr3 commented 3 months ago

Did an LLM write this response?

I do not think the endpoint function exists. I cannot find it in the code and when I run this example, I get TypeError: vertexAI.endpoint is not a function.

I also tried

const vertexAI = new VertexAI({
  project: 'project-id',
  location: 'us-central1',
});

const model = vertexAI.preview.getGenerativeModel({
  model: 'numeric-model-id'
});

and get the following error:

[VertexAI.ClientError]: got status: 404 Not Found. {"error":{"code":404,"message":"Publisher Model `projects/{project-id}/locations/us-central1/publishers/google/models/{numeric-model-id}` not found.","status":"NOT_FOUND"}}

I confirmed my project id and numeric model id are correct by making a REST call:

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
"https://us-central1-aiplatform.googleapis.com/v1/projects/{project-id}/locations/us-central1/endpoints/{numeric-model-id}:streamGenerateContent" \
-d $'{
  "contents": {
    "role": "user",
    "parts": [
      {
        "text": "What\'s a good name for a flower shop that specializes in selling bouquets of dried flowers?"
      }
    ]
  }
}'

and receiving a valid response.

hjr3 commented 3 months ago

The "Publisher Model" error made me wonder if there was some logic that was assuming that the model being specified was a Google published model.

I read through the code and found the formulateResourcePathFromModel function, which has this:

https://github.com/googleapis/nodejs-vertexai/blob/142ad5a2bce2b875b41265717e09a20f014247e2/src/models/generative_models.ts#L510-L512

So, the correct way to use a tuned model is:

const vertexAI = new VertexAI({
  project: 'project-id',
  location: 'us-central1',
});

const model = vertexAI.preview.getGenerativeModel({
  model: 'projects/{project-id}/locations/us-central1/publishers/google/models/{numeric-model-id}' // <--- must be fully qualified path to model
});
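
For completeness, a minimal follow-up sketch of actually calling the tuned model once it is loaded this way; the prompt text is a placeholder, and the request uses the same contents/parts shape as the other generateContent examples in this thread:

// Minimal sketch: call the tuned model loaded above.
// The prompt text below is only a placeholder.
(async () => {
  const result = await model.generateContent({
    contents: [
      {
        role: 'user',
        parts: [{text: 'Suggest a name for a dried-flower bouquet shop.'}],
      },
    ],
  });
  console.log(JSON.stringify(result, null, 2));
})().catch(console.error);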

happy-qiao commented 2 months ago

The following code snippet works, no 500 error.

import { VertexAI } from '@google-cloud/vertexai';

(async function() {
    const project = 'your project';
    const datastoreId = 'your datastore';
    const vertexAI = new VertexAI({
        project: project,
        location: 'us-central1'
    });
    const generativeModel = vertexAI.preview.getGenerativeModel({
        model: 'gemini-1.5-flash-001',
        generationConfig: {
            maxOutputTokens: 8192,
            temperature: 0,
            topP: 1,
        },
        tools: [
            {
                retrieval: {
                    vertexAiSearch: {
                        datastore: `projects/${project}/locations/global/collections/default_collection/dataStores/${datastoreId}`,
                    },
                    disableAttribution: false,
                },
            }
        ]
    });
    const result = await generativeModel.generateContent({
        contents: [{
            role: 'user',
            parts: [{
                text: 'what\'s the weather today?'
            }]
        }]
    });
    console.log(JSON.stringify(result, null, 2));
}
)();

Note: I followed this guide to create my first datastore: https://cloud.google.com/generative-ai-app-builder/docs/create-datastore-ingest

yyyu-google commented 3 weeks ago

Thank you, @jroell, for the steps.

And yes, @hjr3 is correct: to use your trained model, you need to provide its full resource path. See the code comment on tuned models: https://github.com/googleapis/nodejs-vertexai/blob/f8a3bdf55b6ee694a8fd41df29bdba54d7f8cdc2/src/models/generative_models.ts#L543

Regarding authentication options, see the README section on authentication. This SDK depends on google-auth-library, so it accepts all authentication options supported by google-auth-library (code link).
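
For example, here is a minimal sketch of passing explicit credentials through the constructor, assuming the googleAuthOptions field that is forwarded to google-auth-library; the key file path and scopes below are placeholders:

import { VertexAI } from '@google-cloud/vertexai';

// Sketch only: googleAuthOptions is passed through to google-auth-library,
// so any option that library accepts (key file, explicit credentials,
// scopes, ...) can be supplied here. The path below is a placeholder.
const vertexAI = new VertexAI({
  project: 'your-project-id',
  location: 'us-central1',
  googleAuthOptions: {
    keyFilename: '/path/to/service-account-key.json',
    scopes: ['https://www.googleapis.com/auth/cloud-platform'],
  },
});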

Regarding API key support, we are not supporting API keys at this moment. We may support them in the future.

I am marking this issue as closed. Feel free to comment here if there are still questions.