run-llama / LlamaIndexTS

Data framework for your LLM applications, with a focus on server-side solutions.
https://ts.llamaindex.ai
MIT License

Question: usage of Azure OpenAI backend [SOLVED] #432

Closed: synergiator closed this 8 months ago

synergiator commented 10 months ago

From the examples/docs, it's not quite clear how to configure the code for use with a model deployment in Azure OpenAI.

e.g. https://github.com/run-llama/LlamaIndexTS/blob/main/examples/vectorIndex.ts

On running, it tries to connect with the default OpenAI configuration. In the codebase, I see a class implementing the Azure OpenAI backend, but in the example code the choice of backend seems to be implicit.

I have set the Azure endpoint and key, which seem to get recognized, but I get an Azure error about a missing deployment name.

In the Python examples, there are more specific constructs:


from llama_index.llms.azure_openai import AzureOpenAI  # import path in recent llama-index releases

llm = AzureOpenAI(
    model="gpt-35-turbo-16k",
    deployment_name="my-custom-llm",
    api_key=api_key,
    azure_endpoint=azure_endpoint,
    api_version=api_version,
)
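
From reading the LlamaIndexTS codebase, I would guess at something like the sketch below, but I haven't confirmed it's the intended API (the azure option and its deploymentName field are guesses on my part):

import { OpenAI } from "llamaindex";

// Hypothetical: mirrors the Python deployment_name parameter above
const llm = new OpenAI({
  azure: { deploymentName: "my-custom-llm" },
});
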
synergiator commented 10 months ago

@himself65 any ideas?

synergiator commented 10 months ago

OK, I've got my head around it. The idea is to configure the Azure deployment names on both the embedding model and the LLM, then wrap the two in a service context.

The following example works for me (LlamaIndexTS 0.0.48 and Node 21.6.0). Maybe this should go into the examples collection.

import fs from "node:fs/promises";
import {
  Document,
  OpenAI,
  OpenAIEmbedding,
  VectorStoreIndex,
  serviceContextFromDefaults,
} from "llamaindex";

async function main() {
  // Setting these Azure OpenAI environment variables triggers the Azure OpenAI logic in llamaindex!
  // In the environment, you only need the endpoint and the API key:
  //   AZURE_OPENAI_ENDPOINT
  //   AZURE_OPENAI_API_KEY
  // Your custom deployment names can be sourced from elsewhere.
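  // For example, set them in your shell before running (placeholder values):
  //   export AZURE_OPENAI_ENDPOINT="https://<YOUR_RESOURCE>.openai.azure.com"
  //   export AZURE_OPENAI_API_KEY="<YOUR_KEY>"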

  // Load essay from abramov.txt in Node
  const path = "node_modules/llamaindex/examples/abramov.txt";

  const essay = await fs.readFile(path, "utf-8");

  const embedding = new OpenAIEmbedding({
    azure: { deploymentName: "<YOUR_EMBEDDING_DEPLOYMENT_NAME>" },
  });
  const llm = new OpenAI({
    azure: { deploymentName: "<YOUR_GPT_DEPLOYMENT_NAME>" },
  });

  // Create Document object with essay
  const document = new Document({ text: essay, id_: path });

  // Split text and create embeddings. Store them in a VectorStoreIndex
  const serviceContext = serviceContextFromDefaults({
    llm: llm,
    embedModel: embedding
  });

  const index = await VectorStoreIndex.fromDocuments([document], {
    serviceContext
  });

  // Query the index
  const queryEngine = index.asQueryEngine();
  const response = await queryEngine.query({
    query: "What did the author do in college?",
  });

  // Output response
  console.log(response.toString());
}

main().catch(console.error);

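If you'd rather not rely on the environment variables, the azure object looks like it can also take explicit connection settings. I haven't tested this variant, and the field names below are assumptions taken from the config types, so check them against your installed llamaindex version:

const llm = new OpenAI({
  azure: {
    endpoint: "https://<YOUR_RESOURCE>.openai.azure.com", // assumed: same value as AZURE_OPENAI_ENDPOINT
    apiKey: "<YOUR_AZURE_OPENAI_KEY>", // assumed: same value as AZURE_OPENAI_API_KEY
    deploymentName: "<YOUR_GPT_DEPLOYMENT_NAME>",
  },
});
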
urshri31 commented 9 months ago

I need the OpenAI token usage, but I'm still not getting it in the response of the code above.

urshri31 commented 9 months ago

It's available in the https://github.com/run-llama/LlamaIndexTS/tree/llm_usage branch. When will this be available in https://www.npmjs.com/package/llamaindex, and in which version?

marcusschiesser commented 9 months ago

@urshri31 thanks for the reminder, I added it to our backlog