langchain-ai / langchainjs

🦜🔗 Build context-aware reasoning applications 🦜🔗
https://js.langchain.com/docs/
MIT License

Azure OpenAI Support: available in Python version but not in this package #601

Closed. FirozKuraishi closed this 1 year ago.

FirozKuraishi commented 1 year ago

Support is available in the Python package:

```python
from langchain.llms import AzureOpenAI

llm = AzureOpenAI(deployment_name="text-davinci-003", model_name="text-davinci-003")
```

but there is no option to define the same here.

nfcampos commented 1 year ago

Hi, yes, sadly as far as I know the Node SDK for OpenAI doesn't offer support for Azure out of the box; see https://github.com/openai/openai-node/issues/53

See also this discussion https://github.com/hwchase17/langchainjs/issues/244

FirozKuraishi commented 1 year ago

@nfcampos I used https://github.com/hwchase17/langchainjs/issues/244 to create a sample application, which worked fine, but I don't know how to integrate langchain with it: when I use `const chain = new LLMChain({ llm: model, prompt });` and pass a model created with `const { Configuration, OpenAIApi } = require("azure-openai"); model = new OpenAIApi(configuration);`, it gives an error that the model is not supported.

Any suggestion/workaround for passing this model to an LLM chain?
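
For context, `LLMChain` expects a langchain model wrapper (something implementing its LLM interface), not a raw SDK client, which is why the `OpenAIApi` instance from azure-openai is rejected. A minimal sketch of the distinction, assuming langchain's import paths at the time:

```typescript
import { LLMChain } from "langchain/chains";
import { OpenAI } from "langchain/llms";
import { PromptTemplate } from "langchain/prompts";

// Works: `OpenAI` is a langchain LLM wrapper, so LLMChain accepts it.
const model = new OpenAI({ temperature: 0 });
const prompt = PromptTemplate.fromTemplate("Say {word}");
const chain = new LLMChain({ llm: model, prompt });

// Fails: a raw `OpenAIApi` client (from openai or azure-openai) does not
// implement langchain's LLM interface, so it cannot be passed as `llm`.
```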

Elvincth commented 1 year ago

+1 for this, any update? Maybe support the azure-openai package?

nfcampos commented 1 year ago

@Elvincth I didn't know about that package, thanks for sharing. I can open a PR with this, but I'm not able to test it; hopefully one of you can?

Elvincth commented 1 year ago

> @Elvincth I didn't know about that package, thanks for sharing. I can open a PR with this, but I'm not able to test it; hopefully one of you can?

Yes, sure I can help you test it.

firoz-qureshi commented 1 year ago

@Elvincth @nfcampos I was able to use Azure OpenAI after making some modifications to the current langchain LLMs. I made the following changes:

1. Added an AzureLLM class in the src/llms package:

```typescript
import { CallbackManager } from "../callbacks/base.js";
import { chunkArray } from "../util/chunk.js";
import { BaseLLM } from "./base.js";
import { calculateMaxTokens } from "../base_language/count_tokens.js";
import { LLMResult } from "../schema/index.js";
import { TiktokenModel } from "@dqbd/tiktoken";

const promptToAzureArgs = ({
  prompt,
  temperature,
  stop,
  maxTokens,
}: {
  prompt: string[];
  temperature: number;
  stop: string[] | string | undefined;
  maxTokens: number;
}): LLMPromptArgs => ({
  prompt,
  temperature,
  max_tokens: maxTokens,
  stop,
});

export class AzureLLM extends BaseLLM {
  name = "AzureLLM";

  batchSize = 20;

  temperature: number;

  concurrency?: number;

  key: string;

  endpoint: string;

  modelName: TiktokenModel;

  constructor(fields?: {
    callbackManager?: CallbackManager;
    concurrency?: number;
    cache?: boolean;
    verbose?: boolean;
    temperature?: number;
    key?: string;
    endpoint?: string;
    modelName?: string;
  }) {
    super({ ...fields });
    this.temperature = fields?.temperature ?? 0.7;

    const apiKey = process.env.AZURE_LLM_KEY || fields?.key;
    if (!apiKey) {
      throw new Error(
        "Azure key not provided. Either set AZURE_LLM_KEY in your .env file or pass it in as a field to the constructor."
      );
    }
    this.key = apiKey;

    const endpoint = process.env.AZURE_LLM_ENDPOINT || fields?.endpoint;
    if (!endpoint) {
      throw new Error(
        "Azure endpoint not provided. Either set AZURE_LLM_ENDPOINT in your .env file or pass it in as a field to the constructor."
      );
    }
    this.endpoint = endpoint;

    // The model name is used for token counting in _generate; default assumed.
    this.modelName = (fields?.modelName ?? "text-davinci-003") as TiktokenModel;
  }

  async _generate(prompts: string[], stop?: string[] | undefined): Promise<LLMResult> {
    const subPrompts = chunkArray(prompts, this.batchSize);
    const choices: Choice[] = [];

    for (const element of subPrompts) {
      const prompts = element;
      const maxTokens = await calculateMaxTokens({
        prompt: prompts[0],
        modelName: this.modelName,
      });
      const args = promptToAzureArgs({
        prompt: prompts,
        temperature: this.temperature,
        stop,
        maxTokens,
      });

      const data = await this._callAzure(args);

      choices.push(...data.choices);
    }

    // *sigh* I have 1 for chunks just so it'll work like the example code
    const generations = chunkArray(choices, 1).map((promptChoices) =>
      promptChoices.map((choice) => ({
        text: choice.text ?? "",
        generationInfo: {
          finishReason: choice.finish_reason,
          logprobs: choice.logprobs,
        },
      }))
    );

    return {
      generations,
    };
  }

  private async _callAzure(args: LLMPromptArgs): Promise<LLMResponse> {
    const headers = { "Content-Type": "application/json", "api-key": this.key };

    const response = await fetch(this.endpoint, {
      method: "POST",
      headers,
      body: JSON.stringify(args),
    });

    if (!response.ok) {
      const text = await response.text();
      console.error("Azure request failed", text);
      throw new Error(`Azure request failed with status ${response.status}`);
    }

    const json = await response.json();

    return json;
  }

  _llmType(): string {
    return this.name;
  }
}

// From Langchain

type LLMPromptArgs = {
  prompt: string[] | string;
  max_tokens?: number;
  temperature?: number;
  top_p?: number;
  n?: number;
  stream?: boolean;
  logprobs?: number;
  frequency_penalty?: number;
  presence_penalty?: number;
  stop?: string[] | string;
  best_of?: number;
  logit_bias?: unknown;
};

type Choice = {
  text: string;
  index: number;
  logprobs: unknown;
  finish_reason: string;
};

type LLMResponse = {
  id: string;
  object: string;
  created: number;
  model: string;
  choices: Choice[];
};
```

2. Added an export in src/llms/index.ts:

```typescript
export { AzureLLM } from "./azure_llm.js";
```

3. Built it using the yarn build command and initialized it in my app.ts file:

```typescript
const llm = new AzureLLM({
  temperature: 0.1,
  verbose: true,
  key: process.env.AZURE_OPENAI_API_KEY,
  cache: true,
  modelName: process.env.modelName,
  endpoint: process.env.AZURE_LLM_ENDPOINT,
});
```

Note: the endpoint is the complete URL, e.g. https://{hostname}.openai.azure.com/openai/deployments/{deploymentName}/completions?api-version=2022-12-01
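
For completeness, a minimal usage sketch with the `llm` created above; the chain and prompt APIs are standard langchain, and the prompt text is only illustrative:

```typescript
import { LLMChain } from "langchain/chains";
import { PromptTemplate } from "langchain/prompts";

// Hypothetical usage of the AzureLLM instance created in step 3.
const prompt = PromptTemplate.fromTemplate(
  "What is a good name for a company that makes {product}?"
);
const chain = new LLMChain({ llm, prompt });

const res = await chain.call({ product: "colorful socks" });
console.log(res.text);
```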

But it would be nice if azure-openai package support were available in the current langchain package.

Elvincth commented 1 year ago

> @Elvincth @nfcampos I was able to use Azure OpenAI after making some modifications to the current langchain LLMs. […]

Thanks for sharing!

kunjan13 commented 1 year ago

Does this work? Will this be part of the next release?

FirozKuraishi commented 1 year ago

@nfcampos @kunjan13 It's working fine; I will raise a PR for this.

nfcampos commented 1 year ago

Thanks, I'm happy to review. It might make sense to extend the OpenAI class rather than BaseLLM, as I assume almost everything is shared. You could also create a ChatAzureOpenAI class, so that people can use the ChatGPT and GPT-4 models, or I can do that in a follow-up PR.
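
A rough sketch of that direction; the class and its fields are hypothetical, and it assumes the OpenAI wrapper's constructor accepts an openai ConfigurationParameters override as its second argument:

```typescript
import { OpenAI } from "langchain/llms";

// Hypothetical AzureOpenAI that inherits batching, retries, and token
// handling from the existing OpenAI wrapper, overriding only the client
// configuration so requests go to an Azure deployment.
export class AzureOpenAI extends OpenAI {
  constructor(fields: { endpoint: string; apiKey: string; temperature?: number }) {
    super(
      // A dummy openAIApiKey satisfies the wrapper's existing key check.
      { temperature: fields.temperature, openAIApiKey: "azure" },
      {
        // Deployment base URL, e.g.
        // https://{hostname}.openai.azure.com/openai/deployments/{deployment}
        basePath: fields.endpoint,
        baseOptions: {
          headers: { "api-key": fields.apiKey },
          params: { "api-version": "2022-12-01" },
        },
      }
    );
  }
}
```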

FirozKuraishi commented 1 year ago

@nfcampos I am getting a 403 when pushing changes to a feature branch; could you please give me access?

nfcampos commented 1 year ago

You need to make a fork and push your changes there. Then open a PR.

FirozKuraishi commented 1 year ago

@nfcampos pull request raised: https://github.com/hwchase17/langchainjs/pull/910

dersia commented 1 year ago

Great work @FirozKuraishi!

I am just not sure this much is needed. Azure OpenAI works just fine with the OpenAI Node SDK; it just needs some headers applied. So I am not sure it makes sense to add another LLM to langchain if, in the end, it is the same LLM with just a different endpoint. This works for both kinds of LLMs, chat and completion.

So I would prefer an implementation based on the existing OpenAI SDK, for what it's worth.

I am also happy to help with a PR, or to create one if wanted.

Again thanks for the PR 😊

FirozKuraishi commented 1 year ago

@dersia Azure OpenAI works fine, but not with langchain. I also tried that: it works fine with Node, but when I tried to integrate it with langchain I was not able to connect, because the OpenAI LLM model appends /completions to the end of the URI and does not support passing query parameters with your Azure endpoint. So you will get a 404 error every time, as the Azure API expects the version number as a query parameter at the end of the URI.

I would appreciate it if you could provide some working example code for an Azure OpenAI and langchain integration.

dersia commented 1 year ago

@FirozKuraishi Maybe I wasn't clear about what I meant. There is no way to get langchainjs running on Azure OpenAI without changes, but I think it makes more sense to make the changes to the already-implemented models instead of creating a new Azure model, because a separate model means we would also have to maintain the Azure models. Since the two APIs, Azure and OpenAI, are the same, just called a bit differently, it makes sense to change the models that are already implemented.

In general, it would be sufficient to just remove the check for whether openAIApiKey is set or not. Then we can call Azure's endpoint by setting basePath and baseOptions. But I think we can do better than that.
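
For reference, a minimal sketch of that approach, much like the subclass sketch earlier but using the stock wrapper directly; it assumes the key check is relaxed, and the values in braces are placeholders:

```typescript
import { OpenAI } from "langchain/llms";

// Hypothetical: point the stock OpenAI wrapper at an Azure deployment by
// overriding the client configuration, so no separate Azure model class
// needs to be maintained.
const model = new OpenAI(
  { temperature: 0 },
  {
    basePath:
      "https://{hostname}.openai.azure.com/openai/deployments/{deployment}",
    baseOptions: {
      headers: { "api-key": process.env.AZURE_OPENAI_API_KEY },
      params: { "api-version": "2022-12-01" },
    },
  }
);
```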

I have made the needed changes, adjusted the docs, and created a PR. So far it looks good and works, but I'd love for someone else to also check out my changes and give me some feedback. I would also like to take the time to adjust the unit tests and add unit tests for the Azure one. I am planning on doing this tomorrow.

You can find my PR here: https://github.com/hwchase17/langchainjs/pull/966

Hope this is OK; again, I appreciate your work and did not mean to undermine it!