Portkey-AI / portkey-node-sdk

Build reliable, secure, and production-ready AI apps easily.
https://portkey.ai/docs
22 stars 8 forks

[Bug]: Issue with LangChain JS and virtual key : “Incorrect API key provided” #687 #134

Closed Mathieu-R closed 3 days ago

Mathieu-R commented 3 days ago

Contact Details

No response

What happened?

I'm trying to use Portkey with LangChain JS. As I understand it, I can create a virtual key for each LLM provider I want to use and pass this key to the ChatOpenAI constructor from LangChain.

import { createHeaders, PORTKEY_GATEWAY_URL } from "portkey-ai"

const portKeyOpenAIConf = {
  baseUrl: PORTKEY_GATEWAY_URL,
  defaultHeaders: createHeaders({
    apiKey: env.get('PORTKEY_KEY'),
    virtualKey: env.get('OPENAI_VKEY'),
  }),
}

const portKeyMistralConf = {
  baseUrl: PORTKEY_GATEWAY_URL,
  defaultHeaders: createHeaders({
    apiKey: env.get('PORTKEY_KEY'),
    virtualKey: env.get('MISTRAL_VKEY'),
  }),
}

export function getModel(modelName: IOpenAiModels | IMistralModels) {
  if (modelName.includes('gpt')) {
    return new ChatOpenAI({
      ...commonChatOptions,
      apiKey: 'X',
      model: modelName,
      configuration: portKeyOpenAIConf,
    })
  }

  if (modelName.includes('mistral')) {
    return new ChatOpenAI({
      ...commonChatOptions,
      apiKey: 'X',
      model: modelName,
      configuration: portKeyMistralConf,
    })
  }

  throw new Error('LLM model unsupported...')
}

The model is then passed to LangChain. However, I get the following error: “Incorrect API key provided: X. You can find your API key at https://platform.openai.com/account/api-keys.” Moreover, it doesn't work with models other than OpenAI. Finally, if I simply use the OpenAI API key instead of the virtual key, the request works but nothing is logged in the Portkey dashboard.

I tried a simple request with the OpenAI Node library and it works: the request gets logged in Portkey, so my Portkey API key is correct.

What Should Have Happened?

This should pass the request through the gateway and log the LLM call in the Portkey dashboard.

Relevant Code Snippet

No response

Your Twitter/LinkedIn

https://www.linkedin.com/in/mathieu-rousseau-929044150/

Version

0.1.xx (Default)

Relevant log output

No response

Code of Conduct

vrushankportkey commented 3 days ago

@Mathieu-R is the same ChatOpenAI model that you created here passed to Langchain?

Mathieu-R commented 3 days ago

> @Mathieu-R is the same ChatOpenAI model that you created here passed to Langchain?

Yes. You can even reproduce it with the following simple example:

import { ChatOpenAI } from "@langchain/openai";
import { createHeaders, PORTKEY_GATEWAY_URL } from "portkey-ai";

const portKeyConf = {
    baseUrl: PORTKEY_GATEWAY_URL,
    defaultHeaders: createHeaders({
        apiKey: PORTKEY_KEY,
        VirtualKey: OPENAI_VKEY
    })
}

const chat = new ChatOpenAI({
    apiKey: "X",
    model: "gpt-3.5-turbo",
    configuration: portKeyConf
})

await chat.invoke("What is the meaning of life, universe and everything? Answer in a few words.")

I get the following error

AuthenticationError: 401 Incorrect API key provided: X. You can find your API key at https://platform.openai.com/account/api-keys.

csgulati09 commented 3 days ago

Hey! I was just looking into the issue.

See this implementation of the portKeyConf

const portKeyConf = {
    baseURL: PORTKEY_GATEWAY_URL,
    defaultHeaders: createHeaders({
        apiKey: <PORTKEY_API_KEY>,
        virtualKey: <OPENAI_V_KEY>,
        provider: "openai"
    })
}

Four points to note here:

  1. We need to set the provider as well, in this case provider: "openai"
  2. The key has to be baseURL and not baseUrl
  3. The key has to be virtualKey and not VirtualKey
  4. If you are using virtualKey, you may omit the provider key in the config object.

I hope this solves the issue, if not, feel free to message us. Happy to help. :)
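Putting those four points together, the failing reproduction above would become something like this sketch (the key values come from environment variables here; fill in your own):

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { createHeaders, PORTKEY_GATEWAY_URL } from "portkey-ai";

// baseURL (not baseUrl) and virtualKey (not VirtualKey) are the crucial fixes.
const portKeyConf = {
    baseURL: PORTKEY_GATEWAY_URL,
    defaultHeaders: createHeaders({
        apiKey: process.env.PORTKEY_KEY,     // Portkey API key
        virtualKey: process.env.OPENAI_VKEY, // virtual key for the OpenAI provider
    }),
};

// The apiKey passed to ChatOpenAI is a dummy value; the real provider
// credentials are resolved by the Portkey gateway from the virtual key.
const chat = new ChatOpenAI({
    apiKey: "X",
    model: "gpt-3.5-turbo",
    configuration: portKeyConf,
});

const res = await chat.invoke(
    "What is the meaning of life, universe and everything? Answer in a few words."
);
console.log(res.content);
```

Running this requires live Portkey and virtual keys, so treat it as a template rather than a ready-made test.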

Mathieu-R commented 3 days ago


Yes, it works. Sorry for the inconvenience, I misread the docs. Thanks!
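For reference, carrying the same fix (baseURL instead of baseUrl) back into the multi-provider getModel helper from the original report gives a sketch like the following. The env helper, commonChatOptions, and the model-name types are stand-ins for pieces defined elsewhere in the reporter's app:

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { createHeaders, PORTKEY_GATEWAY_URL } from "portkey-ai";

// Stand-ins for definitions that live elsewhere in the original project.
type IOpenAiModels = "gpt-3.5-turbo" | "gpt-4o";
type IMistralModels = "mistral-large-latest";
const commonChatOptions = { temperature: 0 };
const env = { get: (key: string) => process.env[key] ?? "" };

// One config per provider; both route through the same Portkey gateway.
const portKeyOpenAIConf = {
    baseURL: PORTKEY_GATEWAY_URL, // was baseUrl, which ChatOpenAI ignores
    defaultHeaders: createHeaders({
        apiKey: env.get("PORTKEY_KEY"),
        virtualKey: env.get("OPENAI_VKEY"),
    }),
};

const portKeyMistralConf = {
    baseURL: PORTKEY_GATEWAY_URL,
    defaultHeaders: createHeaders({
        apiKey: env.get("PORTKEY_KEY"),
        virtualKey: env.get("MISTRAL_VKEY"),
    }),
};

export function getModel(modelName: IOpenAiModels | IMistralModels) {
    if (modelName.includes("gpt")) {
        return new ChatOpenAI({
            ...commonChatOptions,
            apiKey: "X", // dummy; real credentials come from the virtual key
            model: modelName,
            configuration: portKeyOpenAIConf,
        });
    }

    if (modelName.includes("mistral")) {
        return new ChatOpenAI({
            ...commonChatOptions,
            apiKey: "X",
            model: modelName,
            configuration: portKeyMistralConf,
        });
    }

    throw new Error("LLM model unsupported...");
}
```

As before, this only exercises the gateway once real keys are supplied, so it is a configuration template rather than a runnable test.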