
Issue with Helicone and custom LLMs #313

probablykabari commented 6 days ago

Path: /qstash/integrations/llm

When using a custom LLM provider, the Helicone integration doesn't seem to work. I think this is related to the URL being used for the completion request.

For example, when using Groq the gateway URL should be `https://groq.helicone.ai/openai/v1`, but the URL in the Upstash SDK is set to `https://gateway.helicone.ai/v1`. Using it as-is will make the LLM request fail.
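For context, here is a minimal sketch of the failing setup. The package name `@upstash/qstash` and the `custom` provider's `baseUrl`/`token` options are assumptions based on the SDK docs; the tokens, model name, and callback URL are placeholders, not values from this thread:

```ts
import { Client, custom } from "@upstash/qstash";

// Placeholder token; a real value would come from the QStash console.
const client = new Client({ token: "<QSTASH_TOKEN>" });

await client.publishJSON({
  api: {
    name: "llm",
    // Custom provider pointed at Groq's OpenAI-compatible endpoint (assumed).
    provider: custom({
      baseUrl: "https://api.groq.com/openai/v1",
      token: "<GROQ_API_KEY>",
    }),
    // With Helicone analytics enabled, the SDK routes the request through
    // the generic https://gateway.helicone.ai/v1 rather than the
    // provider-specific https://groq.helicone.ai/openai/v1, so the
    // Groq request fails.
    analytics: { name: "helicone", token: "<HELICONE_API_KEY>" },
  },
  body: {
    model: "llama-3.1-8b-instant", // placeholder model name
    messages: [{ role: "user", content: "hello" }],
  },
  callback: "https://example.com/qstash-callback",
});
```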

CahidArda commented 2 days ago

Hi @probablykabari,

We looked into this and have a fix on the way. It will be similar to how it works in the test we wrote:

```ts
import { Client, custom } from "@upstash/qstash";

// Placeholder values for the test excerpt below.
const client = new Client({ token: "<QSTASH_TOKEN>" });
const llmToken = "<PROVIDER_API_KEY>"; // e.g. a Groq API key
const analyticsToken = "<HELICONE_API_KEY>";
const model = "<MODEL_NAME>";
const callback = "https://example.com/callback";

await client.publishJSON({
  api: {
    name: "llm",
    provider: custom({ token: llmToken }),
    analytics: {
      name: "helicone",
      token: analyticsToken,
      // The fix: callers can point analytics at the provider-specific
      // Helicone gateway instead of the generic hardcoded one.
      baseUrl: "https://groq.helicone.ai/openai",
    },
  },
  body: {
    model,
  },
  callback,
});
```
probablykabari commented 1 day ago

Nice! I started making a PR myself but got sidetracked. My approach, though, was to make the interface more similar to a custom provider (with a function).