probablykabari opened this issue 6 days ago
Hi @probablykabari,
We looked into this and have a fix on the way. It will be similar to how it works in the test we wrote:
```ts
await client.publishJSON({
  api: {
    name: "llm",
    provider: custom({ token: llmToken }),
    analytics: {
      name: "helicone",
      token: analyticsToken,
      baseUrl: "https://groq.helicone.ai/openai",
    },
  },
  body: {
    model,
  },
  callback,
});
```
Nice! I started on a PR myself but got sidetracked. My approach, though, was to make the interface more similar to a custom provider (with a function), roughly as sketched below.
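Purely a sketch of what I had in mind — `helicone` here is a hypothetical wrapper, not an existing SDK export, and the option names are made up:

```ts
// Hypothetical: wrap the underlying provider so the Helicone gateway URL can be
// derived from the wrapped provider instead of being hard-coded in the SDK.
await client.publishJSON({
  api: {
    name: "llm",
    provider: helicone(custom({ token: llmToken, baseUrl: "https://api.groq.com/openai/v1" }), {
      token: analyticsToken,
    }),
  },
  body: { model },
  callback,
});
```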
Path: /qstash/integrations/llm
When using a custom LLM provider, the Helicone integration doesn't seem to work. I think this is related to the URL used for the completion request.
For example, when using Groq the gateway URL should use the provider-specific subdomain, i.e. `https://groq.helicone.ai/openai/v1`, but the Upstash SDK sets it to the generic `https://gateway.helicone.ai/v1`, so the LLM request currently fails.
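For reference, a minimal sketch of the kind of call that fails today (the token variables, model name, and callback are placeholders):

```ts
await client.publishJSON({
  api: {
    name: "llm",
    provider: custom({ token: llmToken, baseUrl: "https://api.groq.com/openai/v1" }),
    analytics: { name: "helicone", token: analyticsToken },
  },
  body: { model: "llama-3.1-8b-instant" },
  callback,
});
// The analytics route is sent through https://gateway.helicone.ai/v1, but for Groq
// the gateway should be https://groq.helicone.ai/openai/v1, so the request fails.
```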