Helicone / helicone

🧊 Open source LLM-Observability Platform for Developers. One-line integration for monitoring, metrics, evals, agent tracing, prompt management, playground, etc. Supports OpenAI SDK, Vercel AI SDK, Anthropic SDK, LiteLLM, LlamaIndex, LangChain, and more. 🍓 YC W23
https://www.helicone.ai
Apache License 2.0

Allow proxying to custom LLM APIs #464

Closed: alexkreidler closed this issue 4 months ago

alexkreidler commented 1 year ago

Currently, Helicone only allows people to proxy to the following services:

https://github.com/Helicone/helicone/blob/868d3b7e424c938067611ecbd5c8d37459bdc3ff/worker/src/lib/HeliconeProxyRequest/mapper.ts#L209-L214

However, there are many other OpenAI-compatible services, and people are building OpenAI-compatible interfaces to open-source models like Llama, so Helicone could provide metrics for them without any code modifications.
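
For reference, the linked code amounts to a hardcoded allowlist of provider hosts; anything outside it is rejected. A minimal sketch of that pattern (the names and host list here are illustrative, not the actual Helicone source):

```typescript
// Illustrative sketch only; not the actual mapper.ts source.
// The worker resolves the proxy target from a fixed set of approved
// provider hosts, so any other base URL is rejected.
const APPROVED_PROVIDER_HOSTS = new Set([
  "api.openai.com",
  "api.anthropic.com",
  // ...the handful of hosts hardcoded in mapper.ts
]);

function resolveTargetBaseUrl(requestedUrl: string): string {
  const url = new URL(requestedUrl);
  if (!APPROVED_PROVIDER_HOSTS.has(url.host)) {
    throw new Error(`Unsupported provider host: ${url.host}`);
  }
  return url.origin;
}
```

Accepting a caller-supplied target URL (e.g. via a request header) instead would let any OpenAI-compatible server, such as a local Llama deployment, be proxied without code changes.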

coreywagehoft commented 11 months ago

I would really like to see this functionality as well, so I could use Helicone with something like LocalAI.

chitalian commented 10 months ago

Hi! We are adding this super soon! Next week this should be merged in.

@alexkreidler and @coreywagehoft: @colegottdank is actually working on this right now :)

chingweesze-oursky commented 9 months ago

Hi. May I know whether PaLM 2 support has been added? Or is it already in some branch pending merge to main?

leandrosilvaferreira commented 4 months ago

Was this ever finished? I need to track my users' token consumption.

colegottdank commented 4 months ago

@leandrosilvaferreira, hi, you can use our gateway integration! It allows any target URL. We maintain a whitelist of providers; if the target is not part of that whitelist, we allow up to 10k requests per day.

If your desired provider is not on the whitelist and is a valid provider, we can add it.
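
For anyone finding this later, here is a hedged sketch of calling a custom OpenAI-compatible endpoint through the gateway with the openai Node SDK. The gateway base URL and the `Helicone-Auth` / `Helicone-Target-Url` header names are assumptions to verify against the current Helicone docs, and the target URL and model name are placeholders:

```typescript
import OpenAI from "openai";

// Route requests through the Helicone gateway while pointing at a custom
// OpenAI-compatible server (e.g. a local Llama deployment). The header
// names below are assumptions based on the gateway docs; verify before use.
const client = new OpenAI({
  baseURL: "https://gateway.helicone.ai", // assumed gateway base URL
  apiKey: process.env.CUSTOM_PROVIDER_API_KEY!,
  defaultHeaders: {
    "Helicone-Auth": `Bearer ${process.env.HELICONE_API_KEY}`,
    "Helicone-Target-Url": "https://my-llm-host.example.com", // hypothetical custom endpoint
  },
});

const completion = await client.chat.completions.create({
  model: "llama-3-8b-instruct", // whatever model the target server exposes
  messages: [{ role: "user", content: "Hello!" }],
});
console.log(completion.choices[0].message.content);
```

Requests made this way should show up in the Helicone dashboard with the usual token and latency metrics, which covers the per-user token tracking asked about above.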