Arize-ai / phoenix

AI Observability & Evaluation
https://docs.arize.com/phoenix

Use locally deployed LLM for evaluation #3269

Closed: Talhamuhammadali closed this issue 3 days ago

Talhamuhammadali commented 4 months ago

I need to use locally deployed LLMs for evaluation within my current setup. While setting up LLM monitoring with Phoenix, I need to run evaluations against the traces, but I can only find evaluation support for API-based LLMs such as OpenAI.

If this is not currently possible, support for locally deployed LLMs for evaluation should be added. This would allow evaluation and monitoring in a fully local environment, particularly with an Nvidia Triton inference server integrated with LlamaIndex. External LLMs may be better as evaluators, but my requirements are data privacy and, of course, minimizing cost.

I am using Nvidia Triton to run an inference server, which is then used through the LlamaIndex integration for Nvidia Triton. If this is already possible, kindly point me in the right direction.
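For reference, a minimal sketch of what that setup might look like, assuming the llama-index-llms-nvidia-triton integration package; the server_url and model_name values are placeholders for the actual deployment:

```python
# Sketch only: serving a local LLM from Triton through LlamaIndex's
# Nvidia Triton integration. Endpoint and model name are assumptions.
from llama_index.llms.nvidia_triton import NvidiaTriton

llm = NvidiaTriton(
    server_url="localhost:8001",  # Triton's gRPC endpoint (assumed default port)
    model_name="ensemble",        # hypothetical name of the deployed model
)

print(llm.complete("Hello from Triton").text)
```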

mikeldking commented 4 months ago

Hi @Talhamuhammadali, we currently support LiteLLM for pointing to local LLMs. Does this work for your use case? https://docs.arize.com/phoenix/api/evaluation-models#litellmmodel
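A minimal sketch of that path, assuming a recent Phoenix version where evals live under phoenix.evals and a LiteLLM-supported local backend such as Ollama; the endpoint, model string, and sample dataframe are placeholders:

```python
# Sketch only: running a Phoenix eval against a local LLM via LiteLLM.
import os

import pandas as pd
from phoenix.evals import (
    HALLUCINATION_PROMPT_RAILS_MAP,
    HALLUCINATION_PROMPT_TEMPLATE,
    LiteLLMModel,
    llm_classify,
)

# Tell LiteLLM where the local server lives (hypothetical local endpoint).
os.environ["OLLAMA_API_BASE"] = "http://localhost:11434"

# "ollama/llama3" is a placeholder LiteLLM provider/model string.
model = LiteLLMModel(model="ollama/llama3")

# Tiny stand-in for a dataframe of traces pulled from Phoenix; the
# hallucination template expects input, reference, and output columns.
df = pd.DataFrame(
    {
        "input": ["What is Phoenix?"],
        "reference": ["Phoenix is an AI observability and evaluation library."],
        "output": ["Phoenix is a library for observing and evaluating LLM apps."],
    }
)

results = llm_classify(
    dataframe=df,
    model=model,
    template=HALLUCINATION_PROMPT_TEMPLATE,
    rails=list(HALLUCINATION_PROMPT_RAILS_MAP.values()),
)
print(results)
```

The same LiteLLMModel instance can be swapped in anywhere Phoenix expects an evaluation model, so no OpenAI-style API key is needed.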

Talhamuhammadali commented 4 months ago

Thanks for the response. They do have support for it, including for embedding models; I will explore whether this works.