BerriAI / litellm

Python SDK, Proxy Server to call 100+ LLM APIs using the OpenAI format - [Bedrock, Azure, OpenAI, VertexAI, Cohere, Anthropic, Sagemaker, HuggingFace, Replicate, Groq]
https://docs.litellm.ai/docs/

Support nemo guardrails on proxy #2070

Open krrishdholakia opened 6 months ago

krrishdholakia commented 6 months ago

API: https://github.com/NVIDIA/NeMo-Guardrails?tab=readme-ov-file#guardrails-server

Related issue: https://github.com/openhackathons-org/End-to-End-LLM/issues/16

krrishdholakia commented 6 months ago

It looks like the guardrails server is a proxy for the LLM API call itself, and can't be invoked as a standalone component (pre/post-call hook) the way LLM Guard can.

Weird :/ https://github.com/NVIDIA/NeMo-Guardrails/blob/develop/docs/user_guides/advanced/using-docker.md
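To illustrate the point above: since the guardrails server wraps the whole LLM call, integrating it would mean forwarding the entire chat request to it, not calling a moderation hook. A minimal sketch of what that payload might look like, assuming the OpenAI-style `POST /v1/chat/completions` endpoint with a `config_id` field described in the NeMo Guardrails server docs (the URL, port, and config name below are hypothetical):

```python
import json

# Assumed default address for a locally running guardrails server (hypothetical)
GUARDRAILS_URL = "http://localhost:8000/v1/chat/completions"


def build_guardrails_request(config_id, messages):
    """Build the JSON body the guardrails server expects.

    Note the contrast with LLM Guard: there is no separate "scan this text"
    call — the full chat request goes to the guardrails server, which then
    calls the underlying LLM itself.
    """
    return {
        "config_id": config_id,  # name of the rails config loaded on the server
        "messages": messages,    # OpenAI-format message list
    }


payload = build_guardrails_request(
    "my_rails_config",  # hypothetical config name
    [{"role": "user", "content": "Hello!"}],
)
print(json.dumps(payload))
```

So a proxy integration would have to treat the guardrails server as the LLM endpoint (i.e., route the request through it), rather than registering it as a guardrail component alongside the real provider call.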