Anindyadeep closed this issue 1 year ago.
Hey, Guardrails natively supports a couple of different LLMs (OpenAI, Cohere), as well as Manifest, which supports many more. You can also pass in any arbitrary Python function that takes "prompt" and "instructions" as arguments.
Please see the LLM API docs for more details.
I'm closing this issue for now, but feel free to reopen it if you run into any issues :)
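To make the "arbitrary Python function" point concrete, here is a minimal sketch of a callable matching that interface, assuming only what is stated above (a function taking `prompt` and `instructions`). The function name is illustrative and the body is a stand-in echo, not a real model call:

```python
# Minimal sketch: any Python callable taking `prompt` (and optionally
# `instructions`) can serve as the LLM hook. The body here is a stub;
# a real integration would call your model or provider instead.

def my_llm_api(prompt: str, instructions: str = "", **kwargs) -> str:
    # In a real integration this would invoke the model, e.g. a local
    # llama.cpp binding or an HTTP endpoint of an open source provider.
    text = f"{instructions}\n{prompt}".strip()
    return f"[stub completion for: {text}]"
```

You would then pass `my_llm_api` to Guardrails in place of `openai.Completion.create` (see the LLM API docs for the exact invocation).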
Yes, thanks @irgolic, exactly what I was looking for
Description
There are several open source LLMs and open source LLM providers right now. Examples:
Can we provide support for guardrails for these sets of models and providers?
Why is this needed
The OpenAI GPT API is not the only one that needs validation in end-to-end LLM pipelines. It is just as important for open source LLMs when developers are building and shipping use cases to production.
Implementation details
As far as I have seen in the code base, we might not need to make breaking changes. Rather, we might need to change the way we call the function: where we currently use `openai.Completion.create`, we similarly need to support the LLM call functions of other LLM providers.

End result
If this feature gets implemented, then we can do validation checks and evaluations for in-house LLMs without relying on OpenAI. This will also be very useful as an evaluation procedure for fine-tuning, and for integrating newer LLMs into a CI/CD process.
Here is the sample code:
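The original snippet is not preserved in this thread; the following is an illustrative sketch of the idea, assuming Guardrails accepts any callable taking `prompt` and `instructions`. The `LocalModel` class is a hypothetical stand-in for a local open source model binding (e.g. a gpt4all or llama.cpp wrapper), not a real library:

```python
# Illustrative sketch: wrap a locally hosted open source model behind
# the same prompt/instructions interface Guardrails expects.

class LocalModel:
    """Hypothetical stand-in for a local open source LLM binding."""

    def generate(self, text: str, max_tokens: int = 128) -> str:
        # A real binding would run inference here.
        return f"(generated up to {max_tokens} tokens for: {text})"

def local_llm_call(prompt: str, instructions: str = "", **kwargs) -> str:
    # This callable is what would be handed to Guardrails in place of
    # openai.Completion.create.
    model = LocalModel()
    full_prompt = f"{instructions}\n{prompt}".strip()
    return model.generate(full_prompt, **kwargs)
```

Hypothetical usage with a Guard object would then look roughly like `guard(local_llm_call, prompt_params={...})`, per the LLM API docs.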
This can be done similarly for gpt4all or llama.cpp, assuming the user has already installed the dependencies. Our job would just be to call the function and run it under Guardrails.