guardrails-ai / guardrails

Adding guardrails to large language models.
https://www.guardrailsai.com/docs
Apache License 2.0

Brainstorming validators to add #46

Closed ShreyaR closed 9 months ago

ShreyaR commented 1 year ago

Currently, most of the validators in guardrails are deterministic checks.

As a framework, guardrails can also support probabilistic validations. As an example: for a string output, check if the sentiment in the output is positive or negative.

Opening this issue to brainstorm ideas about what would be good deterministic and non-deterministic validators to add.
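To make the sentiment example concrete, here is a minimal sketch of what a probabilistic validator could look like. The class names, the result shape, and the lexicon-based scorer are all illustrative assumptions rather than the Guardrails Validator API; a real version would swap the scorer for an actual sentiment model.

```python
# Sketch of a probabilistic validator: fail a string output whose
# overall sentiment is negative. The lexicon scorer is a toy stand-in
# for a real sentiment model.
from dataclasses import dataclass

POSITIVE = {"great", "good", "excellent", "love", "happy"}
NEGATIVE = {"bad", "terrible", "awful", "hate", "sad"}


@dataclass
class ValidationOutcome:
    passed: bool
    score: float  # signed sentiment score in [-1, 1]


class SentimentValidator:
    """Fails a string output whose overall sentiment is negative."""

    def validate(self, text: str) -> ValidationOutcome:
        words = text.lower().split()
        pos = sum(w in POSITIVE for w in words)
        neg = sum(w in NEGATIVE for w in words)
        total = pos + neg
        # No sentiment-bearing words at all counts as neutral (score 0).
        score = 0.0 if total == 0 else (pos - neg) / total
        return ValidationOutcome(passed=score >= 0, score=score)
```

Unlike a deterministic check (e.g. a regex), the pass/fail decision here rides on a model's score and a threshold, which is the general shape most of the ideas in this thread would take.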

neubig commented 1 year ago

We're developing a library that makes it possible to validate generated text according to different criteria:

I sent one PR here, and if that looks useful I could add other ones too! https://github.com/ShreyaR/guardrails/pull/48

ShreyaR commented 1 year ago

@neubig that's a great suggestion, having a deeper integration between Guardrails and Critique would be awesome! Happy to offer any support as you're adding demos. I'll also start hacking on adding a few pieces in.

neubig commented 1 year ago

Great! That'd be awesome. It might be nice if it was possible to have a somewhat more generalized way to call the various Critique metrics (without re-implementing the boilerplate in every notebook). I can think about that more, but any pointers are welcome too.

krandiash commented 1 year ago

Just added a PR #61 that shows an example of a validator that uses embeddings to do a semantic check. The Critique validators sound super cool @neubig, maybe one way is to subclass Validator (CritiqueValidator(Validator)), do the boilerplate setup in the superclass and then implement all the downstream ones?
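The subclassing idea above could look roughly like the sketch below: the base class owns the shared metric-calling boilerplate, and each downstream validator is a thin subclass that only names its metric. Everything here is hypothetical — `call_metric` is a placeholder for whatever the Critique client exposes, and the constant it returns just keeps the sketch runnable.

```python
# Sketch of CritiqueValidator(Validator): shared setup in the superclass,
# one small subclass per Critique metric.
from abc import ABC, abstractmethod


class Validator(ABC):
    @abstractmethod
    def validate(self, text: str) -> bool: ...


class CritiqueValidator(Validator):
    """Shared boilerplate for all Critique-backed validators."""

    def __init__(self, metric: str, threshold: float):
        self.metric = metric
        self.threshold = threshold

    def call_metric(self, text: str) -> float:
        # Hypothetical: a real integration would call the Critique API
        # for self.metric here. A constant keeps the sketch runnable.
        return 0.5

    def validate(self, text: str) -> bool:
        return self.call_metric(text) >= self.threshold


class ToxicityValidator(CritiqueValidator):
    """Example downstream validator: just picks a metric and a default."""

    def __init__(self, threshold: float = 0.5):
        super().__init__(metric="toxicity", threshold=threshold)
```

This keeps each new metric to a few lines, which is the payoff of doing the setup once in the superclass.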

smohiuddin commented 9 months ago

We've added a number of validators that are more probabilistic in nature, such as provenance checks. Resolving this now.