guardrails-ai / guardrails

Adding guardrails to large language models.
https://www.guardrailsai.com/docs
Apache License 2.0

feat (langchain): support configurable fields and alternatives for guard and validator runnables #954

Open · kaushikb11 opened 3 months ago

kaushikb11 commented 3 months ago

Description

This request covers support for LangChain's two runtime-configuration methods on guard and validator runnables:

- `configurable_fields`: configure particular fields of a runnable. This is related to the `.bind` method on runnables, but lets you specify parameters for a given step in a chain at runtime rather than binding them beforehand.
- `configurable_alternatives`: declare named alternatives for a particular runnable, then select and swap in one of those alternatives at runtime.

https://python.langchain.com/v0.2/docs/how_to/configure/
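For intuition, here is a dependency-free sketch of the `configurable_fields` pattern described above. The class and parameter names are illustrative only, not LangChain internals: per-invoke `configurable` values override the defaults bound when the step was built.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class MiniRunnable:
    """Minimal stand-in for a runnable: a callable step plus default params."""
    fn: Callable
    params: dict = field(default_factory=dict)

    def invoke(self, value, config=None):
        # Per-call "configurable" values override the defaults bound at build time.
        overrides = (config or {}).get("configurable", {})
        return self.fn(value, {**self.params, **overrides})

# A step whose behavior depends on a configurable field ("shout").
upper = MiniRunnable(
    fn=lambda text, p: text.upper() if p.get("shout") else text,
    params={"shout": False},
)

print(upper.invoke("hello"))                                     # -> hello
print(upper.invoke("hello", {"configurable": {"shout": True}}))  # -> HELLO
```

The real LangChain API works the same way conceptually: defaults are set when the chain is composed, and `with_config(configurable={...})` overrides them for a single run.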

Expected Interface

# Imports assumed for this sketch
from guardrails import Guard
from guardrails.hub import CompetitorCheck, ToxicLanguage, RegexMatch
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import ConfigurableField

base_guard = Guard().use_many(
    CompetitorCheck(competitors=["delta", "american airlines", "united"], on_fail="fix"),
    ToxicLanguage(on_fail="remove"),
).to_runnable()

configurable_guard = base_guard.configurable_alternatives(
    ConfigurableField(id="guard"),
    default_key="content",
    regex=Guard().use(
        RegexMatch(regex=r'\b[A-Z]{3}\b', on_fail="exception")
    ).to_runnable(),
)

# Define the LCEL chain components ("model" is any LangChain chat model, e.g. ChatOpenAI)
prompt = ChatPromptTemplate.from_template("Answer this question {question}")
output_parser = StrOutputParser()

chain = prompt | model | output_parser | configurable_guard

# Use the default guard (CompetitorCheck and ToxicLanguage)
result_content = chain.invoke({"question": "What are the top five airlines for domestic travel in the US?"})
print("Content Check Result:", result_content)

# Use the RegexMatch guard
result_regex = chain.with_config(configurable={"guard": "regex"}).invoke(
    {"question": "List three major airport codes in the US."}
)
print("Regex Check Result:", result_regex)
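To make the expected dispatch behavior concrete, here is a minimal, dependency-free sketch of what `configurable_alternatives` would do for a guard runnable. Everything here is illustrative: the class, the stand-in "guards" (simple string transforms), and the key names mirror the interface above but are not the guardrails or LangChain implementation.

```python
class ConfigurableAlternatives:
    """Illustrative sketch: pick one of several runnables at invoke time
    via a key in the per-call "configurable" dict."""

    def __init__(self, key, default_key, alternatives):
        self.key = key                    # e.g. "guard"
        self.default_key = default_key    # e.g. "content"
        self.alternatives = alternatives  # name -> callable

    def invoke(self, value, config=None):
        # Fall back to the default alternative when no override is given.
        choice = (config or {}).get("configurable", {}).get(self.key, self.default_key)
        return self.alternatives[choice](value)

guard = ConfigurableAlternatives(
    key="guard",
    default_key="content",
    alternatives={
        # Stand-in for the CompetitorCheck/ToxicLanguage guard:
        "content": lambda t: t.replace("delta", "[COMPETITOR]"),
        # Stand-in for the RegexMatch guard (three-letter uppercase codes):
        "regex": lambda t: t if any(w.isupper() and len(w) == 3 for w in t.split())
                 else "(no match)",
    },
)

print(guard.invoke("fly delta today"))
# -> fly [COMPETITOR] today  (default "content" alternative)
print(guard.invoke("code JFK here", {"configurable": {"guard": "regex"}}))
# -> code JFK here  ("regex" alternative selected at runtime)
```

This is the behavior the proposed `base_guard.configurable_alternatives(...)` call would need: a default guard baked into the chain, with `with_config(configurable={"guard": "regex"})` swapping in another guard for a single invocation.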
Aman123lug commented 1 month ago

@kaushikb11 I'd like to work on this issue. Or are there any starter / good-first issues?