run-llama / llama_index

LlamaIndex is a data framework for your LLM applications
https://docs.llamaindex.ai
MIT License

[Question]: How to use guardrails with the query engine #16162

Open JINO-ROHIT opened 1 day ago

JINO-ROHIT commented 1 day ago

Question Validation

Question

I'm trying to attach NeMo Guardrails or any other guardrails at the query engine layer. How can I do this?

    nodes = retriever_metadata.retrieve(prompt)
    print(nodes)  # check threshold

    response = query_engine.query(prompt)
    for text in response.response_gen:
        full_response += text + ""
        time.sleep(0.05)
        message_placeholder.markdown(full_response + "▌")

dosubot[bot] commented 1 day ago

To attach NeMo Guardrails or any other guardrails to the query engine layer, follow these steps:

  1. Install necessary packages:

    %pip install llama-index-llms-openai
    %pip install llama-index-output-parsers-guardrails
    %pip install guardrails-ai
  2. Load documents and build the VectorStoreIndex:

    from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
    
    # Use a 512-token chunk size when splitting documents into nodes
    Settings.chunk_size = 512
    
    # Load documents
    documents = SimpleDirectoryReader("./data/paul_graham/").load_data()
    
    # Build the index
    index = VectorStoreIndex.from_documents(documents)
  3. Define the Guardrails Spec and create the Guard object:

    rail_spec = """
    <rail version="0.1">
    
    <output>
       <list name="points" description="Bullet points regarding events in the author's life.">
           <object>
               <string name="explanation" format="one-line" on-fail-one-line="noop" />
               <string name="explanation2" format="one-line" on-fail-one-line="noop" />
               <string name="explanation3" format="one-line" on-fail-one-line="noop" />
           </object>
       </list>
    </output>
    
    <prompt>
    
    Query string here.
    
    @xml_prefix_prompt
    
    {output_schema}
    
    @json_suffix_prompt_v2_wo_none
    </prompt>
    </rail>
    """
  4. Create the output parser and attach it to the LLM object:

    from llama_index.output_parsers.guardrails import GuardrailsOutputParser
    from llama_index.llms.openai import OpenAI
    
    # Create output parser object
    output_parser = GuardrailsOutputParser.from_rail_string(rail_spec, llm=OpenAI())
    
    # Attach to an LLM object
    llm = OpenAI(output_parser=output_parser)
  5. Attach the LLM with guardrails to the query engine and run a query (a sketch for parsing the structured response follows this list):

    query_engine = index.as_query_engine(llm=llm)
    response = query_engine.query(
       "What are the three items the author did growing up?",
    )
    
    print(response)
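
Since the rail spec constrains the output to a JSON object, the response text from step 5 should parse directly. A minimal sketch, assuming the model actually followed the `<output>` schema above:

    import json
    
    # The guarded response should be a JSON object matching the rail
    # spec's <output> schema, i.e. {"points": [{"explanation": ...}, ...]}.
    data = json.loads(str(response))
    for point in data["points"]:
        print(point["explanation"])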

By following these steps, you can attach guardrails validation to the query engine layer [1][2][3].
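
The steps above use the guardrails-ai output parser. If you specifically want NeMo Guardrails, one common pattern is to wrap the query engine in a registered action that your Colang flows call. A minimal sketch, assuming NeMo Guardrails' standard `LLMRails` API, a `./config` directory holding your `config.yml` and flow definitions, and a hypothetical action name `query_kb`:

    from nemoguardrails import LLMRails, RailsConfig
    
    # Load a NeMo Guardrails configuration (config.yml + Colang flows).
    # "./config" is a placeholder path for your own rails definitions.
    config = RailsConfig.from_path("./config")
    rails = LLMRails(config)
    
    async def query_kb(query: str) -> str:
        # Run the LlamaIndex query engine inside the rails.
        response = await query_engine.aquery(query)
        return str(response)
    
    # Register the query engine as an action; your flows decide when
    # "query_kb" is invoked, and the rails validate input/output around it.
    rails.register_action(query_kb, name="query_kb")
    
    result = rails.generate(
        messages=[{"role": "user", "content": "What did the author do growing up?"}]
    )
    print(result["content"])

With this layout, the NeMo input/output rails run around the conversation, while the guardrails-ai output parser from the steps above (if you keep it attached to the LLM) continues to enforce the response schema.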

To continue talking to Dosu, mention @dosu.