guardrails-ai / guardrails

Adding guardrails to large language models.
https://www.guardrailsai.com/docs
Apache License 2.0
3.88k stars 290 forks

[bug] Returning unexpected error, not consistent #168

Closed tv-ankur closed 1 year ago

tv-ankur commented 1 year ago

Describe the bug I am trying to generate pre-screening questions for a recruiter from a job description, but I get inconsistent results. Since I am using the OpenAI API, it does not always give me the exact error mentioned here either.

To Reproduce

rail_str = """
<rail version="0.1">

<output>
    <list name="pre_screening_questions" description="Generate the list of pre-screening questions based on given job description." format="length: 2 10" on-fail-length="noop">
        <object>
            <integer name="qa_id" description="The question's id." format="1-indexed" />
            <string name="question" description="The Pre-screening Question text."  />
            <string name="answer" description="The Pre-screening Answer text." />
         </object>
    </list>
</output>

<prompt>
Generate a dataset of pre-screening questions and brief answers to shortlist potential candidates that matches with the following job description:
{{job_description}}. Return a JSON that follows the correct schema.

@complete_json_suffix</prompt>

</rail>
"""
guard = gd.Guard.from_rail_string(rail_str)
raw_llm_response, validated_response = guard(
    openai.ChatCompletion.create,
    prompt_params={"job_description": job_description_text},
    model="gpt-3.5-turbo",
    max_tokens=3000,
    temperature=0,
)

print(validated_response)

**Errors** I get a different error each time I re-run the program.

**Error-1:**
Traceback (most recent call last):
  File "/Users/ankurkhandelwal/Desktop/Python/Github/ChatPdf/pre_screen_question_rail.py", line 41, in <module>
    raw_llm_response, validated_response = guard(openai.ChatCompletion.create, prompt_params={"job_description": job_description_text},
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/guardrails/guard.py", line 166, in __call__
    guard_history = runner(prompt_params=prompt_params)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/guardrails/run.py", line 90, in __call__
    validated_output, reasks = self.step(
                               ^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/guardrails/run.py", line 147, in step
    validated_output = self.validate(index, output_as_dict, output_schema)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/guardrails/run.py", line 266, in validate
    validated_output = output_schema.validate(output_as_dict)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/guardrails/schema.py", line 332, in validate
    validated_response = self[field].validate(
                         ^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/guardrails/datatypes.py", line 269, in validate
    schema = validator.validate_with_correction(key, value, schema)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/guardrails/validators.py", line 204, in validate_with_correction
    return self.validate(key, value, schema)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/guardrails/validators.py", line 609, in validate
    last_val = [value[-1]]

KeyError: -1
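As a guess at the failure mode (inferred from the traceback, not confirmed against the library source): the length validator's `last_val = [value[-1]]` assumes `value` is a list. If the model returns the questions keyed as a JSON object rather than an array, `value` is a dict, so `value[-1]` becomes a key lookup for the key `-1` instead of a negative index, which raises exactly this `KeyError`. A minimal standalone sketch:

```python
# Hypothetical reconstruction of Error-1: negative indexing works on a
# list but is a key lookup on a dict.
list_value = [{"qa_id": 1}, {"qa_id": 2}]
dict_value = {"0": {"qa_id": 1}, "1": {"qa_id": 2}}

last_val = [list_value[-1]]  # fine: last element of the list

try:
    last_val = [dict_value[-1]]  # dict has no key -1
except KeyError as err:
    print("KeyError:", err)  # matches the traceback above
```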

**Error-2:**
Traceback (most recent call last):
  File "/Users/ankurkhandelwal/Desktop/Python/Github/ChatPdf/pre_screen_question_rail.py", line 41, in <module>
    raw_llm_response, validated_response = guard(openai.ChatCompletion.create, prompt_params={"job_description": job_description_text},
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/guardrails/guard.py", line 166, in __call__
    guard_history = runner(prompt_params=prompt_params)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/guardrails/run.py", line 90, in __call__
    validated_output, reasks = self.step(
                               ^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/guardrails/run.py", line 147, in step
    validated_output = self.validate(index, output_as_dict, output_schema)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/guardrails/run.py", line 266, in validate
    validated_output = output_schema.validate(output_as_dict)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/guardrails/schema.py", line 332, in validate
    validated_response = self[field].validate(
                         ^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/guardrails/datatypes.py", line 278, in validate
    value = item_type.validate(i, item, value)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/guardrails/datatypes.py", line 319, in validate
    child_key, value.get(child_key, None), value
               ^^^^^^^^^
AttributeError: 'str' object has no attribute 'get'
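Error-2 looks like the flip side of the same problem (again an inference from the traceback): the object validator calls `value.get(child_key, None)`, assuming each list item is a dict, but on some runs the model emits bare strings inside the list. A short sketch with hypothetical item values:

```python
# Hypothetical reconstruction of Error-2: .get exists on dicts, not on str.
good_item = {"qa_id": 1, "question": "How many years of Python experience?", "answer": "3+"}
bad_item = "How many years of Python experience?"  # model returned a bare string

question = good_item.get("question")  # works: item is a dict

try:
    bad_item.get("question")  # str has no .get method
except AttributeError as err:
    print(err)  # 'str' object has no attribute 'get'
```

Either way, the root cause is the LLM output drifting from the schema between runs, which is why the errors are inconsistent.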

**Library version:**
guardrails_ai==0.1.6 and 0.1.7

**Additional context**
irgolic commented 1 year ago

Hi, sorry for the late reply.

It works fine for me if I use an instructions tag. Could you try using a spec like this?

<rail version="0.1">

<output>
    <list name="pre_screening_questions" description="Generate the list of pre-screening questions based on given job description." format="length: 2 10" on-fail-length="noop">
        <object>
            <integer name="qa_id" description="The question's id." />
            <string name="question" description="The Pre-screening Question text."  />
            <string name="answer" description="The Pre-screening Answer text." />
         </object>
    </list>
</output>
<instructions>
You are a helpful assistant only capable of communicating with valid JSON, and no other text.

@json_suffix_prompt_examples
</instructions>

<prompt>
Generate a dataset of pre-screening questions and brief answers to shortlist potential candidates that matches with the following job description:

{{job_description}}

Extract information from this document and return a JSON that follows the correct schema.

@xml_prefix_prompt

{output_schema}
</prompt>

</rail>