Closed · tv-ankur closed this issue 1 year ago
Hi, sorry for the late reply.
It works fine for me if I use an instructions tag. Could you try a spec like this?
<rail version="0.1">
<output>
<list name="pre_screening_questions" description="Generate the list of pre-screening questions based on given job description." format="length: 2 10" on-fail-length="noop">
<object>
<integer name="qa_id" description="The question's id." />
<string name="question" description="The Pre-screening Question text." />
<string name="answer" description="The Pre-screening Answer text." />
</object>
</list>
</output>
<instructions>
You are a helpful assistant only capable of communicating with valid JSON, and no other text.
@json_suffix_prompt_examples
</instructions>
<prompt>
Generate a dataset of pre-screening questions and brief answers to shortlist potential candidates that matches with the following job description:
{{job_description}}
Extract information from this document and return a JSON that follows the correct schema.
@xml_prefix_prompt
{output_schema}
</prompt>
</rail>
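For reference, this is the JSON shape the spec above asks the model to emit, with a small stdlib check that mirrors the format="length: 2 10" bound on the list (the question and answer text here is invented purely for illustration):

```python
import json

# Hypothetical example of the output the spec above describes:
# a list of 2-10 objects, each with qa_id, question, and answer.
example_output = {
    "pre_screening_questions": [
        {
            "qa_id": 1,
            "question": "How many years of Python experience do you have?",
            "answer": "At least three years.",
        },
        {
            "qa_id": 2,
            "question": "Are you comfortable working with REST APIs?",
            "answer": "Yes, in production settings.",
        },
    ]
}

def within_length(items, low=2, high=10):
    # Mirrors the spec's format="length: 2 10" constraint on the list.
    return low <= len(items) <= high

assert within_length(example_output["pre_screening_questions"])
print(json.dumps(example_output, indent=2))
```

If the model returns a list outside those bounds, the on-fail-length="noop" attribute tells Guardrails to keep the value as-is rather than re-ask or fix it.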
Describe the bug
I am trying to generate pre-screening questions for a recruiter from a job description, but I get inconsistent results. Since I am using OpenAI, I don't get the exact same error every time; the ones mentioned here vary too.
To Reproduce
Errors: I get different errors when I re-run the program.
Error-1:
Traceback (most recent call last):
  File "/Users/ankurkhandelwal/Desktop/Python/Github/ChatPdf/pre_screen_question_rail.py", line 41, in <module>
    raw_llm_response, validated_response = guard(openai.ChatCompletion.create, prompt_params={"job_description": job_description_text},
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/guardrails/guard.py", line 166, in __call__
    guard_history = runner(prompt_params=prompt_params)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/guardrails/run.py", line 90, in __call__
    validated_output, reasks = self.step(
    ^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/guardrails/run.py", line 147, in step
    validated_output = self.validate(index, output_as_dict, output_schema)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/guardrails/run.py", line 266, in validate
    validated_output = output_schema.validate(output_as_dict)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/guardrails/schema.py", line 332, in validate
    validated_response = self[field].validate(
    ^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/guardrails/datatypes.py", line 269, in validate
    schema = validator.validate_with_correction(key, value, schema)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/guardrails/validators.py", line 204, in validate_with_correction
    return self.validate(key, value, schema)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/guardrails/validators.py", line 609, in validate
    last_val = [value[-1]]
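The traceback cuts off at last_val = [value[-1]] inside the length validator. One plausible failure mode (an assumption on my part, since the actual exception line is truncated) is that the model returned an empty list of questions, in which case indexing with [-1] raises IndexError:

```python
# Minimal reproduction of the assumed failure mode: the LLM produced
# an empty pre_screening_questions list, so value[-1] has nothing to index.
value = []  # hypothetical empty LLM output
try:
    last_val = [value[-1]]  # same expression as guardrails/validators.py line 609
except IndexError as exc:
    print(f"IndexError: {exc}")
```

If that is what is happening, the noop/re-ask behavior of the length validator never gets a chance to run because the validator itself crashes before returning.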