Closed: alex-heim closed this issue 11 months ago
Hello @alex-heim,
We recently updated the library with a few breaking changes, and you seem to have hit one of them!
In 0.3.0, the guard call now returns a ValidationOutcome object with multiple attributes, including the raw LLM response and the validated response. You can access both responses like this:
raw_llm_response, validated_response, *rest = guard(…)
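Equivalently, you can keep the whole ValidationOutcome in one variable and read the individual attributes off it. A minimal sketch, using the attribute names shown in the output later in this thread (raw_llm_output, validated_output, validation_passed, error):

outcome = guard(...)  # same call as above; returns a ValidationOutcome
print(outcome.raw_llm_output)     # raw text returned by the LLM
print(outcome.validated_output)   # output after validation
print(outcome.validation_passed)  # True if validation succeeded
print(outcome.error)              # None unless the call failed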
Apologies for the loss in clarity. We'll be publishing a migration guide soon that covers this in detail. Please update guardrails-ai to 0.3.0 with pip install -U guardrails-ai.
If that doesn't help, can you please add more details about the issue?
Thanks Karan! Appreciate the help. I've been able to just pull the ValidationOutcome into one variable, so I think that's working fine for me. The issue seems to be with the callable function. I've tried passing in the raw openai.completions.create function as well as the wrapped OpenAI function from Autogen, and both seem to fail. Maybe it has something to do with the recent changes in the OpenAI package?
I just tried testing with openai v1.3.8 and guardrails-ai 0.3.0, and it works fine without any errors. Try reinstalling in a fresh virtual environment, and if it still doesn't work, can you add the entire error stack trace? That will be helpful for debugging.
I also just tried with openai v1.3.8 and guardrails-ai 0.3.0 and I'm getting the same error.
[Call(iterations=[Iteration(inputs=Inputs(llm_api=<guardrails.llm_providers.ArbitraryCallable object at 0x2994b1010>, llm_output=None, instructions=None, prompt=Prompt(
What kind of pet should I get and what should...), msg_history=None, prompt_params={'doctors_notes': 'hi'}, num_reasks=1, metadata={}, full_schema_reask=True), outputs=Outputs(llm_response_info=None, parsed_output=None, validation_output=None, validated_output=None, reasks=[], validator_logs=[], error='The callable `fn` passed to `Guard(fn, ...)` failed with the following error: `create() takes 1 argument(s) but 2 were given`. Make sure that `fn` can be called as a function that takes in a single prompt string and returns a string.', exception=PromptCallableException('The callable `fn` passed to `Guard(fn, ...)` failed with the following error: `create() takes 1 argument(s) but 2 were given`. Make sure that `fn` can be called as a function that takes in a single prompt string and returns a string.')))], inputs=CallInputs(llm_api=<bound method Completions.create of <openai.resources.completions.Completions object at 0x2994b4090>>, llm_output=None, instructions=None, prompt='\n What kind of pet should I get and what should I name it?\n\n ${doctors_notes}\n\n ${gr.complete_json_suffix_v2}\n', msg_history=None, prompt_params={'doctors_notes': 'hi'}, num_reasks=1, metadata={}, full_schema_reask=True, args=[], kwargs={'engine': 'text-davinci-003'}))]
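For what it's worth, the last sentence of that error describes what guardrails expects from a custom callable: a function that takes a single prompt string and returns a string. A hypothetical wrapper along those lines (not from the guardrails docs; the function name, model, and client usage here are assumptions) would look like:

import openai

def call_llm(prompt: str, **kwargs) -> str:
    # Takes a single prompt string and returns a string, as the error message asks.
    response = openai.completions.create(
        model="text-davinci-003",  # assumption: same model used elsewhere in this thread
        prompt=prompt,
        max_tokens=1024,
    )
    return response.choices[0].text

# which could then be passed to the guard in place of openai.completions.create:
# validation_outcome = guard(call_llm, prompt_params={"doctors_notes": doctors_notes})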
Ok, I think I tracked it down. @alex-heim, are you setting your OpenAI key this way: openai.api_key = "YOUR_API_KEY", instead of os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY"?
Yes, that's spot on, @edk208! The newer versions of OpenAI require setting the API key with os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY" instead of openai.api_key = "YOUR_API_KEY". We'll update the docs accordingly. Thanks!
Here's how it should look now:
import openai
import os
from rich import print

os.environ["OPENAI_API_KEY"] = "YOUR_API_KEY"

# Wrap the OpenAI API call with the `guard` object
validation_outcome = guard(
    openai.completions.create,
    prompt_params={"doctors_notes": doctors_notes},
    engine="text-davinci-003",
    max_tokens=1024,
    temperature=0.3,
)

# Print the validated output from the LLM
print(validation_outcome)
The output:
ValidationOutcome(
raw_llm_output='{\n "gender": "Male",\n "age": 49,\n "symptoms": [\n {\n "symptom": "macular
rash",\n "affected_area": "head"\n },\n {\n "symptom": "Itchy, flaky, slightly scaly",\n
"affected_area": "neck"\n }\n ],\n "current_meds": [\n {\n "medication": "OTC steroid cream",\n
"response": "Moderate"\n }\n ]\n}',
validated_output={
'gender': 'Male',
'age': 49,
'symptoms': [
{'symptom': 'macular rash', 'affected_area': 'head'},
{'symptom': 'Itchy, flaky, slightly scaly', 'affected_area': 'neck'}
],
'current_meds': [{'medication': 'OTC steroid cream', 'response': 'Moderate'}]
},
reask=None,
validation_passed=True,
error=None
)
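Once you have the ValidationOutcome, the validated fields can be read straight off validated_output as a regular dict; for example, based on the output above:

validated = validation_outcome.validated_output
print(validated["gender"])                  # "Male"
print(validated["symptoms"][0]["symptom"])  # "macular rash"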
Closing this issue; I've opened a new one to reflect the learnings from this.
Apologies for the basic question, but I'm attempting to recreate the starting example from the docs. However, when I run this block:
I get the following error: error='The callable fn passed to Guard(fn, ...) failed with the following error: create() takes 1 argument(s) but 2 were given. Make sure that fn can be called as a function that takes in a single prompt string and returns a string.'
Has anyone found a way to resolve this?