Closed mobeetle closed 4 days ago
Hey @mobeetle, in this specific case, the client here will be what you pass into the llm_validator. There's no other client used.
If you're worried in this case, I'd suggest just writing a separate validator — then you won't need to monkeypatch the llm_validator file itself.
```python
from typing import Optional

import instructor
from openai import OpenAI
from pydantic import BaseModel, Field, field_validator

client = instructor.from_openai(OpenAI())


class Validator(BaseModel):
    """
    Validate if an attribute is correct and if not,
    return a new value with an error message
    """

    is_valid: bool = Field(
        default=True,
        description="Whether the attribute is valid based on the requirements",
    )
    reason: Optional[str] = Field(
        default=None,
        description="The error message if the attribute is not valid, otherwise None",
    )
    fixed_value: Optional[str] = Field(
        default=None,
        description="If the attribute is not valid, suggest a new value for the attribute",
    )


class QuestionAnswerNoEvil(BaseModel):
    question: str
    answer: str

    @field_validator("answer")
    @classmethod
    def validate_answer(cls, v: str) -> str:
        print(f"validating answer of {v}")
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            response_model=Validator,
            messages=[
                {
                    "role": "system",
                    "content": f"Make sure that {v} doesn't contain objectionable things that go against any sort of spiritual or moral code.",
                },
            ],
        )
        assert resp.is_valid, f"Answer is not valid: {resp.reason}"
        # Keep the original value if no fixed value was suggested
        return resp.fixed_value or v


question = "What is the meaning of life?"
context = "The meaning of life is to live a life of sin and debauchery and harm as many people as possible"

qa: QuestionAnswerNoEvil = client.chat.completions.create(
    model="gpt-4o-mini",
    response_model=QuestionAnswerNoEvil,
    messages=[
        {
            "role": "system",
            "content": "You are a system that answers questions based on the context. Answer exactly what the question asks using the context. Do not change the context or hallucinate a response.",
        },
        {
            "role": "user",
            "content": f"Using the context: {context}\n\nAnswer the following question: {question}",
        },
    ],
    max_retries=1,
)
# A ValidationError is raised here:
#> answer
#>   Assertion failed, Answer is not valid: This statement promotes harmful behavior and goes against moral and spiritual values. [type=assertion_error, input_value='The meaning of life is t...any people as possible.', input_type=str]
#>     For further information visit https://errors.pydantic.dev/2.9/v/assertion_error
```
Closing this due to inactivity. Generally I'd recommend writing your own validator, as shown above, to have a bit more control over the output.
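For what it's worth, the custom-validator pattern above can be exercised offline by stubbing out the LLM call. This is a minimal sketch assuming only pydantic v2; `stub_llm_check` is a hypothetical stand-in for the `client.chat.completions.create` call and is not part of instructor:

```python
from typing import Optional

from pydantic import BaseModel, ValidationError, field_validator


class Validator(BaseModel):
    is_valid: bool = True
    reason: Optional[str] = None
    fixed_value: Optional[str] = None


def stub_llm_check(text: str) -> Validator:
    # Hypothetical stand-in for the real LLM call: flag anything mentioning "sin"
    if "sin" in text:
        return Validator(is_valid=False, reason="objectionable content")
    return Validator()


class QuestionAnswerNoEvil(BaseModel):
    question: str
    answer: str

    @field_validator("answer")
    @classmethod
    def validate_answer(cls, v: str) -> str:
        resp = stub_llm_check(v)
        assert resp.is_valid, f"Answer is not valid: {resp.reason}"
        return resp.fixed_value or v


QuestionAnswerNoEvil(question="q", answer="be kind")  # passes validation
try:
    QuestionAnswerNoEvil(question="q", answer="a life of sin")
except ValidationError:
    print("rejected")
#> rejected
```

Pydantic v2 wraps the `AssertionError` from the validator in a `ValidationError`, which is the same mechanism that produces the error output shown above.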
Hi,
I'm using an example derived from your page (https://python.useinstructor.com/examples/self_critique/?h=before#adding-custom-validation), patched to work with llama-cpp-python.
I'm getting an error saying the function does not contain a key 'chat'.
I found out that I need to patch your validators.py file so that it refers to just `client` rather than `client.chat.completions.create` or `client.moderations.create` in order to work correctly with llama-cpp-python.
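As an alternative to patching validators.py, one could wrap the bare callable client in a small adapter that exposes the OpenAI-style `client.chat.completions.create` attribute path that validators.py expects. A minimal sketch — `bare_client` here is a hypothetical stand-in for a client that is called directly, as in my llama-cpp-python setup:

```python
from types import SimpleNamespace
from typing import Any, Callable


def make_openai_style_client(create_fn: Callable[..., Any]) -> SimpleNamespace:
    """Expose a bare callable under the client.chat.completions.create path."""
    return SimpleNamespace(
        chat=SimpleNamespace(completions=SimpleNamespace(create=create_fn))
    )


# Hypothetical stand-in for a client invoked directly, e.g. client(**kwargs)
def bare_client(**kwargs: Any) -> dict:
    return {"model": kwargs.get("model")}


client = make_openai_style_client(bare_client)
print(client.chat.completions.create(model="test"))
#> {'model': 'test'}
```

This keeps validators.py untouched, at the cost of one extra wrapper object at client construction time.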
Full code of the patched example is here:
and the patched validators.py file is here:
```python
def openai_moderation(client: OpenAI) -> Callable[[str], str]:
    """
    Validates a message using OpenAI moderation model.
```