guardrails-ai / guardrails

Adding guardrails to large language models.
https://www.guardrailsai.com/docs
Apache License 2.0
4.02k stars 305 forks

[feat] what's the prompt generated from Guard.from_pydantic #919

Open jack2684 opened 3 months ago

jack2684 commented 3 months ago

Description
When using the following code, I don't know what will be sent to the LLM endpoint.

import openai

guard = Guard.from_pydantic(output_class=Pet, prompt=prompt)

raw_output, validated_output, *rest = guard(
    llm_api=openai.completions.create,
    model="gpt-3.5-turbo-instruct",  # the completions API takes `model`; `engine` is deprecated
)
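For context on what happens under the hood here: `from_pydantic` derives a schema from the output class and injects it into the prompt before the call. Below is a stdlib-only sketch of that derivation idea, using a dataclass as a stand-in for the Pydantic model; it is an illustration of the concept, not Guardrails' actual implementation, and the `Pet` fields are assumed for the example.

```python
# Stdlib-only sketch: derive a minimal JSON-schema-like dict from an output
# class, roughly what Guard.from_pydantic does (in far more detail) before
# compiling the prompt. Not the library's real code.
from dataclasses import dataclass, fields

@dataclass
class Pet:
    name: str  # assumed fields for illustration
    age: int

def schema_for(cls):
    """Map each dataclass field's Python type to a JSON-schema type name."""
    type_names = {str: "string", int: "integer", float: "number", bool: "boolean"}
    return {
        "type": "object",
        "properties": {
            f.name: {"type": type_names.get(f.type, "string")} for f in fields(cls)
        },
    }

print(schema_for(Pet))
```

A schema like this is what ends up interpolated into the final prompt text, which is why the compiled prompt differs from the `prompt` string you passed in.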

Even the history doesn't show it, and I don't want to be able to view the prompt only after it has been sent successfully:

guard_llm.history.last.compiled_instructions

This will output something like

You are a helpful assistant, able to express yourself purely through JSON, strictly and precisely adhering to the provided XML schemas.

I don't see where the provided XML schemas are.

Why is this needed
I would like to get the original prompt for debugging purposes.

Implementation details [If known, describe how this change should be implemented in the codebase]

End result

guard = Guard.from_pydantic(output_class=Pet, prompt=prompt)
guard.compiled_prompt_to_be_sent
dtam commented 2 months ago

hi @jack2684 this should be available at guard.history.last.compiled_prompt. Could you share where you found the guard_llm.history.last.compiled_instructions reference, so we can update the docs accordingly? Thanks!
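For anyone wanting to preview the prompt before any call is made, the substitution Guardrails performs can be approximated by hand: render the template with the schema yourself. This is a stdlib-only sketch; the `${output_schema}` placeholder name and the template text are illustrative assumptions, not guaranteed to match Guardrails' internals.

```python
# Sketch: preview a "compiled" prompt before sending anything to an LLM,
# by substituting a schema into a prompt template yourself.
import json
from string import Template

# Hypothetical template; the ${output_schema} placeholder is assumed.
prompt_template = Template(
    "Given the following description, extract a pet record.\n\n"
    "Return JSON matching this schema:\n${output_schema}"
)

schema = {"type": "object", "properties": {"name": {"type": "string"}}}
compiled_prompt = prompt_template.substitute(
    output_schema=json.dumps(schema, indent=2)
)
print(compiled_prompt)
```

After a real call, the library-rendered equivalent of this string is what `guard.history.last.compiled_prompt` exposes.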