-
Even though I think LLMs generally do not work like this, I still wonder whether we could guard against some - otherwise super dumb - LLM to just learn our repo by heart and then achieve great results…
-
**Describe the bug**
Line 27 of `protectai/llm-guard/blob/main/examples/openai_api.py` says `if any(results_valid.values()) is False:`
`any(results_valid.values())` evaluates to True if any of t…
-
**Describe the bug**
I am trying to use the custom LLM wrapper so that I can add guardrails using a NVIDIA TensorRT LLM (TRT-LLM). I do not wish to use openai/azure openai for the guardrails call.
…
w8jie updated
2 weeks ago
-
If i want to install the llm-guard i run always in following issue:
```
ModuleNotFoundError: No module named 'torch'
[end of output]
note: This error originates from a subprocess, and…
-
**Description**
When using following code, I don't know what will be sent to the llm endpoint.
```
guard = Guard.from_pydantic(output_class=Pet, prompt=prompt)
raw_output, validated_output, *res…
-
**Describe the bug**
Update the `regex` dependency version for compatibility with other libraries
**Expected behavior**
A clear and concise description of what you expected to happen.
**Librar…
-
**Describe the bug**
In v0.5, when I run a SensitiveTopic validation with **disable_llm=true** (LLM disabled) and **device** with the default value of -1
- In the validation script, I got the erro…
-
when implemented below code I'm getting an error ..
#'FailResult' object has no attribute 'detect_injection'
from guardrails import Guard
from guardrails.hub import DetectPromptInjection
impo…
-
**Description**
Currently, `guard.validate` only works for guards that are configured to run on the output. Having validate work on guards configured for prompt or inputs would be super helpful to qu…
-
### Feature request
This is a Bert based model however when trying to run, the message says model not supported. https://huggingface.co/meta-llama/Prompt-Guard-86M/tree/main
### Motivation
LLM-pow…