-
Hi team,
Is it possible to configure Nemo-Guardrails to avoid sending the actual user input to the LLM? I understand that the actual user input won't be sent if the input rails are triggered. However…
-
## Description
When using ChatBedrock with streaming enabled and Bedrock guardrails applied, the streaming functionality breaks. This appears to be due to differences in the response structure when…
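The report is cut off, but the failure mode it describes — the streaming response having a different event shape when a Bedrock guardrail intervenes — can be sketched with a small parsing helper. The field names below (`contentBlockDelta`, `messageStop`, `stopReason`) are assumptions based on the Bedrock Converse streaming API, not taken from the issue:

```python
# Illustrative sketch: tolerate both the normal streaming event shape and
# the shape produced when a Bedrock guardrail intervenes. Field names are
# assumptions, not the exact structures from the issue.

def extract_chunk_text(event: dict) -> str:
    """Return the text carried by one streaming event, or '' if none."""
    # Normal token delta: {"contentBlockDelta": {"delta": {"text": "..."}}}
    delta = event.get("contentBlockDelta", {}).get("delta", {})
    if "text" in delta:
        return delta["text"]
    # Other events (messageStop, metadata, ...) carry no text delta.
    return ""


def guardrail_intervened(event: dict) -> bool:
    """Detect the assumed intervention marker on the stream-ending event."""
    return event.get("messageStop", {}).get("stopReason") == "guardrail_intervened"
```

A streaming loop would check `guardrail_intervened` on every event instead of assuming each one carries a text delta, which is one way the described breakage could be avoided.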
-
**Describe the bug**
I am trying to use the custom LLM wrapper to add guardrails with an NVIDIA TensorRT-LLM (TRT-LLM) model. I do not wish to use OpenAI/Azure OpenAI for the guardrails call.
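One way to avoid OpenAI/Azure OpenAI for the guardrails call is to register a custom LangChain LLM wrapper with NeMo Guardrails' `register_llm_provider` and point the `main` model at it in `config.yml`. A minimal sketch, where the engine name `trt_llm` and the model name `ensemble` are hypothetical placeholders:

```yaml
models:
  - type: main
    engine: trt_llm   # hypothetical name passed to register_llm_provider
    model: ensemble   # placeholder model name for the TRT-LLM server
```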
…
-
Publicly accessible LLMs typically require guardrails to prevent misuse and abuse. Guardrails try to keep generated output safe and responsible, in accordance with SoilWise standards.
-
Hello, below is my config.yml file:

```yaml
rails:
  input:
    flows:
      - self check input
      - mask sensitive data on input
      - jailbreak detection heuristics
  config:
    sensitive_data_detecti…
```
-
I am currently exploring ways to integrate NeMo Guardrails into my existing RAG chatbot system.
**Guardrail Script**
```
config = RailsConfig.from_content(
    yaml_content=YAML_C…
```
-
In `llms/bedrock.py`, line 682, the function `_is_guardrails_intervention` uses `body.get(GUARDRAILS_BODY_KEY) == "GUARDRAIL_INTERVENED"` to detect intervention, where `GUARDRAILS_BODY_KEY` is defined as…
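The entry is cut off before showing how `GUARDRAILS_BODY_KEY` is defined, but the shape of the check can be sketched as follows. Both the key name and the `"INTERVENED"` value below are assumptions about the Bedrock response body, not taken from the issue:

```python
# Hedged sketch of an intervention check: Bedrock marks intervention in the
# response body, but the key and the compared value must agree with what
# Bedrock actually returns. Both names here are illustrative assumptions.
GUARDRAILS_BODY_KEY = "amazon-bedrock-guardrailAction"  # assumed key


def is_guardrails_intervention(body: dict) -> bool:
    # Compare against a value Bedrock is assumed to emit for this key
    # (e.g. "INTERVENED"); a constant the service never emits would make
    # this check always False.
    return body.get(GUARDRAILS_BODY_KEY) == "INTERVENED"
```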
-
**Describe the bug**
Update the `regex` dependency version for compatibility with other libraries
**Expected behavior**
A clear and concise description of what you expected to happen.
**Librar…
-
Hello!
After the input rail, when the library runs `generate_user_intent`, is there a way to stop it from proceeding to `generate_next_step` and `generate_bot_message`? As we have a sepra…
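One pattern that may help is a Colang 1.0 flow that matches the generated intent and ends with `stop`, so the runtime does not continue to next-step and bot-message generation. A sketch, with a hypothetical intent name and utterance:

```colang
define user ask domain question
  "example utterance"  # hypothetical canonical form

define flow
  user ask domain question
  # hand the turn to the external pipeline; `stop` is intended to prevent
  # the runtime from continuing to generate_next_step / generate_bot_message
  stop
```

Whether `stop` behaves this way outside an input-rail flow is an assumption worth verifying against the NeMo Guardrails runtime.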