-
I am using only the anonymizer scanner with the LLM Guard API, and I noticed significant memory (RAM) increase when processing larger inputs, with the memory usage never decreasing afterward. For exam…
-
I'm running this notebook - https://github.com/protectai/llm-guard/blob/main/docs/tutorials/notebooks/langchain_rag.ipynb
The last 11th chunk from the resume has a prompt injection shown below, bu…
-
### System Info
colab T4
### Who can help?
@
### Information
- [X] The official example scripts
- [ ] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` fold…
-
### Description
When running the following code calling generate method using different models (e.g., Mistral-7B-Instruct-v0.2 and meta-llama-3-8B):
```
from transformers import AutoModelForCausal…
-
* Research a few tools + methods for implementing guard rails for LLM applications.
* Add a recipe that provides developers with an example of implementing guard rails.
-
### Bug Description
Hi,
I noticed that when the LLM responds to the ReAct Agent by using the strict format for the response including the backticks
```
Though: bla bla
Action: blala
```
The…
-
## タイトル: ShieldGemma:Gemmaに基づく生成AIコンテンツモデレーション
## リンク: https://arxiv.org/abs/2407.21772
## 概要:
Gemma2を基盤としたLLMベースの安全コンテンツモデレーションモデル群「ShieldGemma」を紹介します。これらのモデルは、ユーザー入力とLLM生成出力の両方において、主要な有害タイプ(性的描写…
-
When export dynamic-shape llama2, in the attached [piece of the partitioned model](https://github.com/user-attachments/files/16568296/partitioned-model_piece-2.txt), there are symbol manipulations suc…
-
hi i am not able to connect my llamaguard api with nemoguardrail . below are the configs i have used . please help !!!
config :
```yaml
models:
- type: main
engine: openai
model: gp…
-