-
- Environment
- TensorRT 9.0.0.1
- CUDA version: 12.0
- Container used: registry.cn-hangzhou.aliyuncs.com/trt-hackathon/trt-hackathon:final_v1
- NVIDIA driver version
- Re…
-
Is there a way to get the offending sentence along with the full results generated by the toxicity scanner in the input scanner?
That would be really helpful to have, as it would get an …
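Pending built-in support, one way to approximate this is a small wrapper that splits the input into sentences and scores each one, so the flagged sentence is returned alongside the result. This is a minimal sketch: `scan_toxicity` below is a hypothetical stand-in for the real toxicity scanner (in practice you would call the scanner's own `scan` method there), and the sentence splitter is a simple regex rather than a proper tokenizer.

```python
import re


def scan_toxicity(sentence: str) -> float:
    """Hypothetical stand-in for a real toxicity model; returns a score in [0, 1].
    Replace this with a call to the actual input scanner."""
    return 1.0 if "hate" in sentence.lower() else 0.0


def scan_per_sentence(prompt: str, threshold: float = 0.5):
    """Score each sentence individually so the offending sentence
    can be reported together with the overall result."""
    # Naive sentence split on terminal punctuation followed by whitespace.
    sentences = re.split(r"(?<=[.!?])\s+", prompt.strip())
    results = []
    for s in sentences:
        if not s:
            continue
        score = scan_toxicity(s)
        results.append({"sentence": s, "score": score, "flagged": score >= threshold})
    return results


results = scan_per_sentence("The weather is nice. I hate you!")
```

With this shape, the caller can filter `results` for `flagged` entries to see exactly which sentence triggered the scanner, instead of only receiving an aggregate score for the whole prompt.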
-
I have implemented NeMo Guardrails in a RetrievalQA chain with the Llama 2 model, but it is giving an error.
My code:
```
MODEL_TYPE=GPT4All
MODEL_PATH=r'C:\Users\komal\Desktop\mages\chatbot\llama-2-7b-ch…
-
Hi,
I ran the following sample script from the README.md file after installing this evaluator with the command `guardrails hub install hub://arize-ai/relevancy_evaluator`.
```
import…
-
### System Info
- CPU: i9 9900k
- GPU: RTX 4090
- TensorRT-LLM Version: 0.9.0.dev2024022000
- CUDA Version: 12.3
- Driver Version: 545.29.06
- OS: Arch Linux, kernel version 6.7.5
### …
-
```
=> ERROR [frontend build 7/7] RUN BACKEND_API_URL=http://localhost:8000 REACT_APP_SOURCES=local,youtube,wiki,s3 LLM 8.0s
------
> [frontend build 7/7] RUN BACKEND_API_URL=http://local…
-
![image](https://github.com/user-attachments/assets/e4b763ef-97d4-4042-8d4e-2f010da8319f)
I attempted to install the requested package; however, it still hits the same error message when I re-run the follo…
-
Hello, I get different safety predictions when using `Llama Guard` through Hugging Face's `Transformers` and through `vLLM`.
For `Transformers`, I copy-pasted the code from the `Llama Guard` model card; I am assuming …
-
**Describe the bug**
Four days ago **nltk** made a breaking change in the 3.8.2 release. The issue is described [here](https://github.com/nltk/nltk/issues/3293). This causes any applications which depend…
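A possible interim workaround, until downstream packages adapt, is to pin nltk below the breaking release in your environment (3.8.1 was the last release before the change referenced in nltk/nltk#3293):

```shell
# Pin nltk to the release before the 3.8.2 breaking change.
pip install "nltk==3.8.1"
```

The same constraint can be placed in `requirements.txt` so fresh installs do not pull 3.8.2.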
-
The llm-guard pip package has not been updated with the latest transformers code change; the pip release of llm-guard still pins transformers 4.41.2.
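Until a new release is published to PyPI, one possible workaround is installing llm-guard directly from source; this sketch assumes the package lives in the protectai/llm-guard GitHub repository:

```shell
# Install the current main branch instead of the pinned PyPI release.
pip install --upgrade git+https://github.com/protectai/llm-guard.git
```

Note that installing from the main branch pulls unreleased code, so it is best confined to a test environment until the next tagged release.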