-
**Describe the bug**
LangchainLLMWrapper has a .generate_text() method, which in turn calls .generate_prompt() on the underlying LLM. The LangchainLLMWrapper passes a 'temperature' parameter in .gener…
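A minimal sketch of the failure mode the report describes: a wrapper that unconditionally forwards a `temperature` kwarg to the underlying LLM's `generate_prompt()`, which breaks when that method does not accept the parameter. The class names and the `generate_text_safe` fix below are illustrative stand-ins, not the actual LangchainLLMWrapper implementation.

```python
import inspect

class UnderlyingLLM:
    # Hypothetical LLM whose generate_prompt() takes no temperature kwarg.
    def generate_prompt(self, prompt):
        return f"echo: {prompt}"

class WrapperForwardingTemperature:
    def __init__(self, llm, temperature=0.7):
        self.llm = llm
        self.temperature = temperature

    def generate_text(self, prompt):
        # Unconditional forwarding reproduces the reported kind of TypeError:
        return self.llm.generate_prompt(prompt, temperature=self.temperature)

    def generate_text_safe(self, prompt):
        # One possible fix: only forward kwargs the callee actually accepts.
        params = inspect.signature(self.llm.generate_prompt).parameters
        kwargs = {"temperature": self.temperature} if "temperature" in params else {}
        return self.llm.generate_prompt(prompt, **kwargs)

wrapper = WrapperForwardingTemperature(UnderlyingLLM())
try:
    wrapper.generate_text("hi")
except TypeError as exc:
    print("reproduced:", exc)
print(wrapper.generate_text_safe("hi"))
```

Inspecting the callee's signature is only one way out; the cleaner fix is for the wrapper to forward sampling parameters explicitly per supported backend.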
-
The [Supported LLM Models](https://docs.nvidia.com/nemo/guardrails/user_guides/configuration-guide.html#supported-llm-models) section lists the supported engines:
> You can use any LLM provi…
-
> NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational applications.
URL: https://github.com/NVIDIA/NeMo-Guardrails
-
I am implementing nemo-guardrails for `conversationchain` from `langchain` following this [guide](https://github.com/NVIDIA/NeMo-Guardrails/blob/develop/docs/user_guides/langchain/chain-with-guardrail…
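The pattern in the linked guide (wrapping an existing chain so rails run around it) can be illustrated in plain Python. Everything below (`SimpleChain`, `GuardedChain`, the keyword check) is an illustrative stand-in, not the NeMo Guardrails or LangChain API; consult the guide for the real `RailsConfig`-based setup.

```python
class SimpleChain:
    # Stand-in for a langchain ConversationChain: takes input, returns a reply.
    def invoke(self, user_input: str) -> str:
        return f"bot reply to: {user_input}"

class GuardedChain:
    # Illustrative "input rail": check the user message before the chain runs.
    def __init__(self, chain, blocked_keywords):
        self.chain = chain
        self.blocked = [k.lower() for k in blocked_keywords]

    def invoke(self, user_input: str) -> str:
        if any(k in user_input.lower() for k in self.blocked):
            return "I can't help with that."
        return self.chain.invoke(user_input)

guarded = GuardedChain(SimpleChain(), blocked_keywords=["password"])
print(guarded.invoke("hello"))
print(guarded.invoke("what's the admin password?"))
```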
-
We may want to elaborate on the default checklist items, create a new checklist, or add an example that focuses on ethical considerations that are unique to LLMs or that are particularly acute with LL…
-
This issue somewhat overlaps with #27. However, I've chosen to create this issue because it was mentioned in the thread of #27 that support for other LLM models would be added by the end of May. Suppo…
-
API: https://github.com/NVIDIA/NeMo-Guardrails?tab=readme-ov-file#guardrails-server
Related issue: https://github.com/openhackathons-org/End-to-End-LLM/issues/16
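For reference, a sketch of the request body one would POST to a locally running guardrails server. The config name `my_config` and the `localhost:8000` address are assumptions; check the linked server README for the actual endpoint and the configs available in your deployment.

```python
import json

# Body for POST /v1/chat/completions on the guardrails server
# (config_id "my_config" is a placeholder for a config you have defined).
payload = {
    "config_id": "my_config",
    "messages": [{"role": "user", "content": "Hello!"}],
}
body = json.dumps(payload)
print(body)

# To actually send it (not executed here):
#   curl -X POST http://localhost:8000/v1/chat/completions \
#        -H "Content-Type: application/json" -d "$BODY"
```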
-
On version 0.6.1, NeMo Guardrails always resolves the bot intent to a general response, which always produces the message "I'm not sure what to say."
Is there any way to avoid that?
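The generic fallback typically fires when no defined flow matches the user's message, so one thing to check is whether the config defines explicit user intents and flows. A hedged Colang sketch (the intent names and message texts are illustrative, not from the reporter's config):

```colang
# Define example utterances so user input can match a known intent
define user express greeting
  "hello"
  "hi"

define bot express greeting
  "Hello there!"

# With a matching flow, the bot no longer falls back to a general response
define flow greeting
  user express greeting
  bot express greeting
```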
-
I tried this code:
```python
!rm -r config
!pip install -q -U google-generativeai
import pathlib
import textwrap
import google.generativeai as genai
from IPython.display import display
from IPyt…
```
-
Many docs in this repo instruct passing HTTP/S proxies on the Docker build command line:
```
$ git grep -e "--build-arg.*https*_proxy=" | wc -l
58
```
IMHO it would be better to just specify them on…
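One standard alternative to per-command `--build-arg` flags is Docker's client-side proxy configuration in `~/.docker/config.json`, which the client injects into builds and containers automatically (the host and port below are placeholders):

```json
{
  "proxies": {
    "default": {
      "httpProxy": "http://proxy.example.com:3128",
      "httpsProxy": "http://proxy.example.com:3128",
      "noProxy": "localhost,127.0.0.1"
    }
  }
}
```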