NVIDIA / NeMo-Guardrails

NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.

[DOCS] Inconsistency in supported LLMs engine names #520

Open · Chotom opened this issue 1 month ago

Chotom commented 1 month ago

In the Supported LLM Models section, there is information about the supported engines:

You can use any LLM provider that is supported by LangChain, e.g., ai21, aleph_alpha, anthropic, anyscale, azure, cohere, huggingface_endpoint, huggingface_hub, openai, self_hosted, self_hosted_hugging_face. Check out the LangChain official documentation for the full list.
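For context, these engine names are the values that go in the engine field of a model in config.yml. A minimal sketch of loading such a config (the model name is illustrative):

from nemoguardrails import RailsConfig

# Minimal config using one of the listed engine names;
# the model name is just an example.
config = RailsConfig.from_content(yaml_content="""
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo
""")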

The problem is that the documentation doesn't point to any particular page in the LangChain docs. I would guess that it references this page: LangChain > Components > LLMs. But there is also a LangChain page dedicated to specific providers: LangChain > Providers.

And while some of the listed names match the LangChain LLMs components (e.g., ai21, aleph_alpha, anthropic), others, like azure or huggingface_hub, in my opinion create an inconsistency in the naming convention: there is no provider in that list called simply azure.


So the questions are:

  1. Which part of the LangChain documentation does NeMo Guardrails refer to?
  2. Where can I find the actual list of supported LLMs/engines in NeMo?
drazvan commented 1 month ago

Hi @Chotom!

Thanks for pointing out these inconsistencies/gaps in the documentation. We'll try to fix them for the next release. To answer your questions:

  1. The documentation is meant to refer to LangChain > Components > LLMs. The Providers page was added to LangChain more recently, when support for certain LLM providers moved to dedicated repositories.
  2. You can use the snippet below to list the supported providers:
from pprint import pprint

from nemoguardrails.llm.providers import providers

# List the names of the supported LLM providers
pprint(providers.get_llm_provider_names())

# Detailed mapping of provider names to their classes
# (note that _providers is a private attribute)
pprint(providers._providers)
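For reference, the first call prints a plain list of provider names. An abbreviated sketch of the output (the exact list depends on the installed versions of NeMo Guardrails and LangChain):

['ai21',
 'aleph_alpha',
 'anthropic',
 'anyscale',
 ...]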

Last but not least, NeMo Guardrails has a mechanism for easily registering any LLM that implements the LangChain interface: https://docs.nvidia.com/nemo/guardrails/user_guides/configuration-guide.html?#custom-llm-models. This, in combination with get_llm_instance_wrapper, makes it easy to register any LLM instance that you can instantiate in regular Python code. E.g.:

from nemoguardrails import LLMRails
from nemoguardrails.llm.helpers import get_llm_instance_wrapper
from nemoguardrails.llm.providers import register_llm_provider

def init(rails: LLMRails):
    # Instantiate any LangChain-compatible LLM here
    llm = ...
    # Wrap the instance so it can be registered as a provider
    custom_llm_provider = get_llm_instance_wrapper(
        llm=llm, llm_type="<some custom name>"
    )
    register_llm_provider("<some custom name>", custom_llm_provider)
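Once registered, "<some custom name>" can be used as the engine value for a model in config.yml, just like the built-in providers.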

Let me know if you have any additional questions.

Chotom commented 1 month ago

Thanks! This code in particular clears up any doubts in my case:

from pprint import pprint

from nemoguardrails.llm.providers import providers

# Detailed mapping of provider names to the underlying classes
pprint(providers._providers)

It would be great to have it somewhere in the Supported LLM Models section of the NeMo docs.