NVIDIA / NeMo-Guardrails

NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.

Setting API keys for guardrails server #519


serhatgktp commented 3 months ago

Hello,

I understand that the guardrails server makes requests to LLM providers using API keys read from the environment variables of the machine hosting the server.
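
For reference, a minimal sketch of that flow, assuming the OpenAI engine and a hypothetical `./config` directory (`OPENAI_API_KEY` is the variable LangChain's OpenAI integration reads):

```python
import os

from nemoguardrails import LLMRails, RailsConfig

# LangChain's OpenAI integration falls back to this environment variable
# when no key is passed explicitly; other providers use their own names.
os.environ["OPENAI_API_KEY"] = "sk-..."

# Hypothetical config directory; the server loads configs the same way.
config = RailsConfig.from_path("./config")
rails = LLMRails(config)
```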

Is it currently possible to specify which API key to use per config, for example by specifying the token for each LLM provider in a config file? Further, is it possible to provide API keys in any way other than through environment variables?

Thanks

drazvan commented 3 months ago

Hi @serhatgktp!

The use of environment variables comes from the LangChain implementations of the LLM providers themselves. Typically, they also support providing the API key directly as a parameter (in which case the environment variable is ignored). Additional parameters can be specified through the `parameters` field when defining the models. For example, for OpenAI you can set `openai_api_key`:

```yaml
models:
- type: main
  engine: openai
  model: gpt-4
  parameters:
    openai_api_key: "..."
```
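
As I understand it, everything under `parameters` is forwarded as keyword arguments to the underlying LangChain class, so the key becomes a constructor argument instead of being read from the environment. Roughly, in Python (a sketch, assuming the `langchain_openai` wrapper is the one in use):

```python
# Rough equivalent of the YAML above: the `parameters` dict is passed
# as keyword arguments when the LangChain LLM is instantiated.
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4", openai_api_key="...")  # env var not consulted
```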

The same approach should work across the board. What we don't yet support is referencing a custom environment variable in `config.yml`, which could be useful, e.g.:

```yaml
models:
- type: main
  engine: openai
  model: gpt-4
  parameters:
    openai_api_key: $MAIN_OPENAI_KEY
```

We should add this to the roadmap.
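
In the meantime, a possible workaround is to expand such references yourself before the config reaches the LLM: load the config, resolve any `$VAR`-style values in `parameters` from the environment, and pass the result to `LLMRails`. A minimal sketch, using the hypothetical `MAIN_OPENAI_KEY` convention from the example above:

```python
import os

from nemoguardrails import LLMRails, RailsConfig

config = RailsConfig.from_path("./config")

# config.yml itself can't reference custom env variables yet, so expand
# any $VAR-style placeholders in the model parameters manually.
for model in config.models:
    for key, value in (model.parameters or {}).items():
        if isinstance(value, str) and value.startswith("$"):
            model.parameters[key] = os.environ[value[1:]]

rails = LLMRails(config)
```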