If no API key is available in the keys.json file or the LLM_MISTRAL_KEY environment variable, get_model_ids() fails silently when called in the register_models hook. This results in what appears to be a successful installation of the plugin, but no Mistral models are available.
❯ llm -m "mistral-large" hey
Error: 'mistral-large' is not a known model
I think this behavior is reasonable, but a bit hard to understand as an end user. I propose either documenting that this is what happens if the API key is missing or somehow using the models from the DEFAULT_ALIASES dict rather than relying on the API to generate the model list.
For the former, documenting that LLM_MISTRAL_KEY is the environment variable name used by the plugin would help. I was using the Python API, so I skipped the llm key set mistral step and naively assumed setting the API key as MISTRAL_API_KEY would work.
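To illustrate the naming confusion, here is a minimal sketch of what I was doing, assuming the plugin only ever reads the LLM_MISTRAL_KEY variable (the key value is a placeholder):

```python
import os

# What I naively set first -- the plugin ignores this name:
os.environ["MISTRAL_API_KEY"] = "your-key-here"  # placeholder, has no effect

# What the plugin actually reads:
os.environ["LLM_MISTRAL_KEY"] = "your-key-here"  # placeholder, picked up by the plugin
```

A one-line note in the README mapping the plugin to its LLM_MISTRAL_KEY variable would have avoided this.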
For the latter approach, the root issue would be more obvious to the end user, since the models would register without the API key and then fail on the inference API request:
❯ llm -m "mistral-large" hey
Error: Client error '401 Unauthorized' for url 'https://api.mistral.ai/v1/chat/completions'
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401
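The fallback I have in mind could look something like the following. This is a hedged sketch, not the plugin's actual code: fetch_model_ids_from_api() stands in for the real API listing call, and the DEFAULT_ALIASES contents are assumed, not copied from the plugin.

```python
# Hypothetical sketch: fall back to the statically known models from
# DEFAULT_ALIASES when the API listing fails (e.g. no key configured),
# so registration succeeds and the 401 surfaces at request time instead.

DEFAULT_ALIASES = {  # assumed shape: alias -> canonical model id
    "mistral-large": "mistral-large-latest",
    "mistral-small": "mistral-small-latest",
}

def fetch_model_ids_from_api():
    # Stand-in for the real API call; raises when no key is available.
    raise RuntimeError("No API key in keys.json or LLM_MISTRAL_KEY")

def get_model_ids():
    try:
        return fetch_model_ids_from_api()
    except Exception:
        # Register the statically known models rather than silently
        # returning nothing.
        return sorted(set(DEFAULT_ALIASES.values()))
```

With this shape, `llm -m "mistral-large" hey` would reach the chat completions endpoint and fail there with the 401 above, which points directly at the missing key.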