geekyme-fsmk closed this issue 1 year ago
Hi @geekyme-fsmk,
The `llm-rubric` assert type uses GPT-4 by default, which is why it's asking for an OpenAI API key. You can override the LLM grader using one of these methods: https://promptfoo.dev/docs/configuration/expected-outputs#overriding-the-llm-grader
Note that the grader uses a chat-completion style prompt by default. If you need to override or modify the prompt, you can set the `rubricPrompt` property on the test or assertion: https://promptfoo.dev/docs/configuration/expected-outputs#assertion-properties
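For reference, a minimal sketch of what such an override might look like in `promptfooconfig.yaml` (the exact prompt wording and variable names here are illustrative; check the linked docs for the precise template variables your version supports):

```yaml
defaultTest:
  options:
    # Use a different grader model instead of the GPT-4 default
    provider: openai:gpt-3.5-turbo
    # Custom chat-style rubric prompt (hypothetical wording)
    rubricPrompt: |
      [
        {"role": "system", "content": "Grade the output against this rubric: {{rubric}}. Reply with JSON: {\"pass\": true|false, \"reason\": \"...\"}"},
        {"role": "user", "content": "Output: {{output}}"}
      ]
```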
Hope this helps solve your issue!
hi @typpo, understood - is it possible then to use my Azure OpenAI key instead? I already have a subscription with GPT-4 enabled.
Alright I ended up doing this:
```yaml
defaultTest:
  options:
    provider: azureopenai:chat:gpt-35-turbo
```
Then I set my env vars accordingly:

```sh
AZURE_OPENAI_API_HOST=<azure_base> AZURE_OPENAI_API_KEY=<api_key> promptfoo eval -c promptfooconfig.yaml
```
I'm getting this error when I run the evaluation:
https://www.promptfoo.dev/docs/configuration/testing-llm-chains#using-a-script-provider
Following the guide above, I set up a custom script provider:
Here is what `ai_client_provider.py` looks like: `AIClient` internally loads its own Azure OpenAI keys and secrets. I'm not sure why I'm still getting these errors when I've already specified a custom script provider; my config does not appear to be going down the script-completion path.
```
promptfoo --version
=> 0.20.1
```