strentom opened this issue 3 days ago
Hi @strentom, definitely a bug. Just clarifying, are you running into this error with a redteam config? Based on the logs, you are hitting a redteam codepath which is different from the example provided and different from the behavior when I run it locally.
Hi @typpo. I'm running `promptfoo eval -c script_above.yaml --verbose`. I have other files and configs in the folder (including redteam-related ones), but these should be ignored.
Got it, this is helpful, thanks. #1877 should fix the immediate issue. There is a separate issue of your redteam config being picked up when it shouldn't be.
Describe the bug

The promptRubric is ignored, or the input parameters are not inserted; I can't tell which, because the logs (even with `--verbose`) are insufficient. To reduce the likelihood of an error on my part, I searched existing issues and copied the `promptRubric` from a resolved issue (#823). Running eval on the same YAML (the only difference is the provider), it silently fails (it doesn't produce the expected result), and from the LLM output I hypothesize that the LLM didn't receive the full prompt. This can be reproduced even with simpler prompts, but I wanted to be sure the YAML is 100% correct.

To Reproduce

Take this YAML from #823 (only the provider is modified):

and run `promptfoo eval` on it.

Expected behavior

The promptRubric should rate the translation quality. Instead, it responds:
```json
{"pass":false,"reason":"No rubric was provided","score":0,"tokensUsed":{"total":201,"prompt":179,"completion":22,"cached":0}}
```
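For context, a minimal config of this general shape would look something like the sketch below. This is a hypothetical illustration, not the actual YAML from #823: the provider, prompt, variable names, and rubric text are all placeholders, and it assumes promptfoo's `llm-rubric` assertion type.

```yaml
# Hypothetical minimal sketch -- NOT the actual YAML from #823.
# Provider, prompt, variables, and rubric text are placeholders.
prompts:
  - "Translate the following text to French: {{text}}"

providers:
  - openai:gpt-4o-mini

tests:
  - vars:
      text: "Good morning, how are you?"
    assert:
      - type: llm-rubric
        value: "The output is an accurate, fluent French translation of the input."
```

With a config of this shape, the grader should receive the rubric text and score the translation, rather than reporting that no rubric was provided.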
Screenshots
System information: