Closed — gilinachum closed this issue 6 months ago
Hi @gilinachum, I ran into the same issue today. Claude 3 on Bedrock is a multimodal model, and its prompt format differs from the text models. I tried to run the benchmark on Claude 3 but failed. Can anyone help?
Hi, this issue will be solved by multi-variable prompt templates in the future. As a workaround for now, you can embed the system prompt in the `content_template` if it is static for each inference call.
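For example, here is a sketch of that workaround, assuming fmeval's `$prompt` placeholder convention for `content_template` and the Bedrock Claude 3 Messages API request body (the system text and substitution logic below are illustrative, not fmeval internals):

```python
import json
from string import Template

# A content_template for the Bedrock Claude 3 Messages API with a static
# system prompt embedded directly in the JSON body. $prompt is the single
# placeholder that fmeval substitutes on each inference call.
content_template = (
    '{"anthropic_version": "bedrock-2023-05-31", '
    '"max_tokens": 500, '
    '"system": "You are a concise assistant.", '
    '"messages": [{"role": "user", "content": $prompt}]}'
)

# Roughly what happens per call: the JSON-escaped prompt replaces $prompt,
# yielding a valid request body with the system prompt already in place.
body = Template(content_template).substitute(prompt=json.dumps("What is 2+2?"))
payload = json.loads(body)
```

You would pass a string like this as `content_template` when constructing the `BedrockModelRunner`; since only `$prompt` varies, the system prompt stays fixed across all inference calls.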
I see it here. Thx.
A system prompt is required by the Claude Messages API and by GPT's chat API. fmeval's current `predict` API doesn't support passing it separately:
The accuracy of these models depends on being able to separate the system and the user prompt.
Current API:
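To illustrate the gap, here is a sketch: the base class mirrors fmeval's single-prompt `ModelRunner.predict` shape, while the extended signature with a separate `system` argument is hypothetical (not part of fmeval today), with a toy implementation for illustration only:

```python
from abc import ABC, abstractmethod
from typing import Optional, Tuple


class ModelRunner(ABC):
    """Sketch of the current interface: predict() takes one prompt string,
    so the system and user prompts cannot be passed separately."""

    @abstractmethod
    def predict(self, prompt: str) -> Tuple[Optional[str], Optional[float]]:
        ...


class SystemAwareRunner(ModelRunner):
    """Hypothetical extension: an optional, separate system prompt."""

    def predict(self, prompt: str, system: Optional[str] = None):
        # Toy behavior: echo the prompt, prefixed by the system prompt if set.
        prefix = f"[system: {system}] " if system else ""
        return prefix + f"echo: {prompt}", None


runner = SystemAwareRunner()
output, log_prob = runner.predict("Hello", system="Be brief")
```

Keeping the system prompt as a distinct argument, rather than concatenating it into the user prompt, would let runners map it onto each provider's native field (e.g. `system` in the Claude Messages API).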