jmartin-tech opened 7 months ago
Another recent finding related to multi-modal prompts is the need to define relationships between the parts of a prompt. The case identified is that model request formats may have different expectations for how images are referenced in text. The current `visual_jailbreak` prompts include a placeholder in the text segment of the prompt that some models may need to remove, or replace with an API-specific linking/embedding.
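For illustration only (the placeholder token and payload shapes below are assumptions, not any specific model's API), the same placeholder-bearing prompt text might need two different treatments:

```python
# Hypothetical example: the same prompt text handled two ways.
prompt_text = "Describe the steps shown in [IMAGE]."

# API style A: placeholder removed; image attached as a separate part
payload_a = {
    "text": prompt_text.replace("[IMAGE]", "").strip(),
    "image_path": "figstep.png",
}

# API style B: placeholder replaced with an API-specific embed reference
payload_b = {
    "text": prompt_text.replace("[IMAGE]", "<image 0>"),
    "images": ["figstep.png"],
}
```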
The current generator interface expects to receive prompts as `str`; see: https://github.com/leondz/garak/blob/4127ae5092ad3acaba680a32011018fc564cc92a/garak/generators/base.py#L66

This initial simple submission process has worked to date; however, #587 shows an example of a query prompt that needs a more complex structure. In this case the multi-modal model accepts both text and image data to generate a response.

I propose adding an abstraction layer by implementing a `Prompt` base interface class that can be extended to model these more complex prompts for processing by each generator, possibly also abstracting the response as well:
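As a minimal sketch of that idea (class and field names here are illustrative assumptions, not a proposed final API):

```python
from dataclasses import dataclass


@dataclass
class Prompt:
    """Base interface for prompts submitted to a generator."""

    text: str = ""


@dataclass
class Response:
    """Optional companion abstraction for generator output."""

    text: str = ""
    raw: object = None  # generator-specific payload, if any
```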
Prompts can then be further segmented into things like `TextPrompt`, `MultiStepTextPrompt`, `VisualPrompt`, `VisualTextPrompt`, and other such constructs that build on the base class's functions to allow use with different and even mixed prompt modalities, for models that can accept various input patterns.

Rough example:
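(The sketch below extends the `Prompt` base class from above; the field names and the multiple-inheritance layout are illustrative assumptions.)

```python
from dataclasses import dataclass, field


@dataclass
class TextPrompt(Prompt):
    """A plain single-turn text prompt."""


@dataclass
class MultiStepTextPrompt(Prompt):
    """A sequence of text turns submitted as one exchange."""

    steps: list[str] = field(default_factory=list)


@dataclass
class VisualPrompt(Prompt):
    """An image-only prompt; `image` is a filesystem path here."""

    image: str = ""


@dataclass
class VisualTextPrompt(VisualPrompt, TextPrompt):
    """Text plus image, e.g. for visual_jailbreak probes.

    The text may carry a placeholder that each generator removes
    or replaces with its API-specific image reference.
    """

    placeholder: str = "[IMAGE]"
```

A generator could then dispatch on the prompt type, e.g. via `isinstance(prompt, VisualTextPrompt)`, and build its API-specific payload from the parts instead of parsing a bare `str`.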