ScottLogic / prompt-injection

Application which investigates defensive measures against prompt injection attacks on an LLM, with a focus on the exposure of external tools.
MIT License
15 stars 10 forks source link

Defence - Sandwich defence #16

Open gsproston-scottlogic opened 1 year ago

gsproston-scottlogic commented 1 year ago

Insert the user input in between two prompts.

https://learnprompting.org/docs/prompt_hacking/defensive_measures/sandwich_defense

Each defence should include the following:

gsproston-scottlogic commented 10 months ago

Too similar to the XML tagging defence. Just remove this?

gsproston-scottlogic commented 8 months ago

Reopening now that the prompt enclosure defence is being added. #703 Blocked until that's merged in. merged now.