ScottLogic / prompt-injection

Application which investigates defensive measures against prompt injection attacks on an LLM, with a focus on the exposure of external tools.
MIT License
15 stars 10 forks source link

14 defence instruction defence #760

Closed heatherlogan-scottlogic closed 8 months ago

heatherlogan-scottlogic commented 8 months ago

Description

Instruction defence from https://learnprompting.org/docs/prompt_hacking/defensive_measures/instruction

Screenshots

image

Notes

Concerns

Checklist

Have you done the following?

heatherlogan-scottlogic commented 8 months ago

Code all looks good, and the defence seems to work correctly on testing, apart from a seperate bug (which is in dev as well) which I've made a ticket for: #762

that bug will be fixed with https://github.com/ScottLogic/prompt-injection/pull/753