ScottLogic / prompt-injection

Application which investigates defensive measures against prompt injection attacks on an LLM, with a focus on the exposure of external tools.
MIT License
16 stars · 10 forks

Manual Test Guidelines Wiki or README #848

Closed: kclark-scottlogic closed this issue 5 months ago

kclark-scottlogic commented 7 months ago

Add documentation somewhere (e.g. the wiki or README) covering the manual testing steps/guidelines needed for regression testing.