Open tomdyson opened 1 year ago
LLM v1 strategy implemented in https://github.com/tomdyson/prettyprompt/commit/b1d8700bfd9779149f82ec4df3ab6cd9875d437d
See 'Defences and Recommendations' section from https://research.nccgroup.com/2022/12/05/exploring-prompt-injection-attacks/
Possible strategies: