ScottLogic / prompt-injection

Application which investigates defensive measures against prompt injection attacks on an LLM, with a focus on the exposure of external tools.
MIT License

Use all vectorstore documents in QA LLM context #903

Closed chriswilty closed 2 months ago

chriswilty commented 2 months ago

Description

Found and squished a bug in our Q&A process: we were not passing all documents in the chat context to the Q&A LLM, because the default number of documents to retrieve is 4 and we were not overriding it. This caused an odd symptom: the bot seemed unable to answer a question about employees or salaries in full. Instead, you had to piece together a complete answer from several consecutive questions, and even then you would not know you had the complete answer unless you knew what you were looking for.

The fix is simple, though there are related test changes; an incorrect comment and a stray lint-ignore statement were also tackled here.

Resolves #902
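To illustrate the bug, here is a minimal sketch (not the project's actual code) of how a default top-k of 4 silently truncates the documents handed to the QA step; the `similaritySearch` stand-in and its default mirror the behaviour described above:

```typescript
// Hypothetical stand-in for a vector store's top-k retrieval.
type Document = { pageContent: string };

// Default k of 4 mirrors the retriever default that caused the bug.
function similaritySearch(docs: Document[], k: number = 4): Document[] {
  return docs.slice(0, k);
}

// Suppose the chat context holds 10 salary-related documents.
const allDocs: Document[] = Array.from({ length: 10 }, (_, i) => ({
  pageContent: `salary record ${i + 1}`,
}));

// Before the fix: the default k silently drops 6 of the 10 documents,
// so the QA LLM can only ever answer from a partial context.
const truncated = similaritySearch(allDocs);

// After the fix: retrieve the full set so the QA LLM sees everything.
const complete = similaritySearch(allDocs, allDocs.length);

console.log(truncated.length, complete.length);
```

With only 4 of 10 documents retrieved, no single answer could cover all employees, which matches the piecemeal-answer symptom reported in #902.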

Screenshots

All salary info found in one call:


Checklist

Have you done the following?