ScottLogic / prompt-injection

Application which investigates defensive measures against prompt injection attacks on an LLM, with a focus on the exposure of external tools.
MIT License

Dummy endpoint for load testing #830

Closed chriswilty closed 5 months ago

chriswilty commented 5 months ago

For load testing with k6, we risk spending a lot of money on OpenAI API access while we work out how best to configure and run the tests.

It would be worth adding a (temporary) test endpoint to the backend that we can hit instead of posting real chat requests. Ideally this endpoint should do some computation, perhaps allocating some memory, before returning a response, so that it simulates a request that does not simply respond immediately (unlike the healthcheck endpoint).

I suggest we use POST /test/load as our endpoint, to make its purpose explicit.
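A minimal sketch of what such an endpoint might look like, using only Node's built-in `http` and `crypto` modules (the real backend would wire this into its own router; the iteration count and hash-chain workload here are arbitrary assumptions, chosen just to burn some CPU and hold some memory before responding):

```typescript
import { createHash } from "crypto";
import { createServer } from "http";

// Hypothetical CPU-bound work to stand in for a chat completion:
// repeatedly hash a string, keeping each digest in memory, so the
// request takes measurable time rather than returning immediately.
export function simulateWork(iterations: number): string {
  let digest = "load-test-seed";
  const chunks: string[] = [];
  for (let i = 0; i < iterations; i++) {
    digest = createHash("sha256").update(digest).digest("hex");
    chunks.push(digest); // retain results so the request uses some memory
  }
  return chunks[chunks.length - 1];
}

const server = createServer((req, res) => {
  if (req.method === "POST" && req.url === "/test/load") {
    const reply = simulateWork(50_000); // tune to approximate real latency
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ reply }));
  } else {
    res.writeHead(404);
    res.end();
  }
});

// server.listen(3001);
```

The workload is deterministic, which also makes it easy to sanity-check, and the iteration count gives us a single knob to tune until the dummy endpoint's latency roughly matches a real chat request.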

Note that there is likely to be value in hitting OpenAI with real chat requests once or twice, after we have configured and tweaked our tests using the dummy endpoint. In fact, there could also be value in leaving this test endpoint in place.

FYI @kclark-scottlogic