ethz-spylab / satml-llm-ctf

Code used to run the platform for the LLM CTF colocated with SaTML 2024
https://ctf.spylab.ai
MIT License

LLM CTF

This is the code used to run the 2024 SaTML LLM CTF. The code was developed from scratch by:

The app is a FastAPI web server backed by a MongoDB database and a Redis cache. The web server is served by Uvicorn, and everything runs in Docker via Docker Compose.
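The stack described above can be sketched as a minimal Compose file. This is an illustration of the architecture only; the service names, image tags, and ports are assumptions, not the contents of the repository's actual compose files:

```yaml
services:
  web:
    build: .                # FastAPI app served by Uvicorn
    ports:
      - "8000:8000"
    depends_on:
      - mongo
      - redis
  mongo:
    image: mongo:7
    volumes:
      - mongo-data:/data/db # persist the database across restarts
  redis:
    image: redis:7

volumes:
  mongo-data:
```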

Note that the platform was developed while the competition was running and the exact specifics were still being worked out, so not all design decisions were optimal.

We ran the application on a single Google Cloud VM with 64 GB of RAM and 32 vCPUs. This was enough for most of the competition, but the platform slowed down noticeably during the busiest phases.

Some potential improvements that could be done to the platform (PRs welcome!) are:

Setting up the environment

  1. Create a .env file with the same content as .env.example, and change the values as needed.
  2. Create a .env.prod file with the same content as .env.example, and change the values as needed.
  3. Create a .secrets folder with the same content as secrets.example, and change the values as instructed in each file.
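For illustration, a `.env` file of the kind step 1 describes might look like the following. Every variable name and value here is a hypothetical placeholder; consult `.env.example` in the repository for the real keys:

```shell
# Hypothetical placeholders -- the actual variable names live in .env.example
MONGO_URI=mongodb://mongo:27017/ctf
REDIS_URL=redis://redis:6379/0
SECRET_KEY=change-me
DEBUG=true
```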

How to (re)start the application

docker compose --env-file .env.prod -f compose.prod.yml up --build -d

or

docker compose --env-file .env.prod -f compose.prod.yml up --build -d web

to start only the web service container. If the container(s) are already running, they will be rebuilt and restarted.

Development

docker compose up --build -d web

Production

docker compose --env-file .env.prod -f compose.prod.yml up --build -d web

Stopping

docker compose down

Append web to start only the web app; omit it to start all services.

Checking the logs

docker compose logs -f

The -f flag follows the log output, like tail -f.
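A few common variations may be useful; the flags below are standard `docker compose logs` options, and `web` is this project's web service:

```shell
# Follow only the web service's logs
docker compose logs -f web

# Show the last 100 lines without following
docker compose logs --tail 100

# Prefix each line with a timestamp
docker compose logs -f -t web
```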

Linting and code style

Lint with

ruff check --fix .

Format with

ruff format .

Check types with

mypy .
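Both tools are typically configured in `pyproject.toml`. A minimal sketch follows; the selected rule sets, line length, and mypy options are illustrative assumptions, not this repository's actual configuration:

```toml
[tool.ruff]
line-length = 100

[tool.ruff.lint]
select = ["E", "F", "I"]      # pycodestyle, pyflakes, import sorting

[tool.mypy]
python_version = "3.11"
ignore_missing_imports = true
```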