TODO: Explain the concept around the cyber instancer
Please note: setup for this application is done in 3 stages: deploying a kubernetes cluster, setting up a docker container registry, and (finally) actually deploying this app into the kubernetes cluster.

There are essentially 3 ways to run this application: inside a kubernetes cluster (configured via `config.yml`), outside the cluster using a `k3s.yaml` file, and with docker compose.

Copy `config.example.yml` to `config.yml` and update it with your information. See below for more info:
- `login_secret_key`: can be created by running the following in a python3 interpreter:

  ```python
  import base64
  import secrets

  base64.b64encode(secrets.token_bytes(32))
  ```

  Do NOT share this, or else an attacker will be able to log in as whomever they wish!
- `admin_team_id`: UUID of the admin account. Decode a login token to get an account's UUID, and then set the UUID here.
- `redis`: connection information for redis. If using the kubernetes config files below, docker compose, or vagrant, set `host: redis-service` and delete the port and password options. If you have a separate redis host, set that here.
- `postgres`: connection information for postgres. If using docker compose, make sure the host is `db`, and that the username, password, and database name match the corresponding config options in `docker-compose.yml`.
- `in_cluster`: set if the app will be deployed in a cluster. If not, it will use a `k3s.yaml` file at the top-level directory to authenticate with the cluster.
- `redis_resync_interval`: how often to sync between active clusters and the local cache, deleting instances as necessary.
- `dev`: enables some developer debugging api endpoints. Do not enable in production.
- `url`: URL to the instancer.
- `challenge_host`: IP or hostname that points to the kube cluster. Usually the same as `url` but without the `http(s)://` prefix.
- `rctf_mode`: boolean; whether or not the instancer is integrated into our custom fork of rctf. This disables registration, disables team database capabilities, disables generating login urls on the instancer directly, and redirects back to the rctf platform when appropriate instead of to instancer pages. Defaults to false, but we generally only use the instancer in rctf mode, so standalone mode will not be tested as thoroughly.
- `rctf_url`: URL to the rctf platform. Only applies if in rctf mode.
- `session_length`: number of seconds that a session is active (or `3600 * number of hours`). Defaults to one day.
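For reference, a filled-out `config.yml` might look roughly like the sketch below. This is an illustration only - the nesting of the `redis` and `postgres` sub-keys and all of the values here are assumptions, so defer to `config.example.yml`:

```yaml
# Illustrative sketch only - check config.example.yml for the real layout.
login_secret_key: "PASTE-BASE64-VALUE-HERE" # from the python snippet above
admin_team_id: "00000000-0000-0000-0000-000000000000"
redis:
  host: redis-service # kubernetes / docker compose / vagrant setups
postgres:
  host: db            # must match docker-compose.yml
  username: instancer # assumed - match docker-compose.yml
  password: CHANGE-ME
  database: instancer
in_cluster: true
redis_resync_interval: 60 # assumed to be in seconds
dev: false
url: https://instancer.example.com
challenge_host: instancer.example.com
rctf_mode: false
session_length: 86400 # 3600 * 24 = one day
```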
If not running in-cluster, copy `/etc/rancher/k3s/k3s.yaml` from the cluster machine into the top-level directory of this repository, and modify `clusters[0].cluster.server` (or similar) to be the actual remote IP address and not `127.0.0.1`.
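For example, the edited portion of the copied `k3s.yaml` might end up looking like this (the address is a placeholder for your cluster's real IP; everything else is left exactly as copied):

```yaml
clusters:
  - cluster:
      certificate-authority-data: "...leave as copied..."
      server: https://203.0.113.10:6443 # was https://127.0.0.1:6443
    name: default
```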
All of the following kubernetes configs can be applied with `sudo kubectl`, by running `kubectl apply -f PATH/TO/FILE` on the machine with k3s installed, or with `kubectl apply -f -` to read from stdin.

Create a `Secret` with a Cloudflare API token that has permission to edit zone DNS for the domain you want to put challenges on:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: cloudflare-token
type: Opaque
stringData:
  api-token: "TOKEN-GOES-HERE"
```
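For example, save the manifest as `cloudflare-token.yaml` (any filename works) and run `sudo kubectl apply -f cloudflare-token.yaml` on the k3s machine.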
Create an `Issuer` to solve ACME dns01 challenges using the secret:

```yaml
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-issuer
spec:
  acme:
    email: "EMAIL@GOES.HERE"
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-issuer-key
    solvers:
      - dns01:
          cloudflare:
            apiTokenSecretRef:
              name: cloudflare-token
              key: api-token
```
Note that cloudflare on its free plan does NOT offer certificates for `*.subdomain.domain.tld`, so you will need to disable cloudflare's reverse proxy for at least sub-subdomains.
Create a `Certificate` using the issuer:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: wildcard-domain
spec:
  secretName: wildcard-domain
  issuerRef:
    name: letsencrypt-issuer
    kind: Issuer
    group: cert-manager.io
  commonName: "*.DOMAIN.GOES.HERE"
  dnsNames:
    - "DOMAIN.GOES.HERE"
    - "*.DOMAIN.GOES.HERE"
```
Create a `TLSStore` using the certificate:

```yaml
apiVersion: traefik.containo.us/v1alpha1
kind: TLSStore
metadata:
  name: default
spec:
  certificates:
    - secretName: wildcard-domain
  defaultCertificate:
    secretName: wildcard-domain
```
Create the `cyber-instancer` namespace, along with the service account and cluster role that let the instancer manage challenge resources:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  labels:
    kubernetes.io/metadata.name: cyber-instancer
  name: cyber-instancer
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cyber-instancer
  namespace: cyber-instancer
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cyber-instancer
  namespace: cyber-instancer
rules:
  - apiGroups: [""]
    resources: ["services", "namespaces"]
    verbs: ["list", "get", "watch", "create", "update", "patch", "delete"]
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["list", "get", "watch", "create", "update", "patch", "delete"]
  - apiGroups: ["networking.k8s.io"]
    resources: ["ingresses", "networkpolicies"]
    verbs: ["list", "get", "watch", "create", "update", "patch", "delete"]
  - apiGroups: ["traefik.containo.us"]
    resources: ["ingressroutes"]
    verbs: ["list", "get", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cyber-instancer-binding
  namespace: cyber-instancer
subjects:
  - kind: ServiceAccount
    name: cyber-instancer
    namespace: cyber-instancer
roleRef:
  kind: ClusterRole
  name: cyber-instancer
  apiGroup: rbac.authorization.k8s.io
```
Create a `Secret` containing the instancer's `config.yml` under the `config` key (the values below are placeholders - use your real config):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: instancer-config
  namespace: cyber-instancer
type: Opaque
stringData:
  config: |-
    secret_key: asdf
    foo: bar
```
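Rather than writing this manifest by hand, an equivalent secret can be generated straight from your local config file with `kubectl create secret generic instancer-config --from-file=config=config.yml --namespace cyber-instancer`.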
Then deploy the instancer itself (replacing `YOUR_DOCKER_REGISTRY` and `YOUR_DOMAIN` accordingly, keeping in mind that the domain has to match the certificate domain in order for https to work properly):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cyber-instancer
  namespace: cyber-instancer
  labels:
    app.kubernetes.io/name: cyber-instancer
spec:
  replicas: 4
  selector:
    matchLabels:
      app.kubernetes.io/name: cyber-instancer
  template:
    metadata:
      labels:
        app.kubernetes.io/name: cyber-instancer
    spec:
      serviceAccountName: cyber-instancer
      containers:
        - name: app
          image: YOUR_DOCKER_REGISTRY/cyber-instancer:latest
          ports:
            - containerPort: 8080
          resources:
            limits:
              cpu: 500m
              memory: 512Mi
            requests:
              cpu: 50m
              memory: 64Mi
          volumeMounts:
            - name: config
              mountPath: "/app/config.yml"
              readOnly: true
              subPath: "config.yml"
      volumes:
        - name: config
          secret:
            secretName: instancer-config
            items:
              - key: config
                path: config.yml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  namespace: cyber-instancer
  labels:
    app.kubernetes.io/name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: redis
  template:
    metadata:
      labels:
        app.kubernetes.io/name: redis
    spec:
      containers:
        - name: redis
          image: redis:7-alpine
          ports:
            - containerPort: 6379
          resources:
            limits:
              cpu: 500m
              memory: 512Mi
            requests:
              cpu: 50m
              memory: 64Mi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cyber-instancer-worker
  namespace: cyber-instancer
  labels:
    app.kubernetes.io/name: cyber-instancer-worker
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: cyber-instancer-worker
  template:
    metadata:
      labels:
        app.kubernetes.io/name: cyber-instancer-worker
    spec:
      serviceAccountName: cyber-instancer
      containers:
        - name: app
          image: YOUR_DOCKER_REGISTRY/cyber-instancer:latest
          ports:
            - containerPort: 8080
          resources:
            limits:
              cpu: 500m
              memory: 512Mi
            requests:
              cpu: 50m
              memory: 64Mi
          command: ["python", "worker.py"]
          volumeMounts:
            - name: config
              mountPath: "/app/config.yml"
              readOnly: true
              subPath: "config.yml"
      volumes:
        - name: config
          secret:
            secretName: instancer-config
            items:
              - key: config
                path: config.yml
---
apiVersion: v1
kind: Service
metadata:
  name: cyber-instancer-service
  namespace: cyber-instancer
  labels:
    app.kubernetes.io/name: cyber-instancer-service
spec:
  selector:
    app.kubernetes.io/name: cyber-instancer
  type: NodePort
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      nodePort: 31337
---
apiVersion: v1
kind: Service
metadata:
  name: redis-service
  namespace: cyber-instancer
  labels:
    app.kubernetes.io/name: redis-service
spec:
  selector:
    app.kubernetes.io/name: redis
  ports:
    - protocol: TCP
      port: 6379
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: cyber-instancer-ingress
  namespace: cyber-instancer
spec:
  entryPoints:
    - web
    - websecure
  routes:
    - match: Host(`YOUR_DOMAIN`)
      kind: Rule
      services:
        - name: cyber-instancer-service
          port: 8080
```
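Once everything is applied, `kubectl get pods --namespace cyber-instancer` should show the app replicas, the worker, and redis all running, and the instancer will be reachable both on node port 31337 directly and via traefik on `YOUR_DOMAIN`.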
Create the database tables by running `fixture.sql` against your postgres database, replacing the domains with your own. If using docker compose, docker compose will automatically build the database on first run.

In this repository, we use many different linters to help format all of the different types of code involved in the project. To install the checks, install pre-commit and then run `pre-commit install`. Pre-commit will then run on all staged files and stop you from making a commit that fails a check. Note that after pre-commit fails in a commit, it will format the files properly, but you still need to `git add` those changes.
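You can also run every check against the whole repository at any time with `pre-commit run --all-files`.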
Runs the app in a development environment. Requires Docker Compose (and by extension docker) to be installed.

- `docker compose up --build -d` (same as `npm run dev`): (re)starts the images, rebuilding react and running the flask server on port 8080
- `docker compose down`: stops the flask server

Vagrant is a program that allows for automatic virtual machine deployment. It itself does NOT run virtual machines; rather, it will hook into other software such as VMWare Workstation, Hyper-V, or VirtualBox to run the machines.
- `vagrant` must be installed and set up, including a compatible virtualization software.
- If deploying with virtualbox, the vm IPs must be moved from the `192.168.0.x` range into the `192.168.56.x` range (the host-only range virtualbox permits by default), for example by running the following inside the `k3-vagrant` directory:

  ```sh
  sed -i "s/192.168.0/192.168.56/g" *
  ```

- `rsync` must be installed. One way to do so on windows is to install it via Cygwin and select both `rsync` and `ssh`; on macos, use homebrew with `brew install rsync`.
- `is_arm64` in `k3-vagrant/Vagrantfile`: set this to `return true` if you are on an m1/m2 mac, or `return false` if on x86_64 (basically everything else).

cd into the `k3-vagrant` directory, then run `vagrant up`. You may need to use `--provider=virtualbox` or `--provider=vmware_desktop` if vagrant chooses the wrong virtualization software - run `vagrant destroy` if it gives an error about already deployed vms. This may take a while depending on your system. Note that some of the command's response may be red - this is normal.

To re-run provisioning (for example, after changing the setup), run `vagrant provision`. `vagrant suspend` will suspend the vms, allowing for safe resuming; `vagrant halt` will fully shut down the vms (unsupported); and `vagrant destroy`
will delete the vms.

Once running, the cluster is reachable at `192.168.0.10` (vmware), or `192.168.56.10` (virtualbox). To access it by name, you will need to edit your `/etc/hosts` file: challenges are deployed on subdomains such as `testing.instancer.local`, and adding `instancer.local` pointing to the above IP will allow for accessing the instancer website. Add to the bottom of `/etc/hosts` following the below format:

```
192.168.0.10 instancer.local
192.168.0.10 testing.instancer.local
192.168.0.10 testing2.instancer.local
...
```

Every challenge subdomain you want to visit needs its own line in `/etc/hosts`, and you may need to disable secure DNS in order for the browser to use `/etc/hosts`.

These commands are more or less legacy since the react app is heavily dependent on a backend existing. Nevertheless, they are still here:
- `npm run build`: Builds the app for production to the `build` folder. It bundles React in production mode and optimizes the build for the best performance.
- `npm run dev`: Same as running `docker compose up --build -d` in the project root: see above.
- `npm run lint`: Tests the linter against the code to ensure code conformity. Superseded by the pre-commit checks.
- `npm run preview`: Builds just the react app and runs a preview. Does not start up any backend server and will probably be non-functional.