This controller is installed into Civo K3s client clusters and handles the mounting of Civo Volumes onto the correct nodes, as well as promoting the storage into the cluster as a Persistent Volume.
There are three parts to a CSI driver - node, controller and identity. The easiest way to think about these is:

* Identity - reports information about the plugin itself: its name, its capabilities and whether it's healthy
* Controller - handles the cluster-wide side of the volume lifecycle against the Civo API: creating, deleting, attaching and detaching volumes
* Node - runs on each node and handles formatting volumes and mounting/unmounting them for pods
The order for calls is usually:

1. `GetPluginCapabilities` is called
2. `Probe` is called
3. `CreateVolume` is called to create it in the Civo API
4. `ControllerPublishVolume` to attach the volume to the correct node
5. `NodeStageVolume` is called to format the volume (if not already formatted) and mount it to a node-wide set of mount points
6. `NodePublishVolume` is called to bind mount that mount point into the pod's specific mount point
7. `NodeUnpublishVolume` is called to remove the pod's bind mount
8. `NodeUnstageVolume` is called to unmount the node-wide mount point
9. `ControllerUnpublishVolume` is called to detach the volume from the node
At this point the volume still exists and still contains data. If the operator wants to delete it, then `kubectl delete pv ...` will actually call the Controller's `DeleteVolume`. If a PV is requested, the Kubernetes control plane will ensure space is available with the Controller's `GetCapacity`, and if the operator lists all volumes this is done with the Controller's `ListVolumes`.
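As a rough illustration of how that flow is triggered in practice, creating a PersistentVolumeClaim kicks off `CreateVolume`, and attaching/mounting happens once a pod consumes the claim; deleting the claim (with a `Delete` reclaim policy) eventually reaches `DeleteVolume`. A minimal sketch, assuming the Civo StorageClass is named `civo-volume` (check `kubectl get storageclass`) and using a hypothetical claim name:

```sh
# Create a 10Gi claim against the Civo CSI driver (names here are illustrative)
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: civo-volume
  resources:
    requests:
      storage: 10Gi
EOF

# Watch the PV appear once the claim is bound (and a pod consumes it)
kubectl get pvc demo-claim
kubectl get pv

# Deleting the claim releases the PV; with a Delete reclaim policy this ends in DeleteVolume
kubectl delete pvc demo-claim
```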
The driver expects the following to be in place:

* A Kubernetes Secret called `civo-api-access` within the `kube-system` namespace containing keys of `api-key`, `api-url`, `cluster-id`, `namespace` and `region` (see the sketch after this list)
* A folder at `/var/lib/kubelet/plugins/csi.civo.com` (for writing a socket to that is shared between containers)
* A file at `/etc/civostatsd` containing a TOML set of configuration (the same that's used for https://github.com/civo/civostatsd so it should already be available), for example:

```toml
server="https://api.civo.com"
token="12345678901234567890"
region="NYC1"
instance_id="12345678-1234-1234-1234-1234567890"
```
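As a quick sketch of creating that Secret by hand (the values below are placeholders, not real credentials):

```sh
# Placeholder values - substitute your own API key, cluster ID, namespace and region
kubectl -n kube-system create secret generic civo-api-access \
  --from-literal=api-key="YOUR_CIVO_API_KEY" \
  --from-literal=api-url="https://api.civo.com" \
  --from-literal=cluster-id="YOUR_CLUSTER_ID" \
  --from-literal=namespace="YOUR_CIVO_NAMESPACE" \
  --from-literal=region="NYC1" \
  --dry-run=client -o yaml | kubectl apply -f -
```

On a Civo-managed cluster this Secret should already be present; creating it manually is mostly useful for development.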
Normally for our Civo Kubernetes integrations we'd recommend visiting the getting started document in the CivoStack guide, but this is a different situation (installed on the client cluster, not the supercluster), so below are some similar steps to get you started:
Unlike Operators, you can't as easily run CSI drivers locally while connected into a cluster (there is a way with `socat` and forwarding of Unix sockets, but we haven't experimented with that yet; a rough illustration follows below).
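For the curious, the general shape of that `socat` approach would be something like the following - untested here and purely illustrative, with `DEV_MACHINE` standing in for your workstation's address and the locally running driver listening on `/tmp/csi.sock`:

```sh
# On the node: serve the kubelet's expected CSI socket path and forward connections to the dev machine
socat UNIX-LISTEN:/var/lib/kubelet/plugins/csi.civo.com/csi.sock,fork TCP:DEV_MACHINE:10000

# On the dev machine: accept those connections and pass them to the locally running driver's socket
socat TCP-LISTEN:10000,reuseaddr,fork UNIX-CONNECT:/tmp/csi.sock
```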
So the way we test our work is:
The CSI Sanity suite is integrated, along with some custom unit tests, and is a simple `go test` away 🥳
This will run the full Kubernetes Storage SIG's suite of tests against the endpoints we're supposed to have implemented to comply with the spec.
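For example, from the root of the repository:

```sh
# Run the unit tests and the embedded CSI sanity suite across all packages
go test -v ./...
```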
The steps are:

1. Set `IMAGE_NAME` to a random or recognisable name (`export IMAGE_NAME=$(uuidgen | tr '[:upper:]' '[:lower:]')` works well)
2. `docker build -t ttl.sh/${IMAGE_NAME}:2h .`
3. `docker push ttl.sh/${IMAGE_NAME}:2h`
4. Copy the `deploy/kubernetes` folder to `deploy/kubernetes-dev` with `cp -rv deploy/kubernetes deploy/kubernetes-dev` and replace all occurrences of `civo-csi:latest` in there with `YOUR_IMAGE_NAME:2h` (ENV variable interpolation won't work here); this folder is automatically in `.gitignore`
5. Create a `Secret` within the `civo-system` namespace called `api-access` containing the keys `api-key` set to your Civo API key, `api-url` pointing to either `https://api.civo.com` or a xip.io/ngrok URL pointing to your local development environment (depending on where your cluster is running) and `region` set to the region the current cluster is running in
6. `kubectl apply -f deploy/kubernetes-dev` (a quick sanity check of the deployment follows this list)
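After applying the dev manifests, one way to sanity-check the deployment (pod names and namespaces depend on your manifests, so the `grep` below is just a loose filter):

```sh
# The driver should register itself under the csi.civo.com name
kubectl get csidrivers

# A Civo storage class should exist for new PVCs to use
kubectl get storageclass

# Watch the controller and node plugin pods come up
kubectl get pods -A | grep -i csi
```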
There are e2e tests included in this repo that will provision a cluster and run integrations against it. These tests require a valid Civo API key to be present in the `.env` file (sample provided) in the root of the project directory. The tests can be found in the `e2e` directory and can be run with the following:
```sh
go test -v ./e2e/...
```
To aid in development, the tests can be run with a `-retain` flag to persist the provisioned cluster between test runs:

```sh
go test -v ./e2e/... -retain
```