Since we use go-workflows, currently configured with a SQLite backend, we need durable persistence for the database and its WAL.
This PR changes the manifest to use a StatefulSet with 2 replicas for now to demonstrate that state distributed across replicas should be fine for this use case. It also adds a volumeClaimTemplate for the database storage.
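The shape of the change is roughly the following (the names, mount path, and storage size here are illustrative, not the exact manifest in this PR). Each replica gets its own PersistentVolumeClaim from the `volumeClaimTemplates` section, so its SQLite database and WAL survive pod restarts:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: spicedb-kubeapi-proxy
spec:
  replicas: 2
  serviceName: spicedb-kubeapi-proxy
  selector:
    matchLabels:
      app: spicedb-kubeapi-proxy
  template:
    metadata:
      labels:
        app: spicedb-kubeapi-proxy
    spec:
      containers:
        - name: proxy
          image: spicedb-kubeapi-proxy   # illustrative image name
          volumeMounts:
            - name: workflow-db
              mountPath: /var/lib/proxy  # SQLite db + WAL live here
  volumeClaimTemplates:
    - metadata:
        name: workflow-db
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```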
## What's the deal with durability and HA?
This project is currently configured to persist workflow state to a local SQLite database. It turns out having multiple replicas, each with its own local SQLite database, should be fine because the workflows are engineered to be idempotent and SpiceDB and the Kube API server act as the source of truth.
If 2 replicas end up receiving the exact same request (e.g. a client-side retry), two different go-workflows instances, each with its own SQLite database, will execute the workflow. Even if activities from one instance interleave with those from the other, each individual activity is coordinated against the state of the backing SpiceDB/Kube API server.
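The idempotency pattern that makes this safe can be sketched as follows. This is a hypothetical helper, not the project's actual activity code, and the in-memory map stands in for the authoritative SpiceDB state:

```go
package main

import "fmt"

// store stands in for the authoritative SpiceDB state; the real proxy
// would query and write SpiceDB, not an in-memory map.
type store struct {
	relationships map[string]bool
}

// ensureRelationship is an idempotent activity: it checks the source of
// truth first and only writes when the relationship is missing, so two
// workflow instances replaying the same request converge on the same
// state regardless of how their activities interleave.
func (s *store) ensureRelationship(rel string) bool {
	if s.relationships[rel] {
		return false // already present: a duplicate execution is a no-op
	}
	s.relationships[rel] = true
	return true
}

func main() {
	s := &store{relationships: map[string]bool{}}
	// Two replicas handling the same client-side retry:
	first := s.ensureRelationship("namespace:default#creator@user:alice")
	second := s.ensureRelationship("namespace:default#creator@user:alice")
	fmt.Println(first, second) // only the first execution actually writes
}
```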
So for all intents and purposes, durability and high availability for the MVP are addressed by:

- using persistent volumes for the SQLite database and its WAL
- running multiple replicas, each with its own independent SQLite database, which should be fine
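Because a StatefulSet gives each pod a stable hostname (e.g. `proxy-0`) and its own PVC, each replica can derive an independent database path on its mounted volume. A minimal sketch, where the mount directory and file naming are assumptions, not the project's actual configuration:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// dbPath returns the SQLite database file for this replica. With a
// StatefulSet, the hostname is stable across restarts and the PVC is
// per-pod, so each replica keeps an independent database and WAL that
// survive rescheduling.
func dbPath(mountDir, hostname string) string {
	return filepath.Join(mountDir, fmt.Sprintf("%s.sqlite", hostname))
}

func main() {
	host, err := os.Hostname()
	if err != nil {
		host = "unknown"
	}
	fmt.Println(dbPath("/var/lib/proxy", host))
}
```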
Closes https://github.com/authzed/spicedb-kubeapi-proxy/issues/18 Closes https://github.com/authzed/spicedb-kubeapi-proxy/issues/17
Partially supports https://github.com/authzed/spicedb-kubeapi-proxy/issues/7