Closed chrisbsmith closed 2 years ago

I'm just getting started with Uptime Kuma and was looking for a helm chart for it (thanks btw!) and came across your repo. However, I was wondering why you chose to use a Deployment + PVC instead of a StatefulSet to deploy the pod? I've had issues in the past where a new pod was unable to start because the terminating pod was still running and hanging onto the PVC.
My understanding is that this is exactly the situation where a StatefulSet is useful. Maybe I just don't understand the uptime-kuma infrastructure requirements well enough.
If interested, I have a working PR in my fork that I could submit upstream: https://github.com/chrisbsmith/uptime-kuma-helm/pull/2
Hey @chrisbsmith, the reason behind using a Deployment is simply that I didn't know better :D I created the Chart via the common Helm commands, which only provide a basic skeleton.
I would really be interested in your change proposals, and I think you are completely right that a StatefulSet fits the application.
Please open a new Pull Request with your changes 🚀
Totally get it! I only knew about this from struggling with issues in the past.
Alright, submitted ~#11~ #12. Let me know if you have any questions on it.
Thanks!!
FYI: you had the right approach in the beginning with Deployment + PVC. StatefulSets are used to scale out stateful workloads: you define volumeClaimTemplates, which let the K8s controller stamp out a new PVC for each replica as you increase the replica count. Very useful for HA workloads.
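For illustration, a minimal sketch of that mechanism (the names, image tag, and storage size are placeholders, not taken from this chart):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: uptime-kuma
spec:
  serviceName: uptime-kuma
  replicas: 3
  selector:
    matchLabels:
      app: uptime-kuma
  template:
    metadata:
      labels:
        app: uptime-kuma
    spec:
      containers:
        - name: uptime-kuma
          image: louislam/uptime-kuma:1
          volumeMounts:
            - name: data
              mountPath: /app/data
  # The controller stamps out one PVC per replica from this template:
  # data-uptime-kuma-0, data-uptime-kuma-1, data-uptime-kuma-2, ...
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 4Gi
```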
Since uptime-kuma is using SQLite, it only allows for a single instance. Once support for PgSQL (#959) or MySQL (#953) is added, StatefulSets would make sense. But you'd also want to add the PgSQL/MySQL DB as a dependency.
Thanks for this explanation @maxirus. So in the current state, things are really a wash? Is there a benefit to reverting to Deployment + PVC? I should've done more research on the uptime-kuma model, but I don't see any harm in leaving it as a StatefulSet, especially if a team is working to add SQL support.
@chrisbsmith Yes. "Weird" things will start to happen. I say weird in the sense that it's not what a typical user would expect. For example, you'd want a Deployment to fail if the PVC is already attached during a scaling event. With a StatefulSet, it would just create a new volume with new data, and in the event of a scale-in you'd potentially lose that data.
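For contrast, a rough sketch of the Deployment + PVC shape being described (placeholder names again): with a single ReadWriteOnce claim and a Recreate strategy, an extra pod fails to attach the volume instead of silently starting with fresh data.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: uptime-kuma-data
spec:
  # ReadWriteOnce: only one node can mount the volume, so a scaled-up
  # second pod gets stuck attaching instead of getting new, empty storage.
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 4Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: uptime-kuma
spec:
  replicas: 1
  strategy:
    # Recreate stops the old pod before starting the new one,
    # so the two never fight over the same PVC during an upgrade.
    type: Recreate
  selector:
    matchLabels:
      app: uptime-kuma
  template:
    metadata:
      labels:
        app: uptime-kuma
    spec:
      containers:
        - name: uptime-kuma
          image: louislam/uptime-kuma:1
          volumeMounts:
            - name: data
              mountPath: /app/data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: uptime-kuma-data
```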
@maxirus, got it. These make sense now that you've described them. The events you outlined aren't critical when the data is persisted to a remote service. But when using a remote storage solution like SQL, you wouldn't need to put uptime-kuma in a StatefulSet anyway; you would do that with the backend instead.
@dirsigler apologies for pushing this change. At least we got to learn a little something here.
> But when using a remote storage solution like SQL, you wouldn't need to put uptime-kuma in a StatefulSet anyway; you would do that with the backend instead
Correct; unless for some reason they decide to persist config data, cache, etc. to disk. I'm not too familiar with the project, as I don't intend to use it until a proper DB is supported. IMHO, it's important that the tool doing the monitoring is itself HA. In a world with Docker Compose and K8s, using MySQL/PgSQL for that becomes trivial.
@chrisbsmith no worries! I didn't know either, but I am happy to learn new things.
I would then say we revert the changes later on, so the chart again spins up a Deployment and PVC.
EDIT: and big thanks @maxirus for the insights ❤️
@dirsigler my suggestion would be not to revert the changes, but to add a switch that toggles between a Deployment and a StatefulSet, defaulting to Deployment. That makes it easy to change later, as soon as Uptime Kuma can handle it, and if someone wants to use a StatefulSet before then, there is an easy way to do so (see the sketch below).
Of course, a breaking change in changing a default value must be communicated, but that's a bridge we'll cross when we come to it.
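For illustration, a minimal sketch of what such a switch could look like in the chart templates, assuming a hypothetical `useStatefulSet` value (not necessarily the flag name a PR would use) and the `uptime-kuma.fullname` helper from the generated chart skeleton:

```yaml
# values.yaml (hypothetical flag, defaulting to a Deployment):
#   useStatefulSet: false
#
# templates/deployment.yaml (sketch):
apiVersion: apps/v1
kind: {{ .Values.useStatefulSet | ternary "StatefulSet" "Deployment" }}
metadata:
  name: {{ include "uptime-kuma.fullname" . }}
spec:
  replicas: 1
  {{- if .Values.useStatefulSet }}
  # serviceName (and volumeClaimTemplates) only apply to the StatefulSet variant
  serviceName: {{ include "uptime-kuma.fullname" . }}
  {{- end }}
  selector:
    matchLabels:
      app: uptime-kuma
  template:
    metadata:
      labels:
        app: uptime-kuma
    spec:
      containers:
        - name: uptime-kuma
          image: louislam/uptime-kuma:1
```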
Interesting approach, indeed. Already reviewed your changes and approved them.
With these changes we default to a Deployment but keep the StatefulSet "design" for the future. As maxirus already mentioned here: https://github.com/dirsigler/uptime-kuma-helm/issues/10#issuecomment-998256557, we can then also add MySQL/Postgres as a dependency later, once all the changes have been made in the upstream app.
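For reference, adding the DB as a chart dependency could look roughly like this in Chart.yaml (the Bitnami PostgreSQL chart and the version are just placeholders, not a decision):

```yaml
dependencies:
  - name: postgresql
    version: "11.x.x"  # placeholder version
    repository: https://charts.bitnami.com/bitnami
    # only pulled in when the user opts into the bundled DB
    condition: postgresql.enabled
```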