nogweii opened 4 months ago

I am thinking about setting up AutoKuma to run in my homelab, but it is powered by Kubernetes running on containerd rather than Docker.
Can I run AutoKuma in my homelab, at least with just static configs for the time being?
(It would also be cool if it could support a custom resource in Kubernetes as an alternative to container labels.)
Hi, you can just disable the docker integration by setting `AUTOKUMA__DOCKER__ENABLED=false`.
As for Kubernetes, I think native Kubernetes support (i.e. using CRDs etc.) is way out of scope for AutoKuma. Reading (Pod, Deployment, DaemonSet, etc.) labels using the Kubernetes API, on the other hand, is something I see as possible, although I don't have any plans for implementing this myself (i.e. I'm open to PRs).
Depending on how you manage your cluster (e.g. using Terraform/OpenTofu, Pulumi, etc.), you might be able to automatically mount specific ConfigMaps as static Monitor Definitions, as sketched below.
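A minimal sketch of that approach, assuming AutoKuma reads static monitor definitions from /autokuma/static-monitors; the ConfigMap, volume, and image names here are hypothetical:

```yaml
# Hypothetical ConfigMap holding one static monitor definition.
apiVersion: v1
kind: ConfigMap
metadata:
  name: autokuma-static-monitors
data:
  example.json: |
    { "name": "Example", "type": "http", "url": "https://example.com" }
---
# Fragment of an AutoKuma pod spec mounting it as the static monitor directory.
spec:
  containers:
    - name: autokuma
      image: ghcr.io/bigboot/autokuma:latest
      volumeMounts:
        - name: static-monitors
          mountPath: /autokuma/static-monitors
  volumes:
    - name: static-monitors
      configMap:
        name: autokuma-static-monitors
```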
There is a problem with mounting ConfigMaps as volumes, due to how Kubernetes does it (it creates symlinks) and how AutoKuma then tries to sync them:

```
WARN [autokuma::sync] Encountered error during sync: Unable to deserialize: Unsupported static monitor file type: /autokuma/static-monitors/..2024_08_26_08_07_18.3501614066, supported: .json, .toml
```
When I mount the ConfigMap in k8s it looks like this:

```
root@autokuma-58fb5b9fdf-b97mc:/autokuma/static-monitors# ls
example.json
root@autokuma-58fb5b9fdf-b97mc:/autokuma/static-monitors# ls -lah
total 12K
drwxrwxrwx 3 root root 4.0K Aug 26 08:07 .
drwxr-xr-x 3 root root 4.0K Aug 26 08:07 ..
drwxr-xr-x 2 root root 4.0K Aug 26 08:07 ..2024_08_26_08_07_18.3501614066
lrwxrwxrwx 1 root root   32 Aug 26 08:07 ..data -> ..2024_08_26_08_07_18.3501614066
lrwxrwxrwx 1 root root   19 Aug 26 08:07 example.json -> ..data/example.json
```
For now I am using `subPath` as a workaround (see the sketch below), but it is not fun to manage, as every JSON file needs to be a separate volume mount in the k8s deployment.
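Roughly, the workaround looks like this (volume and file names are illustrative): because `subPath` mounts copy the file directly instead of going through the `..data` symlink, the symlink directories never appear, but each file needs its own entry:

```yaml
# Hypothetical container spec fragment: one subPath mount per JSON file.
volumeMounts:
  - name: static-monitors
    mountPath: /autokuma/static-monitors/example.json
    subPath: example.json
  - name: static-monitors
    mountPath: /autokuma/static-monitors/second.json
    subPath: second.json
```

A further caveat: `subPath` mounts are copied once at pod start, so later ConfigMap updates are not propagated into the container.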
As a possible solution, AutoKuma could simply look for .json and .toml files in the directory and ignore everything else.
I don't have experience with k8s, but if it helps, this project implemented discovery support via annotations.
I've added a native Kubernetes integration for creating Monitors using CRs. This works fine using a local minikube cluster. However, since I haven't used Kubernetes in some time, I'll need some help creating a set of deployment YAMLs for a typical deployment: just a basic set which would work in a typical cluster with RBAC enabled etc.
Additionally, I'd need to know which settings I need to make configurable.
If anyone who's looking for a native Kubernetes integration can provide these things, I'll enable the integration with the next release.
Wonderful news! Not an expert, but I'd like to help if needed. It's really strange how little known AutoKuma is compared to the popularity of Uptime Kuma: they have a huge user base and 600+ contributors! To me AutoKuma is the missing core of Uptime Kuma; without it I'd never use it (DevOps engineer here), because GitOps is so much better than UI-based configuration.
If you can release it, I'd like to try it as soon as possible, and I can share my deployment YAMLs if it works out well. YAML deployments are nice, but ultimately a Helm chart is probably best suited.
You already mentioned a minikube deployment; those should be enough for now anyway.
@emouawad the integration is available in the dev channel (`ghcr.io/bigboot/autokuma:master`).
You'd need to enable the integration (and probably disable docker so it doesn't spam the logs) with `AUTOKUMA__KUBERNETES__ENABLED=true` (and `AUTOKUMA__DOCKER__ENABLED=false`), as shown below.
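Expressed as container environment variables in a Deployment, that would look roughly like this (a sketch, not a complete pod spec):

```yaml
env:
  - name: AUTOKUMA__KUBERNETES__ENABLED
    value: "true"
  - name: AUTOKUMA__DOCKER__ENABLED
    value: "false"
```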
The CRDs need to be applied beforehand; you can find them at autokuma/kubernetes/crds-autokuma.yml.
An example CR looks like this:
```yaml
apiVersion: "autokuma.bigboot.dev/v1"
kind: KumaEntity
metadata:
  name: hello-k8s-monitor
spec:
  config:
    name: Static Json Example
    type: http
    url: https://example.com
```
The integration should pick up your service account/cluster config automatically when running inside the cluster.
Hey @BigBoot - Works in GKE - Thanks!
RBAC needed:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: autokuma
rules:
  - apiGroups: ["autokuma.bigboot.dev"]
    resources: ["*"]
    verbs: ["list", "patch", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: autokuma-binding
subjects:
  - kind: ServiceAccount
    name: default
    namespace: uptime-kuma
roleRef:
  kind: ClusterRole
  name: autokuma
  apiGroup: rbac.authorization.k8s.io
```
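If you'd rather not grant this to the default ServiceAccount, one variant (the name here is arbitrary) is a dedicated ServiceAccount referenced in the binding's subjects:

```yaml
# Hypothetical dedicated ServiceAccount for AutoKuma.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: autokuma
  namespace: uptime-kuma
```

The AutoKuma pod would then set `serviceAccountName: autokuma` in its spec.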
@BigBoot Would it be possible to add Status Pages as well?
I believe the CR might need a bit of changing to add a unique id/key for status pages and bind on it, so that the friendly name becomes updatable?
There's #81 for keeping track of this, and ENTITY_TYPES.md describes how to resolve the different kinds of entities. Notably, as of now, support for status pages is completely missing.
Unfortunately I cannot influence the actual Uptime Kuma id; it gets assigned by Uptime Kuma on the database insert. Most entity types support resolving by an "autokuma id"; in the case of Kubernetes CRs that's the `metadata.name`.
@BigBoot are you still interested in the K8s deployment YAML files? I'm happy to provide mine; I deployed AutoKuma on EKS and it works perfectly fine.
@tschlaepfer Hi, could you share your code? I'm trying to do this on a k0s cluster, but I can't get it to initialize with the uptime-kuma page.
I'm including my code below, in case you can identify what the fault may be.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: autokuma
  namespace: kuma
spec:
  replicas: 1
  selector:
    matchLabels:
      app: autokuma
  template:
    metadata:
      labels:
        app: autokuma
    spec:
      containers:
```
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kuma-static-monitors
  namespace: kuma
data:
  example.json: |
    {
      "name": "Test nginx server",
      "type": "http",
      "url": "http://nginx-service.default.svc.cluster.local:9113/metrics"
    }
```
@aelogonpin I have shared my deployment in another issue in this project, please have a look here: https://github.com/BigBoot/AutoKuma/issues/91#issuecomment-2457517176
From a quick look at your deployment, I think you are missing the `AUTOKUMA__STATIC_MONITORS` environment variable.
Hope this helps.
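A minimal sketch of that variable in the container spec (the value is an assumption; it must match wherever the ConfigMap volume is mounted):

```yaml
env:
  - name: AUTOKUMA__STATIC_MONITORS
    # Directory AutoKuma scans for static monitor definitions;
    # point it at the ConfigMap volume's mountPath.
    value: /autokuma/static-monitors
```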