dirsigler / uptime-kuma-helm

This Helm chart installs Uptime-Kuma from @louislam into your Kubernetes cluster.
https://helm.irsigler.cloud
GNU General Public License v3.0

ArgoCD support? #160

Open · RFlintstone opened this issue 3 months ago

RFlintstone commented 3 months ago

Is it possible, or do you have a recommended way, to easily deploy this Helm chart with ArgoCD? I got it working with the helm command, but I'd much rather use Argo.
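
For reference, the plain Helm install that does work for me is roughly the following (the repo alias, namespace and values file are just what I use locally):

helm repo add uptime-kuma https://helm.irsigler.cloud
helm repo update
# install/upgrade the chart into its own namespace, using my own values file
helm upgrade --install uptime-kuma uptime-kuma/uptime-kuma \
  --namespace uptime-kuma --create-namespace \
  --values values.yaml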

So far it only works for me via plain helm upgrade; these are the chart and values I'm using:

# Chart.yaml
apiVersion: v2
appVersion: "1.23.11"
deprecated: false
description: A self-hosted Monitoring tool like "Uptime-Robot".
home: https://github.com/dirsigler/uptime-kuma-helm
icon: https://raw.githubusercontent.com/louislam/uptime-kuma/master/public/icon.png
maintainers:
  - name: dirsigler
    email: dennis@irsigler.dev
name: uptime-kuma
sources:
  - https://github.com/louislam/uptime-kuma
type: application
version: 2.18.0

# values.yaml
# Default values for uptime-kuma.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

image:
  repository: louislam/uptime-kuma
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: "1.23.11-debian"

nameOverride: ""
fullnameOverride: ""

# If this option is set to false, a StatefulSet is used instead of a Deployment
useDeploy: true

serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations: { }
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

podAnnotations: { }
podLabels:
  { }
  # app: uptime-kuma
podEnv:
  # a default port must be set; required by the container
  - name: "UPTIME_KUMA_PORT"
    value: "3001"

podSecurityContext:
  { }
  # fsGroup: 2000

securityContext:
  { }
  # capabilities:
  #   drop:
  #   - ALL
  # readOnlyRootFilesystem: true
  # runAsNonRoot: true
  # runAsUser: 1000

service:
  #  type: ClusterIP
  type: LoadBalancer
  port: 3001
  nodePort:
  annotations: { }

ingress:
  enabled: true
  className: "traefik"
  extraLabels:
    { }
    # vhost: uptime-kuma.company.corp
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: "websecure"
  hosts:
    - host: uptime.mydomain.net
      paths:
        - path: /
          pathType: ImplementationSpecific

  tls:
    #[]
    - hosts:
        - uptime.mydomain.net
      secretName: uptime-mydomain-net-tls
    # - secretName: chart-example-tls
    #   hosts:
    #     - chart-example.local

  labels:
    - "traefik.enable=true"
    - "traefik.http.routers.uptime-kuma.rule=Host(`uptime.mydomain.net`)"
    - "traefik.http.routers.uptime-kuma.entrypoints=https"
    - "traefik.http.routers.uptime-kuma.tls=true"
    - "traefik.http.routers.uptime-kuma.tls.certresolver=letsencrypt-prod"
    - "traefik.http.services.uptime-kuma.loadBalancer.server.port=3001"

resources:
  { }
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

nodeSelector: { }

tolerations: [ ]

affinity: { }

livenessProbe:
  enabled: true
  timeoutSeconds: 2
  initialDelaySeconds: 15

readinessProbe:
  enabled: true
  initialDelaySeconds: 5

volume:
  enabled: true
  accessMode: ReadWriteOnce
  size: 4Gi
  # If you want to use a storage class other than the default, uncomment this
  # line and define the storage class name
  storageClassName: longhorn-sd
  # Reuse your own pre-existing PVC.
  # existingClaim: ""

# -- A list of additional volumes to be added to the pod
additionalVolumes:
  [ ]
  # - name: "additional-certificates"
  #   configMap:
  #     name: "additional-certificates"
  #     optional: true
  #     defaultMode: 420

# -- A list of additional volumeMounts to be added to the pod
additionalVolumeMounts:
  [ ]
  # - name: "additional-certificates"
  #   mountPath: "/etc/ssl/certs/additional/additional-ca.pem"
  #   readOnly: true
  #   subPath: "additional-ca.pem"

strategy:
  type: Recreate

# Prometheus ServiceMonitor configuration
serviceMonitor:
  enabled: false
  # -- Scrape interval. If not set, the Prometheus default scrape interval is used.
  interval: 60s
  # -- Timeout if metrics can't be retrieved in given time interval
  scrapeTimeout: 10s
  # -- Scheme to use when scraping, e.g. http (default) or https.
  scheme: ~
  # -- TLS configuration to use when scraping, only applicable for scheme https.
  tlsConfig: { }
  # -- Prometheus [RelabelConfigs] to apply to samples before scraping
  relabelings: [ ]
  # -- Prometheus [MetricRelabelConfigs] to apply to samples before ingestion
  metricRelabelings: [ ]
  # -- Prometheus ServiceMonitor selector, only select Prometheus's with these
  # labels (if not set, select any Prometheus)
  selector: { }

  # -- Namespace where the ServiceMonitor resource should be created, default is
  # the same as the release namespace
  namespace: ~
  # -- Additional labels to add to the ServiceMonitor
  additionalLabels: { }
  # -- Additional annotations to add to the ServiceMonitor
  annotations: { }

  # -- BasicAuth credentials for scraping metrics, use API token and any string for username
  # basicAuth:
  #   username: "metrics"
  #   password: ""

# -- Use this option to set a custom DNS policy to the created deployment
dnsPolicy: ""

# -- Use this option to set custom DNS configurations to the created deployment
dnsConfig: { }

plsnotracking commented 3 months ago

Hi, can you tell me what problems you ran into? Using Helm with ArgoCD should work as is. Thanks.

RFlintstone commented 3 months ago

> Hi, can you tell me what problems you ran into? Using Helm with ArgoCD should work as is. Thanks.

It doesn't seem to deploy at all. When I run kubectl against the namespace it should deploy to, it says it couldn't find any resources. Helm deploys it fine on its own, but I can't find any errors regarding the deployment, as ArgoCD thinks it has fetched everything.
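
For example, checking the target namespace (name assumed here) just comes back empty:

kubectl get all -n uptime-kuma
# No resources found in uptime-kuma namespace.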


plsnotracking commented 3 months ago

I'll give it a try tonight and see if I can get it to work. Will report back.

RFlintstone commented 3 months ago

> I'll give it a try tonight and see if I can get it to work. Will report back.

Any updates?

plsnotracking commented 3 months ago

@RFlintstone sorry, I got sidetracked with other issues. I'll definitely try it in a day or two at most.

RFlintstone commented 3 months ago

> @RFlintstone sorry, I got sidetracked with other issues. I'll definitely try it in a day or two at most.

Alright!

jpjonte commented 3 months ago

@RFlintstone it works fine for me with the following config:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: uptime-kuma
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:
    repoURL: https://helm.irsigler.cloud
    chart: uptime-kuma
    targetRevision: 2.18.0
    helm:
      valuesObject:
        ingress:
          enabled: true

          annotations:
            traefik.ingress.kubernetes.io/router.entrypoints: websecure,web
            traefik.ingress.kubernetes.io/router.middlewares: kube-system-https-redirect@kubernetescrd

          hosts:
            - host: uptime.my.domain
              paths:
                - path: /
                  pathType: Prefix

          tls:
            - hosts:
              - uptime.my.domain
  destination:
    namespace: uptime-kuma
    server: https://kubernetes.default.svc
  syncPolicy:
    syncOptions:
      - CreateNamespace=true
    automated:
      prune: true
      selfHeal: true
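
I save this as e.g. uptime-kuma-app.yaml (the file name is arbitrary) and apply it once; ArgoCD reconciles the Application from there:

kubectl apply -f uptime-kuma-app.yaml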

plsnotracking commented 2 months ago

@RFlintstone it worked for me.

Here's my configuration in case it helps:

Chart.yaml

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: uptime-kuma
  namespace: argocd
spec:
  destination:
    namespace: uptime-kuma
    name: enterprise
  project: default
  sources:
    # Chart from Chart Repo
    - chart: uptime-kuma
      repoURL: https://helm.irsigler.cloud
      targetRevision: 2.18.0
      helm:
        valueFiles:
        - $values/enterprise/uptime-kuma/values.yaml
        - $values/enterprise/uptime-kuma/pvc.yaml
    # Values from Git
    - repoURL: 'https://git.enterprise.com/user/argocd'
      targetRevision: HEAD
      ref: values
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
      - Replace=true
      - ServerSideApply=true

values.yaml

service:
  type: LoadBalancer
  port: 3001

volume:
  enabled: true
  accessMode: ReadWriteOnce
  size: 4Gi
  # If you want to use a storage class other than the default, uncomment this
  # line and define the storage class name
  # storageClassName:
  # Reuse your own pre-existing PVC.
  existingClaim: "uptime-kuma"

pvc.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  namespace: uptime-kuma
  name: uptime-kuma
spec:
  storageClassName: slow-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 4Gi

Also, I have an ingress set up if you need help with that. Let me know if I can help you in any way. Thanks.

RFlintstone commented 2 months ago

Thank you, I haven't had the time to try it out yet, but I'm going to try to do this today or tomorrow.

RFlintstone commented 2 months ago

> @RFlintstone it worked for me.
>
> Here's my configuration in case it helps: […]

This didn't seem to work for me (it also didn't deploy as it should). @jpjonte's version did deploy, although the ingress didn't want to serve the app, even though everything was in the right namespace. (And to be honest, I'd rather deploy with plain manifests instead of an argoproj.io/v1alpha1 Application, as it gives you a bit more control.)

dirsigler commented 1 month ago

Hey @RFlintstone, is your deployment now working via ArgoCD, or do you still face some issues?

RFlintstone commented 1 month ago

> Hey @RFlintstone, is your deployment now working via ArgoCD, or do you still face some issues?

Still facing issues, sadly.

dirsigler commented 1 month ago

Ok, I will try to have a look into it. I first need to get an ArgoCD setup running, so sadly I can't give any ETA...

RFlintstone commented 2 weeks ago

> Ok, I will try to have a look into it. I first need to get an ArgoCD setup running, so sadly I can't give any ETA...

Any update? 🙂