kubero-dev / kubero

A free and self-hosted PaaS for Kubernetes
https://demo.kubero.dev
GNU General Public License v3.0

Struggled with Configuring DigitalOcean's Load Balancer; Still Cannot Deploy Applications #405

Open diraneyya opened 1 month ago

diraneyya commented 1 month ago

What would you like to share?

I am currently using the following load-balancer settings on DigitalOcean:

    kubernetes.digitalocean.com/load-balancer-id: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
    service.beta.kubernetes.io/do-loadbalancer-disable-lets-encrypt-dns-records: "false"
    service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
    service.beta.kubernetes.io/do-loadbalancer-redirect-http-to-https: "true"
    service.beta.kubernetes.io/do-loadbalancer-size-unit: "2"
    service.beta.kubernetes.io/do-loadbalancer-tls-passthrough: "true"
    service.beta.kubernetes.io/do-loadbalancer-tls-ports: "443"
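
For context, these DigitalOcean annotations normally live on the LoadBalancer Service that fronts the ingress-nginx controller. Below is a sketch of how they are commonly set through the ingress-nginx Helm chart; the values layout is an assumption on my part, not something taken from my cluster. One detail worth noting: when do-loadbalancer-enable-proxy-protocol is enabled on the load balancer, nginx itself must also be configured to expect the PROXY protocol, otherwise connections break even though the certificate is fine.

    # Hypothetical ingress-nginx Helm values sketch (not my actual values file).
    controller:
      service:
        annotations:
          service.beta.kubernetes.io/do-loadbalancer-enable-proxy-protocol: "true"
          service.beta.kubernetes.io/do-loadbalancer-tls-passthrough: "true"
          service.beta.kubernetes.io/do-loadbalancer-tls-ports: "443"
      config:
        # Required when proxy protocol is enabled on the load balancer;
        # without it nginx cannot parse the incoming connections.
        use-proxy-protocol: "true"

Such a values file would typically be applied with something like helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx -n ingress-nginx -f values.yaml.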

As can be seen at https://kubero.theworkgroup.org, the browser flags the website as insecure even though the certificate is valid:

[screenshot: browser marking the site as not secure despite a valid certificate]

Note that I am new to all of this, so I do not have much of a troubleshooting strategy, but I have read the documentation thoroughly and everything in it checks out fine.
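
One basic check, offered here only as a sketch, is to inspect which certificate is actually served on port 443 for this hostname (the hostname is the one mentioned above; adjust as needed):

    # Print the subject, issuer and validity dates of the certificate presented by the server.
    openssl s_client -connect kubero.theworkgroup.org:443 -servername kubero.theworkgroup.org </dev/null 2>/dev/null \
      | openssl x509 -noout -subject -issuer -dates

If this shows the self-signed "Kubernetes Ingress Controller Fake Certificate" instead of the Let's Encrypt one, the TLS secret is not being picked up by the ingress.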

Additional information

kubero debug outputs the following:

Kubero CLI
kuberoCLIVersion: v2.4.0
OS: darwin
Arch: arm64
goVersion: go1.21.13

Kubernetes
clientVersion:
  buildDate: "2024-06-11T20:29:44Z"
  compiler: gc
  gitCommit: 39683505b630ff2121012f3c5b16215a1449d5ed
  gitTreeState: clean
  gitVersion: v1.30.2
  goVersion: go1.22.4
  major: "1"
  minor: "30"
  platform: darwin/arm64
kustomizeVersion: v5.0.4-0.20230601165947-6ce0bf390ce3
serverVersion:
  buildDate: "2024-09-11T21:22:08Z"
  compiler: gc
  gitCommit: 948afe5ca072329a73c8e79ed5938717a5cb3d21
  gitTreeState: clean
  gitVersion: v1.31.1
  goVersion: go1.22.6
  major: "1"
  minor: "31"
  platform: linux/amd64

Kubero Operator
NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
kubero-operator-controller-manager   1/1     1            1           7h26m

Kubero Operator Image
gcr.io/kubebuilder/kube-rbac-proxy:v0.11.0

Kubero UI
NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE
kubero                                 1/1     1            1           6h36m
kubero-prometheus-kube-state-metrics   1/1     1            1           159m
kubero-prometheus-server               1/1     1            1           6h50m

Kubero UI Ingress
NAME     CLASS   HOSTS                     ADDRESS     PORTS     AGE
kubero   nginx   kubero.theworkgroup.org   localhost   80, 443   6h36m

Kubero UI Secrets
NAME                                      TYPE                 DATA   AGE
kubero-secrets                            Opaque               4      6h39m
kubero-tls                                kubernetes.io/tls    2      21m
registry-basic-auth                       Opaque               1      6h36m
registry-login                            Opaque               3      6h36m
sh.helm.release.v1.example.v272           helm.sh/release.v1   1      89s
sh.helm.release.v1.example.v273           helm.sh/release.v1   1      29s
sh.helm.release.v1.kubero-prometheus.v4   helm.sh/release.v1   1      159m
sh.helm.release.v1.kubero.v1              helm.sh/release.v1   1      6h36m

Kubero UI Image
ghcr.io/kubero-dev/kubero/kubero:latest

Cert Manager
NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
cert-manager              1/1     1            1           6h54m
cert-manager-cainjector   1/1     1            1           6h54m
cert-manager-webhook      1/1     1            1           6h54m

Cert Manager Cluster Issuers
NAME               READY   AGE
letsencrypt-prod   True    6h51m

More command output

NAMESPACE   NAME           APPROVED   DENIED   READY   ISSUER             REQUESTOR                                         AGE
kubero      kubero-tls-1   True                True    letsencrypt-prod   system:serviceaccount:cert-manager:cert-manager   88m

NAME               READY   AGE
letsencrypt-prod   True    7h6m
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"cert-manager.io/v1","kind":"ClusterIssuer","metadata":{"annotations":{},"name":"letsencrypt-prod"},"spec":{"acme":{"email":"info@theworkgroup.org","privateKeySecretRef":{"name":"letsencrypt"},"server":"https://acme-v02.api.letsencrypt.org/directory","solvers":[{"http01":{"ingress":{"class":"nginx"}}}]}}}
  creationTimestamp: "2024-09-17T16:24:58Z"
  generation: 1
  name: letsencrypt-prod
  resourceVersion: "16978"
  uid: 2cc70ab9-8cd1-484c-887b-7b86d8dd38ba
spec:
  acme:
    email: info@theworkgroup.org
    privateKeySecretRef:
      name: letsencrypt
    server: https://acme-v02.api.letsencrypt.org/directory
    solvers:
    - http01:
        ingress:
          class: nginx
status:
  acme:
    lastPrivateKeyHash: r38RkGMnNBTpkfhUf6bvgamWEBNf5t56R/8Jk3srFg4=
    lastRegisteredEmail: info@theworkgroup.org
    uri: https://acme-v02.api.letsencrypt.org/acme/acct/1951867176
  conditions:
  - lastTransitionTime: "2024-09-17T16:24:59Z"
    message: The ACME account was registered with the ACME server
    observedGeneration: 1
    reason: ACMEAccountRegistered
    status: "True"
    type: Ready

Note that I am also not able to deploy any services so far, so I am working on solving one problem at a time. I suspect that the load balancer's configuration is the right place to start, but I am not entirely sure.

diraneyya commented 1 month ago

Note that I am pointing the DNS A records of both my domain and all of the subdomains under it to the load balancer:

[screenshot: DNS A records for the domain and its subdomains pointing to the load balancer]
diraneyya commented 1 month ago

My attempts to deploy anything, no matter how simple, also do not seem to work. I am not sure whether this failure has anything to do with the above, though:

[screenshots: failed deployment attempts]
diraneyya commented 1 month ago

Okay, it might have just needed some time; now it seems okay:

[screenshot: the deployment now appears healthy]

Yet I am still unable to get the main functionality to work, which is deploying containerised applications and, most importantly, using the wonderful templates.

Any clues as to what remains to be configured to get this wonderful thing up and running?

diraneyya commented 1 month ago

More information about the single-container app

kubectl edit kuberoapp flatnotes -n example-production gives the following:

apiVersion: application.kubero.dev/v1alpha1
kind: KuberoApp
metadata:
  creationTimestamp: "2024-09-17T18:49:36Z"
  generation: 4
  labels:
    manager: kubero
  name: flatnotes
  namespace: example-production
  resourceVersion: "142255"
  uid: 10024ceb-562f-4e3c-969e-a72c53bf905d
spec:
  addons: []
  affinity: {}
  autodeploy: true
  autoscale: true
  autoscaling:
    enabled: true
  branch: main
  buildstrategy: plain
  cronjobs: []
  deploymentstrategy: docker
  envVars:
  - name: FLATNOTES_AUTH_TYPE
    value: password
  - name: FLATNOTES_USERNAME
    value: diraneyya
  - name: FLATNOTES_PASSWORD
    value: 5cBKHhP1GWDv8i
  - name: FLATNOTES_SECRET_KEY
    value: zjUEHMbj53I8No9RX9u0yc0hiXPyeL
  extraVolumes:
  - accessMode: ReadWriteOnce
    accessModes:
    - ReadWriteMany
    emptyDir: false
    mountPath: /app/data
    name: data-volume
    size: 0.2Gi
    storageClass: standard
  fullnameOverride: ""
  gitrepo:
    admin: false
    clone_url: ""
    ssh_url: ""
  image:
    build:
      command: npm install
      repository: node
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          add: []
          drop: []
        readOnlyRootFilesystem: false
        runAsGroup: 0
        runAsNonRoot: false
        runAsUser: 1000
      tag: latest
    containerPort: 8080
    fetch:
      repository: ghcr.io/kubero-dev/fetch
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          add: []
          drop: []
        readOnlyRootFilesystem: false
        runAsGroup: 0
        runAsNonRoot: false
        runAsUser: 1000
      tag: v1
    pullPolicy: Always
    repository: dullage/flatnotes:latest
    run:
      command: node index.js
      readOnlyAppStorage: false
      repository: node
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          add: []
          drop: []
        readOnlyRootFilesystem: false
        runAsGroup: 0
        runAsNonRoot: false
        runAsUser: 0
      tag: latest
    tag: latest
  imagePullSecrets: []
  ingress:
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt-prod
      kubernetes.io/tls-acme: "true"
      nginx.ingress.kubernetes.io/force-ssl-redirect: true
    className: nginx
    enabled: true
    hosts:
    - host: flatnotes.example.theworkgroup.org
      paths:
      - path: /
        pathType: ImplementationSpecific
    tls:
    - hosts: []
      secretName: flatnotes-tls
  name: flatnotes
  nameOverride: ""
  nodeSelector: {}
  phase: production
  pipeline: example
  podAnnotations: {}
  podSecurityContext: {}
  podsize:
    default: true
    description: 'Small (CPU: 0.25, Memory: 0.5Gi)'
    name: small
    resources:
      limits:
        cpu: 500m
        memory: 1Gi
      requests:
        cpu: 250m
        memory: 0.5Gi
  replicaCount: 1
  resources:
    limits:
      cpu: 500m
      memory: 1Gi
    requests:
      cpu: 250m
      memory: 0.5Gi
  service:
    port: 80
    type: ClusterIP
  serviceAccount:
    annotations: {}
  sleep: disabled
  tolerations: []
  vulnerabilityscan:
    enabled: false
    image:
      repository: aquasec/trivy
      tag: latest
    schedule: 44 23 * * *
  web:
    autoscaling:
      maxReplicas: 2
      minReplicas: 1
      targetCPUUtilizationPercentage: 80
      targetMemoryUtilizationPercentage: 80
    replicaCount: 1
  worker:
    autoscaling:
      maxReplicas: 0
      minReplicas: 0
      targetCPUUtilizationPercentage: 80
      targetMemoryUtilizationPercentage: 80
    replicaCount: 0
status:
  conditions:
  - lastTransitionTime: "2024-09-17T18:49:36Z"
    status: "True"
    type: Initialized
  - lastTransitionTime: "2024-09-17T18:49:36Z"
    message: 'failed to install release: unable to build kubernetes objects from release
      manifest: unable to decode "": json: cannot unmarshal bool into Go struct field
      ObjectMeta.metadata.annotations of type string'
    reason: InstallError
    status: "True"
    type: ReleaseFailed
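
The InstallError above most likely comes from the unquoted annotation value in the spec: Kubernetes stores metadata.annotations as a map of strings, so the YAML boolean in nginx.ingress.kubernetes.io/force-ssl-redirect: true cannot be decoded, which matches the "cannot unmarshal bool into ... ObjectMeta.metadata.annotations" message. A sketch of the corrected ingress block, with the value quoted so it is treated as a string, would be:

    spec:
      ingress:
        annotations:
          cert-manager.io/cluster-issuer: letsencrypt-prod
          kubernetes.io/tls-acme: "true"
          # Quoted so YAML parses it as a string, not a boolean.
          nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
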
diraneyya commented 1 month ago

I have now tried the Whoogle template, which is a project I am interested in.

The following logs are relevant:

{"level":"info","ts":1726616892.2781477,"logger":"helm.controller","msg":"Installed release","namespace":"kubero","name":"whoogle","apiVersion":"application.kubero.dev/v1alpha1","kind":"KuberoPipeline","release":"whoogle"}
{"level":"info","ts":1726616892.7020073,"logger":"helm.controller","msg":"Upgraded release","namespace":"kubero","name":"whoogle","apiVersion":"application.kubero.dev/v1alpha1","kind":"KuberoPipeline","release":"whoogle","force":false}
{"level":"error","ts":1726616893.0559657,"msg":"Reconciler error","controller":"kuberopipeline-controller","object":{"name":"whoogle","namespace":"kubero"},"namespace":"kubero","name":"whoogle","reconcileID":"7dd1a750-9387-4ffa-89f5-44527f7597e0","error":"Operation cannot be fulfilled on kuberopipelines.application.kubero.dev \"whoogle\": the object has been modified; please apply your changes to the latest version and try again","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.13.0/pkg/internal/controller/controller.go:326\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.13.0/pkg/internal/controller/controller.go:273\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.13.0/pkg/internal/controller/controller.go:234"}
{"level":"info","ts":1726616893.461924,"logger":"helm.controller","msg":"Upgraded release","namespace":"kubero","name":"whoogle","apiVersion":"application.kubero.dev/v1alpha1","kind":"KuberoPipeline","release":"whoogle","force":false}
{"level":"info","ts":1726616893.897856,"logger":"helm.controller","msg":"Upgraded release","namespace":"kubero","name":"whoogle","apiVersion":"application.kubero.dev/v1alpha1","kind":"KuberoPipeline","release":"whoogle","force":false}
{"level":"error","ts":1726616894.2406867,"msg":"Reconciler error","controller":"kuberopipeline-controller","object":{"name":"whoogle","namespace":"kubero"},"namespace":"kubero","name":"whoogle","reconcileID":"f8521e1b-1c97-4e4d-9650-00f3689107df","error":"Operation cannot be fulfilled on kuberopipelines.application.kubero.dev \"whoogle\": the object has been modified; please apply your changes to the latest version and try again","stacktrace":"sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.13.0/pkg/internal/controller/controller.go:326\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.13.0/pkg/internal/controller/controller.go:273\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.13.0/pkg/internal/controller/controller.go:234"}
{"level":"info","ts":1726616894.6749716,"logger":"helm.controller","msg":"Upgraded release","namespace":"kubero","name":"whoogle","apiVersion":"application.kubero.dev/v1alpha1","kind":"KuberoPipeline","release":"whoogle","force":false}
{"level":"info","ts":1726616895.083216,"logger":"helm.controller","msg":"Upgraded release","namespace":"kubero","name":"whoogle","apiVersion":"application.kubero.dev/v1alpha1","kind":"KuberoPipeline","release":"whoogle","force":false}
{"level":"info","ts":1726616953.8560119,"logger":"helm.controller","msg":"Installed release","namespace":"whoogle-production","name":"whoogle","apiVersion":"application.kubero.dev/v1alpha1","kind":"KuberoApp","release":"whoogle"}
{"level":"info","ts":1726616954.2661686,"logger":"helm.controller","msg":"Reconciled release","namespace":"whoogle-production","name":"whoogle","apiVersion":"application.kubero.dev/v1alpha1","kind":"KuberoApp","release":"whoogle"}

And here is a screenshot:

[screenshot: the Whoogle app in the Kubero UI]

And of course, evidence of the issue:

[screenshot: the reported issue]

Sorry for spamming you but I am quite interested in this project and very excited to get it up and running!

diraneyya commented 1 month ago

Can I reopen this?

mms-gianni commented 1 month ago

@diraneyya sure.

If you need some quick help, you might also join the Discord chat ... I think you are close to having it running.

  1. The Metrics Server seems to be missing. Installing it as described here might fix the issue: https://docs.kubero.dev/Installation/#metrics-server (I am surprised these errors pop up when it is not installed). See the sketch after this list for a quick way to check and install it.
  2. You need to select the right storage class for your volume (I'll remove it from the template).
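
For reference, a minimal sketch of how to check for and install the Metrics Server with the upstream manifest (the namespace and manifest URL are the usual upstream defaults, not anything Kubero-specific):

    # Check whether the Metrics Server deployment exists (it normally lives in kube-system).
    kubectl get deployment metrics-server -n kube-system

    # If it is missing, applying the upstream manifest is one common way to install it:
    kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

    # Verify that metrics are actually being served:
    kubectl top nodes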
diraneyya commented 1 month ago

@mms-gianni thank you for your empathy and understanding; yes, I probably should get on the Discord.

I have followed these steps a dozen times. Please see the output below:

[screenshot: output of the Metrics Server installation steps]

Yet:

[screenshot: the same errors still appear]

As for the storage class, I changed it to do-storage, but was then told that this type of storage does not support the ReadWriteMany access mode.

Looking forward to your insight on this. I have not been this excited about an open-source project for a while and am really thrilled to get this up and running. Thank you!

mms-gianni commented 1 month ago

Yes. Many Kubernetes providers do not support the "ReadWriteMany" access mode. This is a general Kubernetes topic and must be handled by third-party solutions.

Here are three options, but most of them add some complexity.
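
Independent of those options, if a single replica is enough for the app, a volume that requests ReadWriteOnce against DigitalOcean's default block-storage class sidesteps the problem. A sketch of such a volume entry, mirroring the extraVolumes structure shown earlier (the do-block-storage class name assumes a stock DOKS cluster):

    extraVolumes:
    - name: data-volume
      mountPath: /app/data
      emptyDir: false
      accessMode: ReadWriteOnce
      accessModes:
      - ReadWriteOnce                  # DigitalOcean block storage only supports RWO
      size: 1Gi
      storageClass: do-block-storage   # default CSI storage class on DOKS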