robjuz / helm-charts

https://robjuz.github.io/helm-charts/index.yaml
34 stars 30 forks

[kimai] Volume cannot be attached and configuration changes are not applied #62

Open rhizoet opened 12 months ago

rhizoet commented 12 months ago

We have deployed the latest version of Kimai on Kubernetes.

Problem

Now if we, for example, update the image.tag value to the current version, the new pod cannot start because it tries to attach the volume. This fails because the volume is still attached to the currently running pod. Only after the old pod's ReplicaSet is deleted can the new pod attach the volume and start.

Additionally, changes to the configuration value are not applied during a helm upgrade. The pod is not even restarted. Editing the file inside the pod is also not possible because the file system is read-only.
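For context, the usual Helm pattern for making pods roll when configuration changes (a sketch, not necessarily what this chart does; the template path and ConfigMap name are assumptions) is a checksum annotation on the pod template. Any change to the rendered ConfigMap then changes the pod spec and triggers a new rollout:

```yaml
# Hypothetical fragment of the chart's deployment.yaml template.
# "configmap.yaml" is an assumed template name.
spec:
  template:
    metadata:
      annotations:
        # Hash of the rendered config; changes whenever the config changes,
        # forcing Kubernetes to replace the pod on helm upgrade.
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
```

Without such an annotation, a helm upgrade that only changes a ConfigMap leaves the Deployment's pod template untouched, so no rollout happens.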

What should it look like

The existing pod should release the volume so that it can be attached to the new pod, which can then start.

Each helm upgrade should re-read the configuration value so that changes take effect. Currently the database and Kimai must be reinitialized, which is not a solution once there is data in them.

robjuz commented 12 months ago

The volume problem can easily be worked around by setting

updateStrategy:
  type: Recreate

which deletes the old pod before creating the new one, so the volume can be reattached,
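For reference, with that value the chart's Deployment should render roughly as follows (a sketch; the exact layout depends on the chart's templates, and the volume being ReadWriteOnce is an assumption based on the symptoms described above):

```yaml
# Hypothetical rendered Deployment fragment.
apiVersion: apps/v1
kind: Deployment
spec:
  strategy:
    # The old pod is terminated before the new one is created,
    # releasing a ReadWriteOnce volume so the new pod can attach it.
    type: Recreate
```

This trades a brief downtime during upgrades for a rollout that cannot deadlock on the volume attachment.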

or by setting

podAffinityPreset: hard

which should force the new pod to be scheduled on the same node.
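If this preset follows the common Bitnami-style convention (an assumption; the label values below are hypothetical), it renders a required pod affinity that pins replicas to the same node, so a node-local volume can be attached by both the old and the new pod:

```yaml
# Hypothetical rendered affinity for podAffinityPreset: hard.
affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app.kubernetes.io/name: kimai2   # assumed release labels
        topologyKey: kubernetes.io/hostname  # "same node" constraint
```

Note this only helps when the attachment problem is node-locality; it does not release a volume that is exclusively attached to a still-running pod.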


I need to take a look at the configuration file problem, but you certainly don't need to delete your database.

rhizoet commented 11 months ago

Any news on the configuration file problem? I cannot update the chart once I have set the configuration: |- value. The pod does not pick up the change. This is especially important for the SAML config.

robjuz commented 11 months ago

I'm on holiday.

Have you tried to delete the pod?

rhizoet commented 11 months ago

Ah okay, happy holidays.

I've deleted the pod several times, but that doesn't change anything.

rhizoet commented 9 months ago

Any news on this?

robjuz commented 9 months ago

I updated the chart recently. Please try the latest version.


rhizoet commented 9 months ago

No change regarding the configuration problem. I updated the title in the saml part of configuration: |-, but nothing changed. Or should I do it differently now?

robjuz commented 9 months ago

Could you provide some more info about your infrastructure? And maybe a simplified version of your deployment process. Thx.

rhizoet commented 9 months ago

Sure, happy to:

We run Kimai on a K8s cluster at version 1.26.9.

Deployment is done with Helm from the command line, using a values.yaml with the following content:

kimaiAppSecret: secret
kimaiAdminEmail: admin@mail.com
kimaiAdminPassword: password
ingress:
    enabled: true
    annotations:
        kubernetes.io/ingress.class: nginx
        cert-manager.io/cluster-issuer: letsencrypt-prod
    hostname: kimai.example.com
    tls: true
updateStrategy:
    type: Recreate
configuration: |-
    kimai:
      user:
        registration: false
      saml:
        provider: zitadel
        activate: true
        title: Login with auth
        mapping:
          - { saml: $Email, kimai: email }
          - { saml: $FirstName $SurName, kimai: alias }
        roles:
          resetOnLogin: true
          attribute: Roles
          mapping:
            - { saml: Admin, kimai: ROLE_ADMIN }
            - { saml: Management, kimai: ROLE_TEAMLEAD }
        connection:
          idp:
            entityId: "https://auth.example.com/saml/v2/metadata"
            singleSignOnService:
              url: "https://auth.example.com/saml/v2/SSO"
              binding: "urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect"
            x509cert: "CERT"
          sp:
            entityId: "https://kimai.example.com/"
            assertionConsumerService:
              url: "https://kimai.example.com/auth/saml/acs"
              binding: "urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST"
            singleLogoutService:
              url: "https://kimai.example.com/auth/saml/logout"
              binding: "urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect"
          baseurl: "https://kimai.example.com/auth/saml/"
          strict: false
          debug: true
          security:
            nameIdEncrypted: false
            authnRequestsSigned: false
            logoutRequestSigned: false
            logoutResponseSigned: false
            wantMessagesSigned: false
            wantAssertionsSigned: false
            wantNameIdEncrypted: false
            requestedAuthnContext: true
            signMetadata: false
            wantXMLValidation: true
            signatureAlgorithm: "http://www.w3.org/2001/04/xmldsig-more#rsa-sha256"
            digestAlgorithm: "http://www.w3.org/2001/04/xmlenc#sha256"
          contactPerson:
            technical:
              givenName: "Kimai Admin"
              emailAddress: "support@example.com"
            support:
              givenName: "Kimai Support"
              emailAddress: "support@example.com"
          organization:
            en:
              name: "kimai"
              displayname: "Kimai"
              url: "https://kimai.example.com"

Then we run helm upgrade -i kimai -f values.yaml --create-namespace -n kimai robjuz/kimai2.

The K8s cluster itself runs on OpenStack, which we also operate ourselves in our own data center.