infinispan / infinispan-helm-charts


disabling auth doesn't seem to work #74

Open · makdeniss opened this issue 1 year ago

makdeniss commented 1 year ago

When trying to fully disable auth and then querying the cache using the Quarkus client, the following error occurs: ISPN005003: Exception reported java.lang.SecurityException: ISPN000287: Unauthorized access: subject 'null' lacks 'CREATE' permission

This is the chart values config:

infinispan:
  # Default values for infinispan-helm-charts.
  # This is a YAML-formatted file.
  # Declare variables to be passed into your templates.

  images:
    # [USER] The container images for server pods.
    server: quay.io/infinispan/server:14.0
    initContainer: registry.access.redhat.com/ubi8-micro

  deploy:
    # [USER] Specify the number of nodes in the cluster.
    replicas: 1

    container:
      extraJvmOpts: ""
      storage:
        size: 1Gi
        storageClassName: ""
        # [USER] Set `ephemeral: true` to delete all persisted data when clusters shut down or restart.
        ephemeral: false
      resources:
        # [USER] Specify the CPU limit and the memory limit for each pod.
        limits:
          cpu: 500m
          memory: 512Mi
        # [USER] Specify the maximum CPU requests and the maximum memory requests for each pod.
        requests:
          cpu: 500m
          memory: 512Mi

    security:
      secretName: ""
      batch: ""

    expose:
      # [USER] Specify `type: ""` to disable network access to clusters.
      type: ""
      nodePort: 0
      host: ""
      annotations: [ ]

    monitoring:
      enabled: true

    logging:
      categories:
        # [USER] Specify the FQN of a package from which you want to collect logs.
        - category: com.arjuna
          # [USER] Specify the level of log messages.
          level: warn
        # No need to warn about not being able to TLS/SSL handshake
        - category: io.netty.handler.ssl.ApplicationProtocolNegotiationHandler
          level: error

    makeDataDirWritable: false

    nameOverride: ""

    resourceLabels: [ ]

    podLabels: [ ]

    svcLabels: [ ]

    infinispan:
      cacheContainer:
        # [USER] Add cache, template, and counter configuration.
        name: default
        # [USER] Specify `security: null` to disable security authorization.
        security: null
        transport:
          cluster: ${infinispan.cluster.name:cluster}
          node-name: ${infinispan.node.name:}
          stack: kubernetes
      server:
        endpoints:
          # [USER] Hot Rod and REST endpoints.
          - securityRealm: default
            socketBinding: default
            connectors:
              rest:
                restConnector:
              hotrod:
                hotrodConnector:
              # [MEMCACHED] Uncomment to enable Memcached endpoint
              # memcached:
              #   memcachedConnector:
              #     socketBinding: memcached
          # [METRICS] Metrics endpoint for cluster monitoring capabilities.
          - connectors:
              rest:
                restConnector:
                  authentication:
                    mechanisms: BASIC
            securityRealm: metrics
            socketBinding: metrics
        interfaces:
          - inetAddress:
              value: ${infinispan.bind.address:127.0.0.1}
            name: public
        security:
          credentialStores:
            - clearTextCredential:
                clearText: secret
              name: credentials
              path: credentials.pfx
          securityRealms:
            # [USER] Security realm for the Hot Rod and REST endpoints.
            - name: default
              # [USER] Comment or remove this properties realm to disable authentication.
#              propertiesRealm:
#                groupProperties:
#                  path: groups.properties
#                groupsAttribute: Roles
#                userProperties:
#                  path: users.properties
              # [METRICS] Security realm for the metrics endpoint.
            - name: metrics
              propertiesRealm:
                groupProperties:
                  path: metrics-groups.properties
                  relativeTo: infinispan.server.config.path
                groupsAttribute: Roles
                userProperties:
                  path: metrics-users.properties
                  relativeTo: infinispan.server.config.path
        socketBindings:
          defaultInterface: public
          portOffset: ${infinispan.socket.binding.port-offset:0}
          socketBinding:
            # [USER] Socket binding for the Hot Rod and REST endpoints.
            - name: default
              port: 11222
              # [METRICS] Socket binding for the metrics endpoint.
            - name: metrics
              port: 11223
              # [MEMCACHED] Uncomment to enable Memcached endpoint
          # - name: memcached
          #   port: 11221

It's an empty Infinispan instance, so the client should be able to create the cache automatically. This works when running Infinispan via docker compose with a custom config where security is disabled as per the docs: https://infinispan.org/docs/stable/titles/security/security.html

Also, if I examine the infinispan.xml settings file inside the container, I can see that it still contains the default auth-enabled settings. So either the above config to disable security had no effect, or I did it incorrectly.

ryanemerson commented 1 year ago

I have deployed a Helm chart using your values.yaml [0] and I can confirm that authentication is disabled: I was able to create a cache via the Infinispan console [1].

Also, if I examine the infinispan.xml settings file inside the container, I can see that it still contains the default auth-enabled settings. So either the above config to disable security had no effect, or I did it incorrectly.

The configuration provided via your values.yaml is mounted at /etc/config/infinispan.yml and the server args are updated to use this configuration.
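(A quick way to verify this on a running pod; namespace and pod name are placeholders:)

kubectl -n <namespace> exec <pod-name> -- cat /etc/config/infinispan.yml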

How have you configured the Quarkus client?

[0] I had to remove the initial "infinispan" element, as this is unexpected by the chart.
[1] I used port-forwarding to easily test this: kubectl -n <namespace> port-forward svc/<chart-name> 11222, then connect to http://localhost:11222 in your local browser.

makdeniss commented 1 year ago

[0] I had to remove the initial "infinispan" element, as this is unexpected by the chart.

What do you mean by this? Can you provide your values.yaml file for comparison?

With Quarkus it's pretty simple:

quarkus.infinispan-client.server-list=infinispan:11222
quarkus.infinispan-client.use-auth=false
quarkus.infinispan-client.client-intelligence=BASIC

As I mentioned before, this works fine with docker-compose, so I guess something is wrong with my chart config.
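(For reference, this is roughly the kind of client call that triggers ISPN000287: creating a cache on first access requires the CREATE permission, which an anonymous ('null') subject lacks while server-side authorization is enabled. A minimal sketch using the plain Hot Rod API; host, port, and cache name are illustrative:)

import org.infinispan.client.hotrod.DefaultTemplate;
import org.infinispan.client.hotrod.RemoteCache;
import org.infinispan.client.hotrod.RemoteCacheManager;
import org.infinispan.client.hotrod.configuration.ConfigurationBuilder;

public class CacheBootstrap {
    public static void main(String[] args) {
        // Mirrors quarkus.infinispan-client.server-list=infinispan:11222
        // with no authentication configured (use-auth=false).
        ConfigurationBuilder builder = new ConfigurationBuilder();
        builder.addServer().host("infinispan").port(11222);
        try (RemoteCacheManager rcm = new RemoteCacheManager(builder.build())) {
            // This is the call that needs the CREATE permission on the server:
            RemoteCache<String, String> cache = rcm.administration()
                    .getOrCreateCache("mycache", DefaultTemplate.DIST_SYNC);
            cache.put("hello", "world");
        }
    }
}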

ryanemerson commented 1 year ago

What do you mean by this? Can you provide your values.yaml file for comparison?

Sure:

# Default values for infinispan-helm-charts.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

images:
  # [USER] The container images for server pods.
  server: quay.io/infinispan/server:14.0
  initContainer: registry.access.redhat.com/ubi8-micro

deploy:
  # [USER] Specify the number of nodes in the cluster.
  replicas: 1

  container:
    extraJvmOpts: ""
    storage:
      size: 1Gi
      storageClassName: ""
      # [USER] Set `ephemeral: true` to delete all persisted data when clusters shut down or restart.
      ephemeral: false
    resources:
      # [USER] Specify the CPU limit and the memory limit for each pod.
      limits:
        cpu: 500m
        memory: 512Mi
      # [USER] Specify the maximum CPU requests and the maximum memory requests for each pod.
      requests:
        cpu: 500m
        memory: 512Mi

  security:
    secretName: ""
    batch: ""

  expose:
    # [USER] Specify `type: ""` to disable network access to clusters.
    type: ""
    nodePort: 0
    host: ""
    annotations: [ ]

  monitoring:
    enabled: true

  logging:
    categories:
      # [USER] Specify the FQN of a package from which you want to collect logs.
      - category: com.arjuna
        # [USER] Specify the level of log messages.
        level: warn
      # No need to warn about not being able to TLS/SSL handshake
      - category: io.netty.handler.ssl.ApplicationProtocolNegotiationHandler
        level: error

  makeDataDirWritable: false

  nameOverride: ""

  resourceLabels: [ ]

  podLabels: [ ]

  svcLabels: [ ]

  infinispan:
    cacheContainer:
      # [USER] Add cache, template, and counter configuration.
      name: default
      # [USER] Specify `security: null` to disable security authorization.
      security: null
      transport:
        cluster: ${infinispan.cluster.name:cluster}
        node-name: ${infinispan.node.name:}
        stack: kubernetes
    server:
      endpoints:
        # [USER] Hot Rod and REST endpoints.
        - securityRealm: default
          socketBinding: default
          connectors:
            rest:
              restConnector:
            hotrod:
              hotrodConnector:
            # [MEMCACHED] Uncomment to enable Memcached endpoint
            # memcached:
            #   memcachedConnector:
            #     socketBinding: memcached
        # [METRICS] Metrics endpoint for cluster monitoring capabilities.
        - connectors:
            rest:
              restConnector:
                authentication:
                  mechanisms: BASIC
          securityRealm: metrics
          socketBinding: metrics
      interfaces:
        - inetAddress:
            value: ${infinispan.bind.address:127.0.0.1}
          name: public
      security:
        credentialStores:
          - clearTextCredential:
              clearText: secret
            name: credentials
            path: credentials.pfx
        securityRealms:
          # [USER] Security realm for the Hot Rod and REST endpoints.
          - name: default
            # [USER] Comment or remove this properties realm to disable authentication.
#              propertiesRealm:
#                groupProperties:
#                  path: groups.properties
#                groupsAttribute: Roles
#                userProperties:
#                  path: users.properties
            # [METRICS] Security realm for the metrics endpoint.
          - name: metrics
            propertiesRealm:
              groupProperties:
                path: metrics-groups.properties
                relativeTo: infinispan.server.config.path
              groupsAttribute: Roles
              userProperties:
                path: metrics-users.properties
                relativeTo: infinispan.server.config.path
      socketBindings:
        defaultInterface: public
        portOffset: ${infinispan.socket.binding.port-offset:0}
        socketBinding:
          # [USER] Socket binding for the Hot Rod and REST endpoints.
          - name: default
            port: 11222
            # [METRICS] Socket binding for the metrics endpoint.
          - name: metrics
            port: 11223
            # [MEMCACHED] Uncomment to enable Memcached endpoint
        # - name: memcached
        #   port: 11221

makdeniss commented 1 year ago

Ok, so I am doing it a bit differently: my values.yaml contains other values for my main deployment, of which the infinispan chart is a part. Here's an example:

# Default values for scale-bootstrap.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

dockerRegistry: "xxxxx"

namespace: "xxxx"

appVersion: "1.0.0-SNAPSHOT"

helm:
  release:
    namespace:
      suffix:

spec:
  type:
    service: LoadBalancer
  containers:
    imagePullPolicy: Always

ports:
  service: 80

quarkus:
  log:
    level: INFO
    min-level: DEBUG

  infinispan-client:
    server-list: infinispan:11222
    client-intelligence: BASIC

infinispan:
  # Default values for infinispan-helm-charts.
  # This is a YAML-formatted file.
  # Declare variables to be passed into your templates.

  images:
    # [USER] The container images for server pods.
    server: quay.io/infinispan/server:14.0
    initContainer: registry.access.redhat.com/ubi8-micro

  deploy:
    # [USER] Specify the number of nodes in the cluster.
    replicas: 1

    container:
      extraJvmOpts: ""
      storage:
        size: 1Gi
        storageClassName: ""
        # [USER] Set `ephemeral: true` to delete all persisted data when clusters shut down or restart.
        ephemeral: false
      resources:
        # [USER] Specify the CPU limit and the memory limit for each pod.
        limits:
          cpu: 500m
          memory: 512Mi
        # [USER] Specify the maximum CPU requests and the maximum memory requests for each pod.
        requests:
          cpu: 500m
          memory: 512Mi

    security:
      secretName: ""
      batch: "user create -p password" # FIXME: disable auth or find a way to fetch these from secrets

    expose:
      # [USER] Specify `type: ""` to disable network access to clusters.
      type: ""
      nodePort: 0
      host: ""
      annotations: [ ]

    monitoring:
      enabled: true

    logging:
      categories:
        # [USER] Specify the FQN of a package from which you want to collect logs.
        - category: com.arjuna
          # [USER] Specify the level of log messages.
          level: warn
        # No need to warn about not being able to TLS/SSL handshake
        - category: io.netty.handler.ssl.ApplicationProtocolNegotiationHandler
          level: error

    makeDataDirWritable: false

    nameOverride: "mcng-access-infinispan"

    resourceLabels: [ ]

    podLabels: [ ]

    svcLabels: [ ]

    infinispan:
    ....

I cannot remove the infinispan element as you did, because then the values will not be picked up by the infinispan chart dependency schema...

ryanemerson commented 1 year ago

Ok, so I am doing it a bit differently: my values.yaml contains other values for my main deployment, of which the infinispan chart is a part.

Maybe this is a common strategy that I'm not familiar with, but won't this break the Infinispan .tpl logic? For example, our template logic depends upon variables such as {{ .Values.deploy.replicas }}. Or are you also pulling the templates locally and updating them?

makdeniss commented 1 year ago

I don't update the .tpl logic. As I understand it, we can override the chart dependency values using the name of the chart, as I defined it above. It seems to work (at least partially): nameOverride: "something" indeed changes the name of the pod, and I also see the deployment react to security changes when I simply define a user and password in the values.yaml.
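(For context, this is how Helm scopes subchart values: everything under the parent key matching the dependency's name or alias is handed to the subchart as its own .Values, so a template reference such as {{ .Values.deploy.replicas }} keeps resolving. A minimal sketch:)

# Parent values.yaml (sketch): the top-level key must match the
# dependency name/alias declared in Chart.yaml.
infinispan:
  deploy:
    replicas: 1   # the infinispan subchart reads this as .Values.deploy.replicas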

Example:

# Default values for scale-bootstrap.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

dockerRegistry: "xxxx"

helm:
  release:
    namespace:
      suffix:

spec:
  type:
    service: LoadBalancer
  containers:
    imagePullPolicy: Always

ports:
  service: 80

quarkus:
  log:
    level: INFO
    min-level: DEBUG

  infinispan-client:
    server-list: infinispan:11222
    client-intelligence: BASIC

infinispan:
  # Default values for infinispan-helm-charts.
  # This is a YAML-formatted file.
  # Declare variables to be passed into your templates.

  images:
    # [USER] The container images for server pods.
    server: quay.io/infinispan/server:14.0
    initContainer: registry.access.redhat.com/ubi8-micro

  deploy:
    # [USER] Specify the number of nodes in the cluster.
    replicas: 1

    container:
      extraJvmOpts: ""
      storage:
        size: 1Gi
        storageClassName: ""
        # [USER] Set `ephemeral: true` to delete all persisted data when clusters shut down or restart.
        ephemeral: false
      resources:
        # [USER] Specify the CPU limit and the memory limit for each pod.
        limits:
          cpu: 500m
          memory: 512Mi
        # [USER] Specify the maximum CPU requests and the maximum memory requests for each pod.
        requests:
          cpu: 500m
          memory: 512Mi

    security:
      secretName: ""
      batch: "user create admin -p password"

    expose:
      # [USER] Specify `type: ""` to disable network access to clusters.
      type: ""
      nodePort: 0
      host: ""
      annotations: [ ]

    monitoring:
      enabled: true

    logging:
      categories:
        # [USER] Specify the FQN of a package from which you want to collect logs.
        - category: com.arjuna
          # [USER] Specify the level of log messages.
          level: warn
        # No need to warn about not being able to TLS/SSL handshake
        - category: io.netty.handler.ssl.ApplicationProtocolNegotiationHandler
          level: error

    makeDataDirWritable: false

    nameOverride: "something"

    resourceLabels: [ ]

    podLabels: [ ]

    svcLabels: [ ]

    infinispan:
      cacheContainer:
        # [USER] Add cache, template, and counter configuration.
        name: default
        # [USER] Specify `security: null` to disable security authorization.
        security:
          authorization: { }
        transport:
          cluster: ${infinispan.cluster.name:cluster}
          node-name: ${infinispan.node.name:}
          stack: kubernetes
      server:
        endpoints:
          # [USER] Hot Rod and REST endpoints.
          - securityRealm: default
            socketBinding: default
            connectors:
              rest:
                restConnector:
              hotrod:
                hotrodConnector:
              # [MEMCACHED] Uncomment to enable Memcached endpoint
              # memcached:
              #   memcachedConnector:
              #     socketBinding: memcached
          # [METRICS] Metrics endpoint for cluster monitoring capabilities.
          - connectors:
              rest:
                restConnector:
                  authentication:
                    mechanisms: BASIC
            securityRealm: metrics
            socketBinding: metrics
        interfaces:
          - inetAddress:
              value: ${infinispan.bind.address:127.0.0.1}
            name: public
        security:
          credentialStores:
            - clearTextCredential:
                clearText: secret
              name: credentials
              path: credentials.pfx
          securityRealms:
            # [USER] Security realm for the Hot Rod and REST endpoints.
            - name: default
              # [USER] Comment or remove this properties realm to disable authentication.
              propertiesRealm:
                groupProperties:
                  path: groups.properties
                groupsAttribute: Roles
                userProperties:
                  path: users.properties
              # [METRICS] Security realm for the metrics endpoint.
            - name: metrics
              propertiesRealm:
                groupProperties:
                  path: metrics-groups.properties
                  relativeTo: infinispan.server.config.path
                groupsAttribute: Roles
                userProperties:
                  path: metrics-users.properties
                  relativeTo: infinispan.server.config.path
        socketBindings:
          defaultInterface: public
          portOffset: ${infinispan.socket.binding.port-offset:0}
          socketBinding:
            # [USER] Socket binding for the Hot Rod and REST endpoints.
            - name: default
              port: 11222
              # [METRICS] Socket binding for the metrics endpoint.
            - name: metrics
              port: 11223
              # [MEMCACHED] Uncomment to enable Memcached endpoint
          # - name: memcached
          #   port: 11221

So the above deployment works without any trouble. As a result, we have an Infinispan pod with an admin user "admin" whose password is set to "password"...

ryanemerson commented 1 year ago

@makdeniss I see that you have infinispan.security.authorization: { }; this should be null, not an empty object. This could explain why requests are being rejected even though authentication is not configured.
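(Side by side, under the chart's cacheContainer block: an empty object enables authorization with all of its defaults, while null removes it entirely.)

# Enables authorization with the default roles and mappers:
security:
  authorization: {}

# Disables authorization:
security: null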

makdeniss commented 1 year ago

@ryanemerson the latest example is meant to show you that I am defining the chart correctly when I want to create an admin user. The example in the first post is the one that doesn't work. So my definition is correct overall, but disabling security does not work when I do it the way I showed in the initial post.

ryanemerson commented 1 year ago

@makdeniss I see. As disabling authentication works with the values.yaml I provided, I can only assume that the modified values.yaml structure is causing issues somehow. Can you paste the output of /etc/config/infinispan.yml? This is also contained within the <chart-name>-configuration ConfigMap.
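(For example, with placeholder names:)

kubectl -n <namespace> get configmap <chart-name>-configuration -o yaml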

Also, if you enable debug logging for "org.infinispan.SERVER" the pod output will show the exact configuration that is parsed by the server:

  logging:
    categories:
      - category: org.infinispan.SERVER
        level: debug

Can you provide a full server log?

makdeniss commented 1 year ago

@ryanemerson sorry for the delay. Here's the infinispan.yml file from /etc/config when using the chart values with security disabled:

infinispan:
  cacheContainer:
    name: default
    security:
      authorization: {}
    transport:
      cluster: ${infinispan.cluster.name:cluster}
      node-name: ${infinispan.node.name:}
      stack: kubernetes
  server:
    endpoints:
    - connectors:
        hotrod:
          hotrodConnector: null
        rest:
          restConnector: null
      securityRealm: default
      socketBinding: default
    - connectors:
        rest:
          restConnector:
            authentication:
              mechanisms: BASIC
      securityRealm: metrics
      socketBinding: metrics
    interfaces:
    - inetAddress:
        value: ${infinispan.bind.address:127.0.0.1}
      name: public
    security:
      credentialStores:
      - clearTextCredential:
          clearText: secret
        name: credentials
        path: credentials.pfx
      securityRealms:
      - name: default
      - name: metrics
        propertiesRealm:
          groupProperties:
            path: metrics-groups.properties
            relativeTo: infinispan.server.config.path
          groupsAttribute: Roles
          userProperties:
            path: metrics-users.properties
            relativeTo: infinispan.server.config.path
    socketBindings:
      defaultInterface: public
      portOffset: ${infinispan.socket.binding.port-offset:0}
      socketBinding:
      - name: default
        port: 11222
      - name: metrics
        port: 11223

To me it seems that the configuration for disabling security is being ignored :/ The configuration inside the Infinispan container also still contains the security settings, as if they were left untouched, unfortunately :/

Infinispan server logs:

2023-04-11 12:50:30,035 INFO  (main) [BOOT] JVM OpenJDK 64-Bit Server VM Red Hat, Inc. 17.0.6+10-LTS
2023-04-11 12:50:30,039 INFO  (main) [BOOT] JVM arguments = [-server, --add-exports, java.naming/com.sun.jndi.ldap=ALL-UNNAMED, -Xlog:gc*:file=/opt/infinispan/server/log/gc.log:time,uptimemillis:filecount=5,filesize=3M, -Djgroups.dns.query=infinispan-ping.svc.cluster.local, -Xmx256m, -XX:+ExitOnOutOfMemoryError, -XX:MetaspaceSize=32m, -XX:MaxMetaspaceSize=96m, -Djava.net.preferIPv4Stack=true, -Djava.awt.headless=true, -Dvisualvm.display.name=infinispan-server, -Djava.util.logging.manager=org.infinispan.server.loader.LogManager, -Dinfinispan.server.home.path=/opt/infinispan, -classpath, :/opt/infinispan/boot/infinispan-server-runtime-14.0.8.Final-loader.jar, org.infinispan.server.loader.Loader, org.infinispan.server.Bootstrap, --cluster-name=infinispan, --server-config=/etc/config/infinispan.yml, --logging-config=/etc/config/log4j2.xml, --bind-address=0.0.0.0]
2023-04-11 12:50:30,040 INFO  (main) [BOOT] PID = 173
2023-04-11 12:50:30,074 INFO  (main) [org.infinispan.SERVER] ISPN080000: Infinispan Server 14.0.8.Final starting
2023-04-11 12:50:30,074 INFO  (main) [org.infinispan.SERVER] ISPN080017: Server configuration: /etc/config/infinispan.yml
2023-04-11 12:50:30,074 INFO  (main) [org.infinispan.SERVER] ISPN080032: Logging configuration: /etc/config/log4j2.xml
2023-04-11 12:50:30,462 DEBUG (main) [org.infinispan.SERVER] Using endpoint realm "default" for Hot Rod
2023-04-11 12:50:30,510 DEBUG (main) [org.infinispan.SERVER] Actual configuration: <?xml version="1.0"?>
<infinispan xmlns="urn:infinispan:config:14.0">
    <jgroups transport="org.infinispan.remoting.transport.jgroups.JGroupsTransport"/>
    <cache-container name="default" shutdown-hook="DONT_REGISTER" statistics="false">
        <transport cluster="infinispan" node-name="" stack="kubernetes"/>
        <security>
            <authorization audit-logger="org.infinispan.security.audit.LoggingAuditLogger">
                <cluster-role-mapper/>
                <roles>
                    <role name="observer" permissions="ALL_READ MONITOR"/>
                    <role name="___script_manager" permissions="CREATE"/>
                    <role name="application" permissions="ALL_WRITE ALL_READ LISTEN MONITOR EXEC"/>
                    <role name="admin" permissions="ALL"/>
                    <role name="monitor" permissions="MONITOR"/>
                    <role name="deployer" permissions="CREATE ALL_WRITE ALL_READ LISTEN MONITOR EXEC"/>
                    <role name="___schema_manager" permissions="CREATE"/>
                </roles>
            </authorization>
        </security>
        <global-state>
            <persistent-location path="/opt/infinispan/server/data"/>
            <shared-persistent-location path="/opt/infinispan/server/data"/>
            <overlay-configuration-storage/>
        </global-state>
        <caches>
            <replicated-cache-configuration name="org.infinispan.REPL_ASYNC" mode="ASYNC" statistics="true">
                <locking concurrency-level="1000" acquire-timeout="15000" striping="false"/>
                <state-transfer timeout="60000"/>
            </replicated-cache-configuration>
            <scattered-cache-configuration name="org.infinispan.SCATTERED_SYNC" invalidation-batch-size="128" bias-acquisition="ON_WRITE" bias-lifespan="300000" mode="SYNC" remote-timeout="17500" statistics="true">
                <locking concurrency-level="1000" acquire-timeout="15000" striping="false"/>
            </scattered-cache-configuration>
            <distributed-cache-configuration name="org.infinispan.DIST_SYNC" mode="SYNC" remote-timeout="17500" statistics="true">
                <locking concurrency-level="1000" acquire-timeout="15000" striping="false"/>
                <state-transfer timeout="60000"/>
            </distributed-cache-configuration>
            <invalidation-cache-configuration name="org.infinispan.INVALIDATION_ASYNC" mode="ASYNC" statistics="true">
                <locking concurrency-level="1000" acquire-timeout="15000" striping="false"/>
            </invalidation-cache-configuration>
            <local-cache-configuration name="org.infinispan.LOCAL" statistics="true">
                <locking concurrency-level="1000" acquire-timeout="15000" striping="false"/>
            </local-cache-configuration>
            <invalidation-cache-configuration name="org.infinispan.INVALIDATION_SYNC" mode="SYNC" remote-timeout="17500" statistics="true">
                <locking concurrency-level="1000" acquire-timeout="15000" striping="false"/>
            </invalidation-cache-configuration>
            <replicated-cache-configuration name="org.infinispan.REPL_SYNC" mode="SYNC" remote-timeout="17500" statistics="true">
                <locking concurrency-level="1000" acquire-timeout="15000" striping="false"/>
                <state-transfer timeout="60000"/>
            </replicated-cache-configuration>
            <distributed-cache-configuration name="example.PROTOBUF_DIST" mode="SYNC" remote-timeout="17500" statistics="true">
                <encoding media-type="application/x-protostream"/>
                <locking concurrency-level="1000" acquire-timeout="15000" striping="false"/>
                <state-transfer timeout="60000"/>
            </distributed-cache-configuration>
            <distributed-cache-configuration name="org.infinispan.DIST_ASYNC" mode="ASYNC" statistics="true">
                <locking concurrency-level="1000" acquire-timeout="15000" striping="false"/>
                <state-transfer timeout="60000"/>
            </distributed-cache-configuration>
        </caches>
    </cache-container>
    <server xmlns="urn:infinispan:server:14.0">
        <interfaces>
            <interface name="public">
                <inet-address value="0.0.0.0"/>
            </interface>
        </interfaces>
        <socket-bindings port-offset="0" default-interface="public">
            <socket-binding name="default" port="11222" interface="public"/>
            <socket-binding name="metrics" port="11223" interface="public"/>
        </socket-bindings>
        <security>
            <credential-stores>
                <credential-store name="credentials" path="credentials.pfx">
                    <clear-text-credential credential="***"/>
                </credential-store>
            </credential-stores>
            <security-realms>
                <security-realm name="default"/>
                <security-realm name="metrics">
                    <properties-realm groups-attribute="Roles">
                        <user-properties digest-realm-name="metrics" path="metrics-users.properties"/>
                        <group-properties path="metrics-groups.properties"/>
                    </properties-realm>
                </security-realm>
            </security-realms>
        </security>
        <endpoints>
            <endpoint socket-binding="default" security-realm="default">
                <hotrod-connector name="hotrod-default" socket-binding="default"/>
                <rest-connector name="rest-default" socket-binding="default">
                    <authentication security-realm="default"/>
                </rest-connector>
            </endpoint>
            <endpoint socket-binding="metrics" security-realm="metrics">
                <rest-connector name="rest-metrics" socket-binding="metrics">
                    <authentication mechanisms="BASIC" security-realm="metrics"/>
                </rest-connector>
            </endpoint>
        </endpoints>
    </server>
</infinispan>

2023-04-11 12:50:30,561 INFO  (main) [org.infinispan.SERVER] ISPN080027: Loaded extension 'query-dsl-filter-converter-factory'
2023-04-11 12:50:30,561 INFO  (main) [org.infinispan.SERVER] ISPN080027: Loaded extension 'continuous-query-filter-converter-factory'
2023-04-11 12:50:30,562 INFO  (main) [org.infinispan.SERVER] ISPN080027: Loaded extension 'iteration-filter-converter-factory'
2023-04-11 12:50:30,563 WARN  (main) [org.infinispan.SERVER] ISPN080059: No script engines are available
2023-04-11 12:50:30,856 WARN  (main) [org.infinispan.PERSISTENCE] ISPN000554: jboss-marshalling is deprecated and planned for removal
2023-04-11 12:50:30,884 INFO  (main) [org.infinispan.CONTAINER] ISPN000556: Starting user marshaller 'org.infinispan.commons.marshall.ImmutableProtoStreamMarshaller'
2023-04-11 12:50:31,836 INFO  (main) [org.infinispan.CLUSTER] ISPN000078: Starting JGroups channel `infinispan` with stack `kubernetes`
2023-04-11 12:50:31,837 INFO  (main) [org.jgroups.JChannel] local_addr: f61f0e98-6111-41b3-b324-6be7c542ece2, name: infinispan-0-21746
2023-04-11 12:50:31,852 INFO  (main) [org.jgroups.protocols.FD_SOCK2] server listening on *.57800
2023-04-11 12:50:33,864 INFO  (main) [org.jgroups.protocols.pbcast.GMS] infinispan-0-21746: no members discovered after 2001 ms: creating cluster as coordinator
2023-04-11 12:50:33,871 INFO  (main) [org.infinispan.CLUSTER] ISPN000094: Received new cluster view for channel infinispan: [infinispan-0-21746|0] (1) [infinispan-0-21746]
2023-04-11 12:50:33,956 INFO  (main) [org.infinispan.CLUSTER] ISPN000079: Channel `infinispan` local address is `infinispan-0-21746`, physical addresses are `[10.42.1.15:7800]`
2023-04-11 12:50:33,984 INFO  (main) [org.infinispan.CONTAINER] ISPN000390: Persisted state, version=14.0.8.Final timestamp=2023-04-11T12:50:33.983320909Z
2023-04-11 12:50:34,440 INFO  (main) [org.jboss.threads] JBoss Threads version 2.3.3.Final
2023-04-11 12:50:34,495 INFO  (main) [org.infinispan.CONTAINER] ISPN000104: Using EmbeddedTransactionManager
2023-04-11 12:50:35,041 WARN  (main) [org.infinispan.SERVER] ISPN080072: JMX remoting enabled without a default security realm. All connections will be rejected.
2023-04-11 12:50:35,071 INFO  (main) [org.infinispan.server.core.telemetry.TelemetryServiceFactory] ISPN000953: OpenTelemetry integration is disabled
2023-04-11 12:50:35,213 INFO  (ForkJoinPool.commonPool-worker-1) [org.infinispan.SERVER] ISPN080018: Started connector HotRod (internal)
2023-04-11 12:50:35,254 INFO  (main) [org.infinispan.SERVER] ISPN080018: Started connector REST (internal)
2023-04-11 12:50:35,277 INFO  (main) [org.infinispan.SERVER] Using transport: Epoll
2023-04-11 12:50:35,516 DEBUG (main) [org.infinispan.SERVER] REST EndpointRouter listening on 0.0.0.0:11222
2023-04-11 12:50:35,516 INFO  (main) [org.infinispan.SERVER] ISPN080004: Connector SinglePort (default) listening on 0.0.0.0:11222
2023-04-11 12:50:35,516 INFO  (main) [org.infinispan.SERVER] ISPN080034: Server 'infinispan-0-21746' listening on http://0.0.0.0:11222
2023-04-11 12:50:35,532 INFO  (main) [org.infinispan.SERVER] ISPN080018: Started connector REST (internal)
2023-04-11 12:50:35,532 INFO  (main) [org.infinispan.SERVER] Using transport: Epoll
2023-04-11 12:50:35,541 DEBUG (main) [org.infinispan.SERVER] REST EndpointRouter listening on 0.0.0.0:11223
2023-04-11 12:50:35,542 INFO  (main) [org.infinispan.SERVER] ISPN080004: Connector SinglePort (metrics) listening on 0.0.0.0:11223
2023-04-11 12:50:35,542 INFO  (main) [org.infinispan.SERVER] ISPN080034: Server 'infinispan-0-21746' listening on http://0.0.0.0:11223
2023-04-11 12:50:35,618 INFO  (main) [org.infinispan.SERVER] ISPN080001: Infinispan Server 14.0.8.Final started in 5543ms

ryanemerson commented 1 year ago

@makdeniss The pasted /etc/config/infinispan.yml looks like the default values.yaml provided with the Infinispan Helm chart, so it seems your nested infinispan config in your values.yaml is not being processed as you think.

makdeniss commented 1 year ago

@ryanemerson pretty sure it isn't, but why then does the values.yaml file with a predefined admin user work correctly? I think I'm following the correct pattern, and the schema validation works correctly when I override the dependency chart values in the format I use: infinispan.security.batch. The same goes for logging: I set it inside the top-level infinispan. I think there is a problem with the inner "infinispan": once you start defining things there, they are ignored.

Also fyi: https://stackoverflow.com/questions/55748639/set-value-in-dependency-of-helm-chart
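(One way to see what actually reaches the subchart, without deploying, is to render the chart locally and inspect the generated ConfigMap. Release name and paths here are illustrative, and --show-only with a subchart template path is assumed to be supported by the Helm version in use:)

helm template my-release . -f ./local-values.yaml \
    --show-only charts/infinispan/templates/configmap.yaml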

ryanemerson commented 1 year ago

@makdeniss I have created a basic chart locally that uses Infinispan as a subchart, and disabling authentication works as expected from the parent values.yaml. The charts/infinispan/values.yaml is the default values.yaml included in this repo.

Project structure:

├── charts
│   └── infinispan
│       ├── Chart.yaml
│       ├── templates
│       │   ├── configmap.yaml
│       │   ├── helpers.tpl
│       │   ├── _log4j2.xml.tpl
│       │   ├── metrics-service.yaml
│       │   ├── ping-service.yaml
│       │   ├── route.yaml
│       │   ├── secret.yaml
│       │   ├── service-monitor.yaml
│       │   ├── service.yaml
│       │   ├── statefulset.yaml
│       │   └── tests
│       │       └── test-connection.yaml
│       └── values.yaml
├── Chart.yaml
└── values.yaml

Chart.yaml

apiVersion: v2
name: parent
version: 0.0.1

Parent values.yaml

dockerRegistry: "xxxx"

helm:
  release:
    namespace:
      suffix:

spec:
  type:
    service: LoadBalancer
  containers:
    imagePullPolicy: Always

ports:
  service: 80

quarkus:
  log:
    level: INFO
    min-level: DEBUG

  infinispan-client:
    server-list: infinispan:11222
    client-intelligence: BASIC

infinispan:
  # Default values for infinispan-helm-charts.
  # This is a YAML-formatted file.
  # Declare variables to be passed into your templates.

  images:
    # [USER] The container images for server pods.
    server: quay.io/infinispan/server:14.0
    initContainer: registry.access.redhat.com/ubi8-micro

  deploy:
    # [USER] Specify the number of nodes in the cluster.
    replicas: 1

    container:
      extraJvmOpts: ""
      storage:
        size: 1Gi
        storageClassName: ""
        # [USER] Set `ephemeral: true` to delete all persisted data when clusters shut down or restart.
        ephemeral: false
      resources:
        # [USER] Specify the CPU limit and the memory limit for each pod.
        limits:
          cpu: 500m
          memory: 512Mi
        # [USER] Specify the maximum CPU requests and the maximum memory requests for each pod.
        requests:
          cpu: 500m
          memory: 512Mi

    security:
      secretName: ""
      batch: ""

    expose:
      # [USER] Specify `type: ""` to disable network access to clusters.
      type: ""
      nodePort: 0
      host: ""
      annotations: [ ]

    monitoring:
      enabled: true

    logging:
      categories:
        # [USER] Specify the FQN of a package from which you want to collect logs.
        - category: com.arjuna
          # [USER] Specify the level of log messages.
          level: warn
        # No need to warn about not being able to TLS/SSL handshake
        - category: io.netty.handler.ssl.ApplicationProtocolNegotiationHandler
          level: error

        - category: org.infinispan.SERVER
          level:  debug

    makeDataDirWritable: false

    nameOverride: ""

    resourceLabels: [ ]

    podLabels: [ ]

    svcLabels: [ ]

    infinispan:
      cacheContainer:
        # [USER] Add cache, template, and counter configuration.
        name: default
        # [USER] Specify `security: null` to disable security authorization.
        security: null
        transport:
          cluster: ${infinispan.cluster.name:cluster}
          node-name: ${infinispan.node.name:}
          stack: kubernetes
      server:
        endpoints:
          # [USER] Hot Rod and REST endpoints.
          - securityRealm: default
            socketBinding: default
            connectors:
              rest:
                restConnector:
              hotrod:
                hotrodConnector:
          # [METRICS] Metrics endpoint for cluster monitoring capabilities.
          - connectors:
              rest:
                restConnector:
                  authentication:
                    mechanisms: BASIC
            securityRealm: metrics
            socketBinding: metrics
        interfaces:
          - inetAddress:
              value: ${infinispan.bind.address:127.0.0.1}
            name: public
        security:
          credentialStores:
            - clearTextCredential:
                clearText: secret
              name: credentials
              path: credentials.pfx
          securityRealms:
            # [USER] Security realm for the Hot Rod and REST endpoints.
            - name: default
              # [METRICS] Security realm for the metrics endpoint.
            - name: metrics
              propertiesRealm:
                groupProperties:
                  path: metrics-groups.properties
                  relativeTo: infinispan.server.config.path
                groupsAttribute: Roles
                userProperties:
                  path: metrics-users.properties
                  relativeTo: infinispan.server.config.path
        socketBindings:
          defaultInterface: public
          portOffset: ${infinispan.socket.binding.port-offset:0}
          socketBinding:
            # [USER] Socket binding for the Hot Rod and REST endpoints.
            - name: default
              port: 11222
              # [METRICS] Socket binding for the metrics endpoint.
            - name: metrics
              port: 11223

Server logs:

2023-04-11 14:46:27,639 INFO  (main) [BOOT] JVM OpenJDK 64-Bit Server VM Red Hat, Inc. 17.0.6+10-LTS
2023-04-11 14:46:27,645 INFO  (main) [BOOT] JVM arguments = [-server, --add-exports, java.naming/com.sun.jndi.ldap=ALL-UNNAMED, -Xlog:gc*:file=/opt/infinispan/server/log/gc.log:time,uptimemillis:filecount=5,filesize=3M, -Djgroups.dns.query=datagrid-ping.helm.svc.cluster.local, -XX:+ExitOnOutOfMemoryError, -XX:MetaspaceSize=32m, -XX:MaxMetaspaceSize=96m, -Djava.net.preferIPv4Stack=true, -Djava.awt.headless=true, -Dvisualvm.display.name=infinispan-server, -Djava.util.logging.manager=org.infinispan.server.loader.LogManager, -Dinfinispan.server.home.path=/opt/infinispan, -classpath, :/opt/infinispan/boot/infinispan-server-runtime-14.0.8.Final-loader.jar, org.infinispan.server.loader.Loader, org.infinispan.server.Bootstrap, --cluster-name=datagrid, --server-config=/etc/config/infinispan.yml, --logging-config=/etc/config/log4j2.xml, --bind-address=0.0.0.0]
2023-04-11 14:46:27,650 INFO  (main) [BOOT] PID = 162
2023-04-11 14:46:27,943 INFO  (main) [org.infinispan.SERVER] ISPN080000: Infinispan Server 14.0.8.Final starting
2023-04-11 14:46:27,944 INFO  (main) [org.infinispan.SERVER] ISPN080017: Server configuration: /etc/config/infinispan.yml
2023-04-11 14:46:27,945 INFO  (main) [org.infinispan.SERVER] ISPN080032: Logging configuration: /etc/config/log4j2.xml
2023-04-11 14:46:31,253 DEBUG (main) [org.infinispan.SERVER] Using endpoint realm "default" for Hot Rod
2023-04-11 14:46:31,355 DEBUG (main) [org.infinispan.SERVER] Actual configuration: <?xml version="1.0"?>
<infinispan xmlns="urn:infinispan:config:14.0">
    <jgroups transport="org.infinispan.remoting.transport.jgroups.JGroupsTransport"/>
    <cache-container name="default" shutdown-hook="DONT_REGISTER" statistics="false">
        <transport cluster="datagrid" node-name="" stack="kubernetes"/>
        <global-state>
            <persistent-location path="/opt/infinispan/server/data"/>
            <shared-persistent-location path="/opt/infinispan/server/data"/>
            <overlay-configuration-storage/>
        </global-state>
        <caches>
            <replicated-cache-configuration name="org.infinispan.REPL_ASYNC" mode="ASYNC" statistics="true">
                <locking concurrency-level="1000" acquire-timeout="15000" striping="false"/>
                <state-transfer timeout="60000"/>
            </replicated-cache-configuration>
            <scattered-cache-configuration name="org.infinispan.SCATTERED_SYNC" invalidation-batch-size="128" bias-acquisition="ON_WRITE" bias-lifespan="300000" mode="SYNC" remote-timeout="17500" statistics="true">
                <locking concurrency-level="1000" acquire-timeout="15000" striping="false"/>
            </scattered-cache-configuration>
            <distributed-cache-configuration name="org.infinispan.DIST_SYNC" mode="SYNC" remote-timeout="17500" statistics="true">
                <locking concurrency-level="1000" acquire-timeout="15000" striping="false"/>
                <state-transfer timeout="60000"/>
            </distributed-cache-configuration>
            <invalidation-cache-configuration name="org.infinispan.INVALIDATION_ASYNC" mode="ASYNC" statistics="true">
                <locking concurrency-level="1000" acquire-timeout="15000" striping="false"/>
            </invalidation-cache-configuration>
            <local-cache-configuration name="org.infinispan.LOCAL" statistics="true">
                <locking concurrency-level="1000" acquire-timeout="15000" striping="false"/>
            </local-cache-configuration>
            <invalidation-cache-configuration name="org.infinispan.INVALIDATION_SYNC" mode="SYNC" remote-timeout="17500" statistics="true">
                <locking concurrency-level="1000" acquire-timeout="15000" striping="false"/>
            </invalidation-cache-configuration>
            <replicated-cache-configuration name="org.infinispan.REPL_SYNC" mode="SYNC" remote-timeout="17500" statistics="true">
                <locking concurrency-level="1000" acquire-timeout="15000" striping="false"/>
                <state-transfer timeout="60000"/>
            </replicated-cache-configuration>
            <distributed-cache-configuration name="example.PROTOBUF_DIST" mode="SYNC" remote-timeout="17500" statistics="true">
                <encoding media-type="application/x-protostream"/>
                <locking concurrency-level="1000" acquire-timeout="15000" striping="false"/>
                <state-transfer timeout="60000"/>
            </distributed-cache-configuration>
            <distributed-cache-configuration name="org.infinispan.DIST_ASYNC" mode="ASYNC" statistics="true">
                <locking concurrency-level="1000" acquire-timeout="15000" striping="false"/>
                <state-transfer timeout="60000"/>
            </distributed-cache-configuration>
        </caches>
    </cache-container>
    <server xmlns="urn:infinispan:server:14.0">
        <interfaces>
            <interface name="public">
                <inet-address value="0.0.0.0"/>
            </interface>
        </interfaces>
        <socket-bindings port-offset="0" default-interface="public">
            <socket-binding name="default" port="11222" interface="public"/>
            <socket-binding name="metrics" port="11223" interface="public"/>
        </socket-bindings>
        <security>
            <credential-stores>
                <credential-store name="credentials" path="credentials.pfx">
                    <clear-text-credential credential="***"/>
                </credential-store>
            </credential-stores>
            <security-realms>
                <security-realm name="default"/>
                <security-realm name="metrics">
                    <properties-realm groups-attribute="Roles">
                        <user-properties digest-realm-name="metrics" path="metrics-users.properties"/>
                        <group-properties path="metrics-groups.properties"/>
                    </properties-realm>
                </security-realm>
            </security-realms>
        </security>
        <endpoints>
            <endpoint socket-binding="default" security-realm="default">
                <hotrod-connector name="hotrod-default" socket-binding="default"/>
                <rest-connector name="rest-default" socket-binding="default">
                    <authentication security-realm="default"/>
                </rest-connector>
            </endpoint>
            <endpoint socket-binding="metrics" security-realm="metrics">
                <rest-connector name="rest-metrics" socket-binding="metrics">
                    <authentication mechanisms="BASIC" security-realm="metrics"/>
                </rest-connector>
            </endpoint>
        </endpoints>
    </server>
</infinispan>

2023-04-11 14:46:32,048 INFO  (main) [org.infinispan.SERVER] ISPN080027: Loaded extension 'query-dsl-filter-converter-factory'
2023-04-11 14:46:32,048 INFO  (main) [org.infinispan.SERVER] ISPN080027: Loaded extension 'continuous-query-filter-converter-factory'
2023-04-11 14:46:32,050 INFO  (main) [org.infinispan.SERVER] ISPN080027: Loaded extension 'iteration-filter-converter-factory'
2023-04-11 14:46:32,051 WARN  (main) [org.infinispan.SERVER] ISPN080059: No script engines are available
2023-04-11 14:46:34,836 INFO  (main) [org.infinispan.CONTAINER] ISPN000556: Starting user marshaller 'org.infinispan.commons.marshall.ImmutableProtoStreamMarshaller'
2023-04-11 14:46:35,343 WARN  (main) [org.infinispan.PERSISTENCE] ISPN000554: jboss-marshalling is deprecated and planned for removal
2023-04-11 14:46:36,836 INFO  (main) [org.infinispan.CONTAINER] ISPN000389: Loaded global state, version=14.0.8.Final timestamp=2023-04-11T14:45:13.736511297Z
2023-04-11 14:46:38,634 INFO  (main) [org.infinispan.CLUSTER] ISPN000078: Starting JGroups channel `datagrid` with stack `kubernetes`
2023-04-11 14:46:38,637 INFO  (main) [org.jgroups.JChannel] local_addr: 66a9d1f8-1abc-4742-aee8-73b3646a7c5f, name: datagrid-0-50401
2023-04-11 14:46:38,646 INFO  (main) [org.jgroups.protocols.FD_SOCK2] server listening on *.57800
2023-04-11 14:46:40,650 INFO  (main) [org.jgroups.protocols.pbcast.GMS] datagrid-0-50401: no members discovered after 2002 ms: creating cluster as coordinator
2023-04-11 14:46:40,658 INFO  (main) [org.infinispan.CLUSTER] ISPN000094: Received new cluster view for channel datagrid: [datagrid-0-50401|0] (1) [datagrid-0-50401]
2023-04-11 14:46:41,344 INFO  (main) [org.infinispan.CLUSTER] ISPN000079: Channel `datagrid` local address is `datagrid-0-50401`, physical addresses are `[10.244.0.12:7800]`
2023-04-11 14:46:42,938 INFO  (main) [org.jboss.threads] JBoss Threads version 2.3.3.Final
2023-04-11 14:46:43,139 INFO  (main) [org.infinispan.CONTAINER] ISPN000104: Using EmbeddedTransactionManager
2023-04-11 14:46:44,435 WARN  (main) [org.infinispan.SERVER] ISPN080072: JMX remoting enabled without a default security realm. All connections will be rejected.
2023-04-11 14:46:44,533 INFO  (main) [org.infinispan.server.core.telemetry.TelemetryServiceFactory] ISPN000953: OpenTelemetry integration is disabled
2023-04-11 14:46:45,145 INFO  (ForkJoinPool.commonPool-worker-1) [org.infinispan.SERVER] ISPN080018: Started connector HotRod (internal)
2023-04-11 14:46:45,356 INFO  (main) [org.infinispan.SERVER] ISPN080018: Started connector REST (internal)
2023-04-11 14:46:45,538 INFO  (main) [org.infinispan.SERVER] Using transport: Epoll
2023-04-11 14:46:45,651 DEBUG (main) [org.infinispan.SERVER] REST EndpointRouter listening on 0.0.0.0:11222
2023-04-11 14:46:45,652 INFO  (main) [org.infinispan.SERVER] ISPN080004: Connector SinglePort (default) listening on 0.0.0.0:11222
2023-04-11 14:46:45,652 INFO  (main) [org.infinispan.SERVER] ISPN080034: Server 'datagrid-0-50401' listening on http://0.0.0.0:11222
2023-04-11 14:46:45,747 INFO  (main) [org.infinispan.SERVER] ISPN080018: Started connector REST (internal)
2023-04-11 14:46:45,748 INFO  (main) [org.infinispan.SERVER] Using transport: Epoll
2023-04-11 14:46:45,750 DEBUG (main) [org.infinispan.SERVER] REST EndpointRouter listening on 0.0.0.0:11223
2023-04-11 14:46:45,750 INFO  (main) [org.infinispan.SERVER] ISPN080004: Connector SinglePort (metrics) listening on 0.0.0.0:11223
2023-04-11 14:46:45,750 INFO  (main) [org.infinispan.SERVER] ISPN080034: Server 'datagrid-0-50401' listening on http://0.0.0.0:11223
2023-04-11 14:46:46,148 INFO  (main) [org.infinispan.SERVER] ISPN080001: Infinispan Server 14.0.8.Final started in 18201ms

We can see that no properties-realm is defined for the default security realm, implying that authentication is disabled. I also performed curl 127.0.0.1:11222/rest/v2/cache-managers/default/cache-configs/ on the deployed server pod without issue.

makdeniss commented 1 year ago

I don't understand why it doesn't work on my side just for this specific chart, and only for disabling security... How is your infinispan dependency specified in the Chart.yaml dependencies param? Which version of helm are you using?

ryanemerson commented 1 year ago

How is your infinispan dependency specified in the Chart.yaml dependencies param?

In the example above I manually copied the chart locally under charts/infinispan; however, I have also tried using the below dependency and it worked:

dependencies:
  - name: infinispan
    version: 0.3.0
    repository: https://charts.openshift.io/

Unfortunately, due to a limitation with charts.openshift.io, I had to perform the following steps.

Which version of helm are you using?

version.BuildInfo{Version:"v3.5.4", GitCommit:"1b5edb69df3d3a08df77c9902dc17af864ff05d1", GitTreeState:"clean", GoVersion:"go1.15.11"}

Can you share your Chart.yaml and directory structure?

makdeniss commented 1 year ago

Here's my Chart.yaml:

apiVersion: v2
name: xx
description: A Helm chart for application modules
version: 0.1.1
appVersion: "0.0.1"

dependencies:
  # https://github.com/infinispan/infinispan-helm-charts/issues/60
  - name: infinispan
    alias: infinispan
    version: 0.3.0
    repository: https://charts.openshift.io/

And the directory structure:

├── Chart.yaml
├── charts
│   └── infinispan-0.3.0.tgz
├── config
│   ├── xx_Dev.xml
│   └── xx.xml
├── develop-values.yaml
├── local-values.yaml
├── templates
│   ├── xx-config-map.yaml
│   ├── xx-ingress.yaml
│   ├── xx-secrets.yaml
│   ├── xx-service-deployment.yaml
│   └── xx-service.yaml
└── values.yaml
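
(For completeness: the packaged charts/infinispan-0.3.0.tgz above is what Helm fetches into charts/ when the dependency is resolved; after changing Chart.yaml it can be refreshed with the following, assuming a standard Helm 3 setup:)

helm dependency update .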
ryanemerson commented 1 year ago

Hmm, I'm really not sure why it works in my case and not yours.

How are you installing the chart?

makdeniss commented 1 year ago

BTW, my helm version:

version.BuildInfo{Version:"v3.9.4", GitCommit:"dbc6d8e20fe1d58d50e6ed30f09a04a77e4c68db", GitTreeState:"clean", GoVersion:"go1.19"}

I'm installing it like this:

helm install xxxx ./ -f ./local-values.yaml --set quarkus.env.applicationInsights.connectionString= --set appVersion=1.0.0-SNAPSHOT --set helm.release.namespace.suffix= --set ingress.dynamicPath= --namespace xxx-xxx --debug

I asked another person to verify this setup on his side. Maybe I'm special /shrug