infinispan / infinispan-helm-charts

Apache License 2.0
13 stars 26 forks

Infinispan doesn't work correctly after deployment #93

Closed oburd closed 7 months ago

oburd commented 8 months ago

Hello! I have a question. I deployed Infinispan, but there is a lot of lag in the console; can you tell me what the problem is? It freezes on loading when you click on any tab and nothing loads, as you can see on the screenshot. Here is my values file:

# Default values for infinispan-helm-charts.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

images:
  # [USER] The container images for server pods.
  server: quay.io/infinispan/server:14.0
  initContainer: registry.access.redhat.com/ubi8-micro

deploy:
  # [USER] Specify the number of nodes in the cluster.
  replicas: 2

  clusterDomain: cluster.local

  container:
    extraJvmOpts: ""
    libraries: ""
    # [USER] Define custom environment variables using standard K8s format
    # env:
    #  - name: STANDARD_KEY
    #    value: standard value
    #  - name: CONFIG_MAP_KEY
    #    valueFrom:
    #      configMapKeyRef:
    #        name: special-config
    #        key: special.how
    #  - name: SECRET_KEY
    #    valueFrom:
    #      secretKeyRef:
    #        name: special-secret
    #        key: special.how
    env:
    storage:
      size: 1Gi
      storageClassName: ""
      # [USER] Set `ephemeral: true` to delete all persisted data when clusters shut down or restart.
      ephemeral: true
    resources:
      # [USER] Specify the CPU limit and the memory limit for each pod.
      limits:
        cpu: 1000m
        memory: 1024Mi
      # [USER] Specify the maximum CPU requests and the maximum memory requests for each pod.
      requests:
        cpu: 1000m
        memory: 1024Mi

  security:
    secretName: ""
    batch: ""

  expose:
    # [USER] Specify `type: ""` to disable network access to clusters.
    type: Route
    nodePort: 0
    host: dummy
    annotations:
      - key: kubernetes.io/ingress.class
        value: alb
      - key: alb.ingress.kubernetes.io/group.name
        value: dummy
      - key: alb.ingress.kubernetes.io/group.order
        value: dummy
      - key: alb.ingress.kubernetes.io/scheme
        value: internal
      - key: alb.ingress.kubernetes.io/target-type
        value: ip
      - key: alb.ingress.kubernetes.io/listen-ports
        value: '[{"HTTP": 80}, {"HTTPS":443}]'
      - key: alb.ingress.kubernetes.io/certificate-arn
        value: dummy
      - key: alb.ingress.kubernetes.io/ssl-redirect
        value: '443'

  monitoring:
    enabled: false

  logging:
    categories:
      # [USER] Specify the FQN of a package from which you want to collect logs.
      - category: com.arjuna
        # [USER] Specify the level of log messages.
        level: warn
      # No need to warn about not being able to TLS/SSL handshake
      - category: io.netty.handler.ssl.ApplicationProtocolNegotiationHandler
        level: error

  makeDataDirWritable: false

  nameOverride: ""

  resourceLabels: []

  podLabels:
    - key: microservice
      value: infinispan

  svcLabels: []

  tolerations: []

  nodeAffinity: {}

  nodeSelector: {}

  infinispan:
    cacheContainer:
      # [USER] Add cache, template, and counter configuration.
      name: default
      # [USER] Specify `security: null` to disable security authorization.
      security:
        authorization: {}
      transport:
        cluster: ${infinispan.cluster.name:cluster}
        node-name: ${infinispan.node.name:}
        stack: kubernetes
    server:
      endpoints:
      # [USER] Hot Rod and REST endpoints.
      - securityRealm: default
        socketBinding: default
        connectors:
          rest:
            restConnector:
          hotrod:
            hotrodConnector:
          # [MEMCACHED] Uncomment to enable Memcached endpoint
          # memcached:
          #   memcachedConnector:
          #     socketBinding: memcached
      # [METRICS] Metrics endpoint for cluster monitoring capabilities.
      - connectors:
          rest:
            restConnector:
              authentication:
                mechanisms: BASIC
        securityRealm: metrics
        socketBinding: metrics
      interfaces:
      - inetAddress:
          value: ${infinispan.bind.address:127.0.0.1}
        name: public
      security:
        credentialStores:
        - clearTextCredential:
            clearText: secret
          name: credentials
          path: credentials.pfx
        securityRealms:
        # [USER] Security realm for the Hot Rod and REST endpoints.
        - name: default
          # [USER] Comment or remove this properties realm to disable authentication.
          propertiesRealm:
            groupProperties:
              path: groups.properties
            groupsAttribute: Roles
            userProperties:
              path: users.properties
          # [METRICS] Security realm for the metrics endpoint.
        - name: metrics
          propertiesRealm:
            groupProperties:
              path: metrics-groups.properties
              relativeTo: infinispan.server.config.path
            groupsAttribute: Roles
            userProperties:
              path: metrics-users.properties
              relativeTo: infinispan.server.config.path
      socketBindings:
        defaultInterface: public
        portOffset: ${infinispan.socket.binding.port-offset:0}
        socketBinding:
          # [USER] Socket binding for the Hot Rod and REST endpoints.
        - name: default
          port: 11222
          # [METRICS] Socket binding for the metrics endpoint.
        - name: metrics
          port: 11223
          # [MEMCACHED] Uncomment to enable Memcached endpoint
        # - name: memcached
        #   port: 11221
oburd commented 8 months ago

Hello! Here is the log:

08:25:38,381 WARN  [ServiceFinder] Skipping service: java.security.Provider: Provider org.wildfly.security.auth.client.WildFlyElytronClientDefaultSSLContextProvider could not be instantiated
08:25:38,482 WARN  [ServiceFinder] Skipping service: org.aesh.command.Command: org.infinispan.cli.commands.kubernetes.Delete$Cluster Unable to get public no-arg constructor
08:25:38,484 WARN  [ServiceFinder] Skipping service: org.aesh.command.Command: org.infinispan.cli.commands.kubernetes.Get$Clusters Unable to get public no-arg constructor

2023-11-06 08:25:45,392 INFO  (main) [BOOT] JVM OpenJDK 64-Bit Server VM Red Hat, Inc. 17.0.9+9-LTS
2023-11-06 08:25:45,405 INFO  (main) [BOOT] JVM arguments = [-server, --add-exports, java.naming/com.sun.jndi.ldap=ALL-UNNAMED, --add-opens, java.base/java.util=ALL-UNNAMED, --add-opens, java.base/java.util.concurrent=ALL-UNNAMED, -Xlog:gc*:file=/opt/infinispan/server/log/gc.log:time,uptimemillis:filecount=5,filesize=3M, -Djgroups.dns.query=infinispan-ping.keycloak.svc.cluster.local, -Xmx512m, -XX:+ExitOnOutOfMemoryError, -XX:MetaspaceSize=32m, -XX:MaxMetaspaceSize=96m, -Djava.net.preferIPv4Stack=true, -Djava.awt.headless=true, -Dvisualvm.display.name=infinispan-server, -Djava.util.logging.manager=org.infinispan.server.loader.LogManager, -Dinfinispan.server.home.path=/opt/infinispan, -classpath, :/opt/infinispan/boot/infinispan-server-runtime-14.0.19.Final-loader.jar, org.infinispan.server.loader.Loader, org.infinispan.server.Bootstrap, --cluster-name=infinispan, --server-config=/etc/config/infinispan.yml, --logging-config=/etc/config/log4j2.xml, --bind-address=0.0.0.0]
2023-11-06 08:25:45,410 INFO  (main) [BOOT] PID = 170
2023-11-06 08:25:45,533 INFO  (main) [org.infinispan.SERVER] ISPN080000: Infinispan Server 14.0.19.Final starting
2023-11-06 08:25:45,534 INFO  (main) [org.infinispan.SERVER] ISPN080017: Server configuration: /etc/config/infinispan.yml
2023-11-06 08:25:45,534 INFO  (main) [org.infinispan.SERVER] ISPN080032: Logging configuration: /etc/config/log4j2.xml
2023-11-06 08:25:47,267 INFO  (main) [org.infinispan.SERVER] ISPN080027: Loaded extension 'query-dsl-filter-converter-factory'
2023-11-06 08:25:47,267 INFO  (main) [org.infinispan.SERVER] ISPN080027: Loaded extension 'continuous-query-filter-converter-factory'
2023-11-06 08:25:47,276 INFO  (main) [org.infinispan.SERVER] ISPN080027: Loaded extension 'iteration-filter-converter-factory'
2023-11-06 08:25:47,280 WARN  (main) [org.infinispan.SERVER] ISPN080059: No script engines are available
2023-11-06 08:25:48,609 INFO  (main) [org.infinispan.CONTAINER] ISPN000556: Starting user marshaller 'org.infinispan.commons.marshall.ImmutableProtoStreamMarshaller'
2023-11-06 08:25:49,022 WARN  (main) [org.infinispan.PERSISTENCE] ISPN000554: jboss-marshalling is deprecated and planned for removal
2023-11-06 08:25:50,697 INFO  (main) [org.infinispan.CLUSTER] ISPN000078: Starting JGroups channel `infinispan` with stack `kubernetes`
2023-11-06 08:25:50,705 INFO  (main) [org.jgroups.JChannel] local_addr: 2ef175c5-249b-4ec5-8e2f-e055c8a47169, name: infinispan-0-38584
2023-11-06 08:25:50,765 INFO  (main) [org.jgroups.protocols.FD_SOCK2] server listening on *.57800
2023-11-06 08:25:52,772 INFO  (main) [org.jgroups.protocols.pbcast.GMS] infinispan-0-38584: no members discovered after 2001 ms: creating cluster as coordinator
2023-11-06 08:25:52,796 INFO  (main) [org.infinispan.CLUSTER] ISPN000094: Received new cluster view for channel infinispan: [infinispan-0-38584|0] (1) [infinispan-0-38584]
2023-11-06 08:25:53,071 INFO  (main) [org.infinispan.CLUSTER] ISPN000079: Channel `infinispan` local address is `infinispan-0-38584`, physical addresses are `[10.2.146.114:7800]`
2023-11-06 08:25:53,090 INFO  (main) [org.infinispan.CONTAINER] ISPN000390: Persisted state, version=14.0.19.Final timestamp=2023-11-06T08:25:53.089106869Z
2023-11-06 08:25:53,715 INFO  (main) [org.jboss.threads] JBoss Threads version 2.3.3.Final
2023-11-06 08:25:54,419 INFO  (main) [org.infinispan.CONTAINER] ISPN000104: Using EmbeddedTransactionManager
2023-11-06 08:25:55,057 WARN  (main) [org.infinispan.SERVER] ISPN080072: JMX remoting enabled without a default security realm. All connections will be rejected.
2023-11-06 08:25:55,096 INFO  (main) [org.infinispan.server.core.telemetry.TelemetryServiceFactory] ISPN000953: OpenTelemetry integration is disabled
2023-11-06 08:25:55,307 INFO  (ForkJoinPool.commonPool-worker-1) [org.infinispan.SERVER] ISPN080018: Started connector HotRod (internal)
2023-11-06 08:25:55,481 INFO  (main) [org.infinispan.SERVER] ISPN080018: Started connector REST (internal)
2023-11-06 08:25:55,515 INFO  (main) [org.infinispan.SERVER] Using transport: Epoll
2023-11-06 08:25:55,724 INFO  (main) [org.infinispan.SERVER] ISPN080004: Connector SinglePort (default) listening on 0.0.0.0:11222
2023-11-06 08:25:55,726 INFO  (main) [org.infinispan.SERVER] ISPN080034: Server 'infinispan-0-38584' listening on http://0.0.0.0:11222
2023-11-06 08:25:55,810 INFO  (main) [org.infinispan.SERVER] ISPN080018: Started connector REST (internal)
2023-11-06 08:25:55,812 INFO  (main) [org.infinispan.SERVER] Using transport: Epoll
2023-11-06 08:25:55,861 INFO  (main) [org.infinispan.SERVER] ISPN080004: Connector SinglePort (metrics) listening on 0.0.0.0:11223
2023-11-06 08:25:55,863 INFO  (main) [org.infinispan.SERVER] ISPN080034: Server 'infinispan-0-38584' listening on http://0.0.0.0:11223
2023-11-06 08:25:55,988 INFO  (main) [org.infinispan.SERVER] ISPN080001: Infinispan Server 14.0.19.Final started in 10452ms
2023-11-06 08:26:22,286 INFO  (jgroups-6,infinispan-0-38584) [org.infinispan.CLUSTER] ISPN000094: Received new cluster view for channel infinispan: [infinispan-0-38584|1] (2) [infinispan-0-38584, infinispan-1-63294]
2023-11-06 08:26:22,301 INFO  (jgroups-6,infinispan-0-38584) [org.infinispan.CLUSTER] [Scope=infinispan-0-38584]ISPN100000: Node infinispan-1-63294 joined the cluster
2023-11-06 08:26:22,324 INFO  (jgroups-6,infinispan-0-38584) [org.infinispan.CLUSTER] [Scope=infinispan-0-38584]ISPN100000: Node infinispan-1-63294 joined the cluster
2023-11-06 08:26:23,396 INFO  (jgroups-5,infinispan-0-38584) [org.infinispan.CLUSTER] [Context=org.infinispan.PERMISSIONS]ISPN100002: Starting rebalance with members [infinispan-0-38584, infinispan-1-63294], phase READ_OLD_WRITE_ALL, topology id 2
2023-11-06 08:26:23,400 INFO  (jgroups-5,infinispan-0-38584) [org.infinispan.LIFECYCLE] [Context=org.infinispan.PERMISSIONS][Scope=infinispan-0-38584]ISPN100002: Starting rebalance with members [infinispan-0-38584, infinispan-1-63294], phase READ_OLD_WRITE_ALL, topology id 2
2023-11-06 08:26:23,414 INFO  (non-blocking-thread--p2-t1) [org.infinispan.LIFECYCLE] [Context=org.infinispan.PERMISSIONS][Scope=infinispan-0-38584]ISPN100010: Finished rebalance with members [infinispan-0-38584, infinispan-1-63294], topology id 2
2023-11-06 08:26:23,675 INFO  (jgroups-5,infinispan-0-38584) [org.infinispan.CLUSTER] [Context=org.infinispan.ROLES]ISPN100002: Starting rebalance with members [infinispan-0-38584, infinispan-1-63294], phase READ_OLD_WRITE_ALL, topology id 2
2023-11-06 08:26:23,679 INFO  (jgroups-5,infinispan-0-38584) [org.infinispan.LIFECYCLE] [Context=org.infinispan.ROLES][Scope=infinispan-0-38584]ISPN100002: Starting rebalance with members [infinispan-0-38584, infinispan-1-63294], phase READ_OLD_WRITE_ALL, topology id 2
2023-11-06 08:26:23,682 INFO  (non-blocking-thread--p2-t2) [org.infinispan.LIFECYCLE] [Context=org.infinispan.ROLES][Scope=infinispan-0-38584]ISPN100010: Finished rebalance with members [infinispan-0-38584, infinispan-1-63294], topology id 2
2023-11-06 08:26:23,803 INFO  (jgroups-5,infinispan-0-38584) [org.infinispan.CLUSTER] [Context=org.infinispan.PERMISSIONS]ISPN100009: Advancing to rebalance phase READ_ALL_WRITE_ALL, topology id 3
2023-11-06 08:26:23,858 INFO  (jgroups-5,infinispan-0-38584) [org.infinispan.CLUSTER] [Context=org.infinispan.PERMISSIONS]ISPN100009: Advancing to rebalance phase READ_NEW_WRITE_ALL, topology id 4
2023-11-06 08:26:23,874 INFO  (jgroups-6,infinispan-0-38584) [org.infinispan.CLUSTER] [Context=org.infinispan.CLIENT_SERVER_TX_TABLE]ISPN100002: Starting rebalance with members [infinispan-0-38584, infinispan-1-63294], phase READ_OLD_WRITE_ALL, topology id 2
2023-11-06 08:26:23,876 INFO  (jgroups-6,infinispan-0-38584) [org.infinispan.LIFECYCLE] [Context=org.infinispan.CLIENT_SERVER_TX_TABLE][Scope=infinispan-0-38584]ISPN100002: Starting rebalance with members [infinispan-0-38584, infinispan-1-63294], phase READ_OLD_WRITE_ALL, topology id 2
2023-11-06 08:26:23,878 INFO  (non-blocking-thread--p2-t1) [org.infinispan.LIFECYCLE] [Context=org.infinispan.CLIENT_SERVER_TX_TABLE][Scope=infinispan-0-38584]ISPN100010: Finished rebalance with members [infinispan-0-38584, infinispan-1-63294], topology id 2
2023-11-06 08:26:23,933 INFO  (jgroups-6,infinispan-0-38584) [org.infinispan.CLUSTER] [Context=org.infinispan.PERMISSIONS]ISPN100010: Finished rebalance with members [infinispan-0-38584, infinispan-1-63294], topology id 5
2023-11-06 08:26:23,934 INFO  (jgroups-5,infinispan-0-38584) [org.infinispan.CLUSTER] [Context=org.infinispan.ROLES]ISPN100009: Advancing to rebalance phase READ_ALL_WRITE_ALL, topology id 3
2023-11-06 08:26:23,942 INFO  (jgroups-5,infinispan-0-38584) [org.infinispan.CLUSTER] [Context=org.infinispan.ROLES]ISPN100009: Advancing to rebalance phase READ_NEW_WRITE_ALL, topology id 4
2023-11-06 08:26:23,950 INFO  (jgroups-6,infinispan-0-38584) [org.infinispan.CLUSTER] [Context=org.infinispan.ROLES]ISPN100010: Finished rebalance with members [infinispan-0-38584, infinispan-1-63294], topology id 5
2023-11-06 08:26:23,952 INFO  (jgroups-5,infinispan-0-38584) [org.infinispan.CLUSTER] [Context=org.infinispan.CLIENT_SERVER_TX_TABLE]ISPN100009: Advancing to rebalance phase READ_ALL_WRITE_ALL, topology id 3
2023-11-06 08:26:23,958 INFO  (jgroups-6,infinispan-0-38584) [org.infinispan.CLUSTER] [Context=org.infinispan.CLIENT_SERVER_TX_TABLE]ISPN100009: Advancing to rebalance phase READ_NEW_WRITE_ALL, topology id 4
2023-11-06 08:26:24,005 INFO  (jgroups-6,infinispan-0-38584) [org.infinispan.CLUSTER] [Context=org.infinispan.CLIENT_SERVER_TX_TABLE]ISPN100010: Finished rebalance with members [infinispan-0-38584, infinispan-1-63294], topology id 5
2023-11-06 08:26:24,076 INFO  (jgroups-6,infinispan-0-38584) [org.infinispan.CLUSTER] [Context=___protobuf_metadata]ISPN100002: Starting rebalance with members [infinispan-0-38584, infinispan-1-63294], phase READ_OLD_WRITE_ALL, topology id 2
2023-11-06 08:26:24,079 INFO  (jgroups-6,infinispan-0-38584) [org.infinispan.LIFECYCLE] [Context=___protobuf_metadata][Scope=infinispan-0-38584]ISPN100002: Starting rebalance with members [infinispan-0-38584, infinispan-1-63294], phase READ_OLD_WRITE_ALL, topology id 2
2023-11-06 08:26:24,084 INFO  (non-blocking-thread--p2-t2) [org.infinispan.LIFECYCLE] [Context=___protobuf_metadata][Scope=infinispan-0-38584]ISPN100010: Finished rebalance with members [infinispan-0-38584, infinispan-1-63294], topology id 2
2023-11-06 08:26:24,144 INFO  (jgroups-6,infinispan-0-38584) [org.infinispan.CLUSTER] [Context=org.infinispan.CONFIG]ISPN100002: Starting rebalance with members [infinispan-0-38584, infinispan-1-63294], phase READ_OLD_WRITE_ALL, topology id 2
2023-11-06 08:26:24,151 INFO  (jgroups-6,infinispan-0-38584) [org.infinispan.LIFECYCLE] [Context=org.infinispan.CONFIG][Scope=infinispan-0-38584]ISPN100002: Starting rebalance with members [infinispan-0-38584, infinispan-1-63294], phase READ_OLD_WRITE_ALL, topology id 2
2023-11-06 08:26:24,158 INFO  (non-blocking-thread--p2-t1) [org.infinispan.LIFECYCLE] [Context=org.infinispan.CONFIG][Scope=infinispan-0-38584]ISPN100010: Finished rebalance with members [infinispan-0-38584, infinispan-1-63294], topology id 2
2023-11-06 08:26:24,185 INFO  (jgroups-5,infinispan-0-38584) [org.infinispan.CLUSTER] [Context=org.infinispan.CONFIG]ISPN100009: Advancing to rebalance phase READ_ALL_WRITE_ALL, topology id 3
2023-11-06 08:26:24,258 INFO  (jgroups-5,infinispan-0-38584) [org.infinispan.CLUSTER] [Context=org.infinispan.CONFIG]ISPN100009: Advancing to rebalance phase READ_NEW_WRITE_ALL, topology id 4
2023-11-06 08:26:24,266 INFO  (jgroups-5,infinispan-0-38584) [org.infinispan.CLUSTER] [Context=org.infinispan.CONFIG]ISPN100010: Finished rebalance with members [infinispan-0-38584, infinispan-1-63294], topology id 5
2023-11-06 08:26:24,270 INFO  (jgroups-6,infinispan-0-38584) [org.infinispan.CLUSTER] [Context=___protobuf_metadata]ISPN100009: Advancing to rebalance phase READ_ALL_WRITE_ALL, topology id 3
2023-11-06 08:26:24,284 INFO  (jgroups-6,infinispan-0-38584) [org.infinispan.CLUSTER] [Context=___protobuf_metadata]ISPN100009: Advancing to rebalance phase READ_NEW_WRITE_ALL, topology id 4
2023-11-06 08:26:24,296 INFO  (jgroups-6,infinispan-0-38584) [org.infinispan.CLUSTER] [Context=org.infinispan.COUNTER]ISPN100002: Starting rebalance with members [infinispan-0-38584, infinispan-1-63294], phase READ_OLD_WRITE_ALL, topology id 2
2023-11-06 08:26:24,302 INFO  (jgroups-6,infinispan-0-38584) [org.infinispan.LIFECYCLE] [Context=org.infinispan.COUNTER][Scope=infinispan-0-38584]ISPN100002: Starting rebalance with members [infinispan-0-38584, infinispan-1-63294], phase READ_OLD_WRITE_ALL, topology id 2
2023-11-06 08:26:24,309 INFO  (non-blocking-thread--p2-t2) [org.infinispan.LIFECYCLE] [Context=org.infinispan.COUNTER][Scope=infinispan-0-38584]ISPN100010: Finished rebalance with members [infinispan-0-38584, infinispan-1-63294], topology id 2
2023-11-06 08:26:24,332 INFO  (jgroups-6,infinispan-0-38584) [org.infinispan.CLUSTER] [Context=___protobuf_metadata]ISPN100010: Finished rebalance with members [infinispan-0-38584, infinispan-1-63294], topology id 5
2023-11-06 08:26:24,436 INFO  (jgroups-6,infinispan-0-38584) [org.infinispan.CLUSTER] [Context=org.infinispan.COUNTER]ISPN100009: Advancing to rebalance phase READ_ALL_WRITE_ALL, topology id 3
2023-11-06 08:26:24,448 INFO  (jgroups-5,infinispan-0-38584) [org.infinispan.CLUSTER] [Context=org.infinispan.COUNTER]ISPN100009: Advancing to rebalance phase READ_NEW_WRITE_ALL, topology id 4
2023-11-06 08:26:24,476 INFO  (jgroups-6,infinispan-0-38584) [org.infinispan.CLUSTER] [Context=org.infinispan.COUNTER]ISPN100010: Finished rebalance with members [infinispan-0-38584, infinispan-1-63294], topology id 5
2023-11-06 08:26:24,508 INFO  (jgroups-6,infinispan-0-38584) [org.infinispan.CLUSTER] [Context=___script_cache]ISPN100002: Starting rebalance with members [infinispan-0-38584, infinispan-1-63294], phase READ_OLD_WRITE_ALL, topology id 2
2023-11-06 08:26:24,518 INFO  (jgroups-6,infinispan-0-38584) [org.infinispan.LIFECYCLE] [Context=___script_cache][Scope=infinispan-0-38584]ISPN100002: Starting rebalance with members [infinispan-0-38584, infinispan-1-63294], phase READ_OLD_WRITE_ALL, topology id 2
2023-11-06 08:26:24,532 INFO  (non-blocking-thread--p2-t1) [org.infinispan.LIFECYCLE] [Context=___script_cache][Scope=infinispan-0-38584]ISPN100010: Finished rebalance with members [infinispan-0-38584, infinispan-1-63294], topology id 2
2023-11-06 08:26:24,602 INFO  (jgroups-6,infinispan-0-38584) [org.infinispan.CLUSTER] [Context=___script_cache]ISPN100009: Advancing to rebalance phase READ_ALL_WRITE_ALL, topology id 3
2023-11-06 08:26:24,649 INFO  (jgroups-6,infinispan-0-38584) [org.infinispan.CLUSTER] [Context=___script_cache]ISPN100009: Advancing to rebalance phase READ_NEW_WRITE_ALL, topology id 4
2023-11-06 08:26:24,654 INFO  (jgroups-6,infinispan-0-38584) [org.infinispan.CLUSTER] [Context=___script_cache]ISPN100010: Finished rebalance with members [infinispan-0-38584, infinispan-1-63294], topology id 5
2023-11-06 08:26:24,685 INFO  (jgroups-6,infinispan-0-38584) [org.infinispan.CLUSTER] [Context=org.infinispan.LOCKS]ISPN100002: Starting rebalance with members [infinispan-0-38584, infinispan-1-63294], phase READ_OLD_WRITE_ALL, topology id 2
2023-11-06 08:26:24,686 INFO  (jgroups-6,infinispan-0-38584) [org.infinispan.LIFECYCLE] [Context=org.infinispan.LOCKS][Scope=infinispan-0-38584]ISPN100002: Starting rebalance with members [infinispan-0-38584, infinispan-1-63294], phase READ_OLD_WRITE_ALL, topology id 2
2023-11-06 08:26:24,688 INFO  (non-blocking-thread--p2-t1) [org.infinispan.LIFECYCLE] [Context=org.infinispan.LOCKS][Scope=infinispan-0-38584]ISPN100010: Finished rebalance with members [infinispan-0-38584, infinispan-1-63294], topology id 2
2023-11-06 08:26:24,752 INFO  (jgroups-6,infinispan-0-38584) [org.infinispan.CLUSTER] [Context=org.infinispan.LOCKS]ISPN100009: Advancing to rebalance phase READ_ALL_WRITE_ALL, topology id 3
2023-11-06 08:26:24,760 INFO  (jgroups-6,infinispan-0-38584) [org.infinispan.CLUSTER] [Context=org.infinispan.LOCKS]ISPN100009: Advancing to rebalance phase READ_NEW_WRITE_ALL, topology id 4
2023-11-06 08:26:24,765 INFO  (jgroups-6,infinispan-0-38584) [org.infinispan.CLUSTER] [Context=org.infinispan.LOCKS]ISPN100010: Finished rebalance with members [infinispan-0-38584, infinispan-1-63294], topology id 5
2023-11-06 08:26:25,266 INFO  (jgroups-6,infinispan-0-38584) [org.infinispan.CLUSTER] [Context=___hotRodTopologyCache_hotrod-default]ISPN100002: Starting rebalance with members [infinispan-0-38584, infinispan-1-63294], phase READ_OLD_WRITE_ALL, topology id 2
2023-11-06 08:26:25,268 INFO  (jgroups-6,infinispan-0-38584) [org.infinispan.LIFECYCLE] [Context=___hotRodTopologyCache_hotrod-default][Scope=infinispan-0-38584]ISPN100002: Starting rebalance with members [infinispan-0-38584, infinispan-1-63294], phase READ_OLD_WRITE_ALL, topology id 2
2023-11-06 08:26:25,270 INFO  (non-blocking-thread--p2-t2) [org.infinispan.LIFECYCLE] [Context=___hotRodTopologyCache_hotrod-default][Scope=infinispan-0-38584]ISPN100010: Finished rebalance with members [infinispan-0-38584, infinispan-1-63294], topology id 2
2023-11-06 08:26:25,379 INFO  (jgroups-6,infinispan-0-38584) [org.infinispan.CLUSTER] [Context=___hotRodTopologyCache_hotrod-default]ISPN100009: Advancing to rebalance phase READ_ALL_WRITE_ALL, topology id 3
2023-11-06 08:26:25,387 INFO  (jgroups-5,infinispan-0-38584) [org.infinispan.CLUSTER] [Context=___hotRodTopologyCache_hotrod-default]ISPN100009: Advancing to rebalance phase READ_NEW_WRITE_ALL, topology id 4
2023-11-06 08:26:25,393 INFO  (jgroups-5,infinispan-0-38584) [org.infinispan.CLUSTER] [Context=___hotRodTopologyCache_hotrod-default]ISPN100010: Finished rebalance with members [infinispan-0-38584, infinispan-1-63294], topology id 5
oburd commented 8 months ago

And here are the errors from the browser:

oburd commented 8 months ago

@ryanemerson Hello! Can you tell me what this error is? Thank you.

tristantarrant commented 8 months ago

All those 403 responses indicate failed authentication.
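One way to narrow this down (a sketch only; the hostname and credentials below are placeholders, not values from this deployment) is to query the server's REST API directly and compare status codes with and without credentials:

```shell
# Placeholder host; replace with your Route/Ingress hostname.
HOST=https://dummy

# Without credentials: a 401 here means authentication is enabled and enforced.
curl -s -o /dev/null -w '%{http_code}\n' "$HOST/rest/v2/caches"

# With credentials: a 200 means the login itself is fine, while a 403
# suggests the user authenticated but lacks a role (an authorization issue).
curl -s -o /dev/null -w '%{http_code}\n' -u admin:password "$HOST/rest/v2/caches"
```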

oburd commented 8 months ago

It happens after authentication. I can show you, @tristantarrant, if you need it.

oburd commented 8 months ago

@tristantarrant error2 I authenticated with the default user developer; same problems.

oburd commented 7 months ago

After I log in to the console, I see the following error in the network inspector:

https://dns.name/console/fonts/RedHatText-Regular.woff
Request Method:
GET
Status Code:
403 Forbidden
Remote Address:
10.2.140.97:443
Referrer Policy:
strict-origin-when-cross-origin
Content-Length:
0
Date:
Mon, 06 Nov 2023 11:29:23 GMT
:authority:
dns.name
:method:
GET
:path:
/console/fonts/RedHatText-Regular.woff
:scheme:
https
Accept:
*/*
Accept-Encoding:
gzip, deflate, br
Accept-Language:
en-US,en;q=0.9
Cookie:
locale=en-US
Origin:
https://dns.name
Referer:
https://dns.name/console/app.css
Sec-Ch-Ua:
"Google Chrome";v="119", "Chromium";v="119", "Not?A_Brand";v="24"
Sec-Ch-Ua-Mobile:
?0
Sec-Ch-Ua-Platform:
"Windows"
Sec-Fetch-Dest:
font
Sec-Fetch-Mode:
cors
Sec-Fetch-Site:
same-origin
User-Agent:
Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/119.0.0.0 Safari/537.36
oburd commented 7 months ago

As I see it, the problem is with the console. Any suggestions? Because this is quite bad. One question: should I change something here?


  infinispan:
    cacheContainer:
      # [USER] Add cache, template, and counter configuration.
      name: default
      # [USER] Specify `security: null` to disable security authorization.
      security:
        authorization: {}
ryanemerson commented 7 months ago

@oburd Are you providing an authentication secret via deploy.security.secretName or are you using the users that are auto-generated?

oburd commented 7 months ago

@ryanemerson Both: 1) one time with the defaults; 2) I added this to deploy.security.batch: "user create ${local.infinispan.infinispan_admin_user} -p ${local.infinispan.infinispan_admin_password} -g admin". The issue occurs with both variants.
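Expressed in the values file, variant 2 looks roughly like this (the `${local...}` placeholders are Terraform-style interpolations on my side, not chart syntax):

```yaml
deploy:
  security:
    # Batch of CLI commands run at first start; creates a user in the admin group.
    batch: "user create ${local.infinispan.infinispan_admin_user} -p ${local.infinispan.infinispan_admin_password} -g admin"
```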

ryanemerson commented 7 months ago

Hmm, you're correctly adding the -g admin in step 2, so I would expect that to work.

Can you try disabling authorization by setting infinispan.cacheContainer.security.authorization: null and see if the problem persists?
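In the values file, that setting sits here (a sketch of the relevant fragment only):

```yaml
deploy:
  infinispan:
    cacheContainer:
      name: default
      security:
        # null disables authorization checks; authentication is unaffected.
        authorization: null
```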

oburd commented 7 months ago

Hmm, you're correctly adding the -g admin in step 2, so I would expect that to work.

Can you try disabling authorization by setting infinispan.cacheContainer.security.authorization: null and see if the problem persists?

Okay, I will try, but it's very strange to me; I did it exactly as the instructions describe :) I also tried to use the Infinispan Docker image with tag 12.1, but received this error: Error: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "--cluster-name=infinispan": executable file not found in $PATH: unknown :D

ryanemerson commented 7 months ago

I tried to use infinispan docker image with tag 12.1, but received error

The helm chart is only tested and supported with Infinispan 14.0.x

oburd commented 7 months ago

I tried to use infinispan docker image with tag 12.1, but received error

The helm chart is only tested and supported with Infinispan 14.0.x

Okay, thank you. I'll be back with the result after redeploying.

oburd commented 7 months ago

@ryanemerson So here are the results: 1) with infinispan.cacheContainer.security.authorization: null nothing changed; the error is the same. 2)

infinispan:
  cacheContainer:
    # [USER] Add cache, template, and counter configuration.
    name: default
    # [USER] Specify `security: null` to disable security authorization.
    security: null

It's a little better, but the errors are the same. I thought I wouldn't need authentication, but I still need to enter a login and password. Here is a screenshot of the error.

ryanemerson commented 7 months ago

It's a little better, but the errors are the same. I thought I wouldn't need authentication, but I still need to enter a login and password.

Setting infinispan.cacheContainer.security.authorization: null disables authorization, not authentication, which is why you still need to login.

To disable authentication entirely you can comment the following lines https://github.com/infinispan/infinispan-helm-charts/blob/main/values.yaml#L136-L141

oburd commented 7 months ago

It's a little better, but the errors are the same. I thought I wouldn't need authentication, but I still need to enter a login and password.

Setting infinispan.cacheContainer.security.authorization: null disables authorization, not authentication, which is why you still need to login.

To disable authentication entirely you can comment the following lines https://github.com/infinispan/infinispan-helm-charts/blob/main/values.yaml#L136-L141

@ryanemerson Okay, I understand, but the error I described above is still present.

Question: what does this line do? infinispan.cacheContainer.security.authorization: {} I couldn't find anything about it.

ryanemerson commented 7 months ago

@ryanemerson Okay, I understand, but the error I described above is still present.

If you're still getting 403 errors with both Authorization and Authentication disabled, then I think this is an issue with your Ingress configuration.

What does this line do? infinispan.cacheContainer.security.authorization: {} I couldn't find anything about it.

This is the Authorization configuration for Infinispan; by providing an empty object {} you're configuring the default behaviour. You can find more information about this in the Infinispan Security Guide.
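To summarise the variants discussed in this thread as they appear under `cacheContainer` in the values file:

```yaml
# 1) Empty object: authorization enabled with the default behaviour.
security:
  authorization: {}

# 2) Authorization disabled, authentication still required:
security:
  authorization: null

# 3) Security section removed entirely:
security: null
```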

oburd commented 7 months ago

@ryanemerson Okay, I understand, but the error I described above is still present.

If you're still getting 403 errors with both Authorization and Authentication disabled, then I think this is an issue with your Ingress configuration.

I did what you wrote (infinispan.cacheContainer.security.authorization: null) and nothing changed. I also turned it off with:

infinispan:
  cacheContainer:
    # [USER] Add cache, template, and counter configuration.
    name: default
    # [USER] Specify `security: null` to disable security authorization.
    security: null

and still nothing changed.

About the Ingress: I took the default part from your chart and added only the annotations needed to use an AWS ALB. If the problem were with the ALB, I wouldn't be able to reach the console page at all.

I use the same Ingress configuration for other services and they work fine. Here are my annotations:

annotations:
      - key: kubernetes.io/ingress.class
        value: alb
      - key: alb.ingress.kubernetes.io/group.name
        value: dummy
      - key: alb.ingress.kubernetes.io/group.order
        value: dummy
      - key: alb.ingress.kubernetes.io/scheme
        value: internal
      - key: alb.ingress.kubernetes.io/target-type
        value: ip
      - key: alb.ingress.kubernetes.io/listen-ports
        value: '[{"HTTP": 80}, {"HTTPS":443}]'
      - key: alb.ingress.kubernetes.io/certificate-arn
        value: dummy
      - key: alb.ingress.kubernetes.io/ssl-redirect
        value: '443'
      - key: alb.ingress.kubernetes.io/healthcheck-path
        value: /rest/v2/cache-managers/default/health/status
ryanemerson commented 7 months ago

To disable authentication entirely you can comment the following lines https://github.com/infinispan/infinispan-helm-charts/blob/main/values.yaml#L136-L141

Did you do this as well?

oburd commented 7 months ago

@ryanemerson Good news: I set infinispan.cacheContainer.security.authorization: null and turned off

          #propertiesRealm:
            #groupProperties:
              #path: groups.properties
            #groupsAttribute: Roles
            #userProperties:
              #path: users.properties

And it works much better. But it's not good that we have no authorization and authentication; it works, but some 403 errors are still present.

oburd commented 7 months ago

@ryanemerson There's just one thing I don't understand: why doesn't the default configuration work correctly? As I understand it, authentication passes, but authorization doesn't work well. Maybe I need to add something to my file?

# Default values for infinispan-helm-charts.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

images:
  # [USER] The container images for server pods.
  server: quay.io/infinispan/server:14.0
  initContainer: registry.access.redhat.com/ubi8-micro

deploy:
  # [USER] Specify the number of nodes in the cluster.
  replicas: 2

  clusterDomain: cluster.local

  container:
    extraJvmOpts: ""
    libraries: ""
    # [USER] Define custom environment variables using standard K8s format
    # env:
    #  - name: STANDARD_KEY
    #    value: standard value
    #  - name: CONFIG_MAP_KEY
    #    valueFrom:
    #      configMapKeyRef:
    #        name: special-config
    #        key: special.how
    #  - name: SECRET_KEY
    #    valueFrom:
    #      secretKeyRef:
    #        name: special-secret
    #        key: special.how
    env:
    storage:
      size: 1Gi
      storageClassName: ""
      # [USER] Set `ephemeral: true` to delete all persisted data when clusters shut down or restart.
      ephemeral: true
    resources:
      # [USER] Specify the CPU limit and the memory limit for each pod.
      limits:
        cpu: 1000m
        memory: 1024Mi
      # [USER] Specify the maximum CPU requests and the maximum memory requests for each pod.
      requests:
        cpu: 1000m
        memory: 1024Mi

  security:
    secretName: ""
    batch: ""

  expose:
    # [USER] Specify `type: ""` to disable network access to clusters.
    type: Route
    nodePort: 0
    host: dummy
    annotations:
      - key: kubernetes.io/ingress.class
        value: alb
      - key: alb.ingress.kubernetes.io/group.name
        value: dummy
      - key: alb.ingress.kubernetes.io/group.order
        value: '17'
      - key: alb.ingress.kubernetes.io/scheme
        value: internal
      - key: alb.ingress.kubernetes.io/target-type
        value: ip
      - key: alb.ingress.kubernetes.io/listen-ports
        value: '[{"HTTP": 80}, {"HTTPS":443}]'
      - key: alb.ingress.kubernetes.io/certificate-arn
        value: dummy
      - key: alb.ingress.kubernetes.io/ssl-redirect
        value: '443'
      - key: alb.ingress.kubernetes.io/healthcheck-path
        value: /rest/v2/cache-managers/default/health/status

  monitoring:
    enabled: false

  logging:
    categories:
      # [USER] Specify the FQN of a package from which you want to collect logs.
      - category: com.arjuna
        # [USER] Specify the level of log messages.
        level: warn
      # No need to warn about not being able to TLS/SSL handshake
      - category: io.netty.handler.ssl.ApplicationProtocolNegotiationHandler
        level: error

  makeDataDirWritable: false

  nameOverride: ""

  resourceLabels: []

  podLabels:
    - key: microservice
      value: infinispan

  svcLabels: []

  tolerations: []

  nodeAffinity: {}

  nodeSelector: {}

  infinispan:
    cacheContainer:
      # [USER] Add cache, template, and counter configuration.
      name: default
      # [USER] Specify `security: null` to disable security authorization.
      security:
        authorization: {}
      transport:
        cluster: ${infinispan.cluster.name:cluster}
        node-name: ${infinispan.node.name:}
        stack: kubernetes
    server:
      endpoints:
      # [USER] Hot Rod and REST endpoints.
      - securityRealm: default
        socketBinding: default
        connectors:
          rest:
            restConnector:
          hotrod:
            hotrodConnector:
          # [MEMCACHED] Uncomment to enable Memcached endpoint
          # memcached:
          #   memcachedConnector:
          #     socketBinding: memcached
      # [METRICS] Metrics endpoint for cluster monitoring capabilities.
      - connectors:
          rest:
            restConnector:
              authentication:
                mechanisms: BASIC
        securityRealm: metrics
        socketBinding: metrics
      interfaces:
      - inetAddress:
          value: ${infinispan.bind.address:127.0.0.1}
        name: public
      security:
        credentialStores:
        - clearTextCredential:
            clearText: secret
          name: credentials
          path: credentials.pfx
        securityRealms:
        # [USER] Security realm for the Hot Rod and REST endpoints.
        - name: default
          # [USER] Comment or remove this properties realm to disable authentication.
          propertiesRealm:
            groupProperties:
              path: groups.properties
            groupsAttribute: Roles
            userProperties:
              path: users.properties
          # [METRICS] Security realm for the metrics endpoint.
        - name: metrics
          propertiesRealm:
            groupProperties:
              path: metrics-groups.properties
              relativeTo: infinispan.server.config.path
            groupsAttribute: Roles
            userProperties:
              path: metrics-users.properties
              relativeTo: infinispan.server.config.path
      socketBindings:
        defaultInterface: public
        portOffset: ${infinispan.socket.binding.port-offset:0}
        socketBinding:
          # [USER] Socket binding for the Hot Rod and REST endpoints.
        - name: default
          port: 11222
          # [METRICS] Socket binding for the metrics endpoint.
        - name: metrics
          port: 11223
          # [MEMCACHED] Uncomment to enable Memcached endpoint
        # - name: memcached
        #   port: 11221
oburd commented 7 months ago

@ryanemerson Hello! There are my two posts above, can you please check them? Thank you.

ryanemerson commented 7 months ago

We think the issue might be caused by DIGEST authentication. If the load balancer is using a round-robin policy and sticky sessions are not enabled, then the challenge/response exchange of the DIGEST protocol will be sent to different Infinispan pods. There are two workarounds for this:

  1. Enable sticky sessions on the loadbalancer with the following annotation: alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true
  2. Disable the DIGEST protocol on the Infinispan REST server. To do this, update your values to:
    server:
      endpoints:
      # [USER] Hot Rod and REST endpoints.
      - securityRealm: default
        socketBinding: default
        connectors:
          rest:
            restConnector:
              authentication:
                mechanisms: BASIC
          hotrod:
            hotrodConnector:
          # [MEMCACHED] Uncomment to enable Memcached endpoint
          # memcached:
          #   memcachedConnector:
          #     socketBinding: memcached
      # [METRICS] Metrics endpoint for cluster monitoring capabilities.
      - connectors:
          rest:
            restConnector:
              authentication:
                mechanisms: BASIC
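If you go with option 1 instead, note that this chart expects annotations as a list of key/value pairs under `deploy.expose.annotations`, so the sticky-session annotation would be added roughly like this (a sketch, to be merged with the existing annotations):

```yaml
deploy:
  expose:
    annotations:
      # Enable ALB sticky sessions so DIGEST challenge/response
      # pairs reach the same Infinispan pod
      - key: alb.ingress.kubernetes.io/target-group-attributes
        value: stickiness.enabled=true
```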
oburd commented 7 months ago

@ryanemerson Nice, thank you for the help. I will check it and come back with results.

oburd commented 7 months ago

@ryanemerson
Hello! Thank you for the help. It works with BASIC auth.