
Issue with starting kafka along with istio sidecar injection #6144

Closed: avanichy25 closed this issue 3 years ago

avanichy25 commented 3 years ago

Hello, I am trying to inject the Istio sidecar into my services, but Kafka deployed with the Bitnami image keeps restarting with the following error:

Error:

[2021-04-19 02:32:04,257] INFO Unable to read additional data from server sessionid 0x0, likely server has closed socket, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2021-04-19 02:32:05,633] INFO [ZooKeeperClient Kafka server] Closing. (kafka.zookeeper.ZooKeeperClient)
[2021-04-19 02:32:06,332] INFO Opening socket connection to server kafka-zookeeper/10.0.220.69:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2021-04-19 02:32:06,333] INFO Socket connection established, initiating session, client: /10.244.9.77:52870, server: kafka-zookeeper/10.0.220.69:2181 (org.apache.zookeeper.ClientCnxn)
[2021-04-19 02:32:06,439] INFO Session: 0x0 closed (org.apache.zookeeper.ZooKeeper)
[2021-04-19 02:32:06,439] INFO EventThread shut down for session: 0x0 (org.apache.zookeeper.ClientCnxn)
[2021-04-19 02:32:06,441] INFO [ZooKeeperClient Kafka server] Closed. (kafka.zookeeper.ZooKeeperClient)
[2021-04-19 02:32:06,444] ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
kafka.zookeeper.ZooKeeperClientTimeoutException: Timed out waiting for connection while in state: CONNECTING
        at kafka.zookeeper.ZooKeeperClient.$anonfun$waitUntilConnected$3(ZooKeeperClient.scala:262)
        at kafka.zookeeper.ZooKeeperClient.waitUntilConnected(ZooKeeperClient.scala:258)
        at kafka.zookeeper.ZooKeeperClient.<init>(ZooKeeperClient.scala:119)
        at kafka.zk.KafkaZkClient$.apply(KafkaZkClient.scala:1863)
        at kafka.server.KafkaServer.createZkClient$1(KafkaServer.scala:378)
        at kafka.server.KafkaServer.initZkClient(KafkaServer.scala:403)
        at kafka.server.KafkaServer.startup(KafkaServer.scala:210)
        at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:44)
        at kafka.Kafka$.main(Kafka.scala:82)
        at kafka.Kafka.main(Kafka.scala)

I am using the Istio charts (istio-1.9.2). How can I deploy Kafka with the Istio sidecar? Please provide some information.

Thanks !

alvneiayu commented 3 years ago

hi @avanichy25

Sorry, our products do not officially support Istio. We are trying to adapt our products to work properly with Istio, but right now we are not testing it.

Sorry again, and I hope this helps.

Thanks

Álvaro

avanichy25 commented 3 years ago

Hi

Sure. Can you provide me some information so I can do testing on my side?

Thanks Avani

alvneiayu commented 3 years ago

hi @avanichy25

Could you share with me the values that you are using, please?

Thanks a lot

Álvaro

avanichy25 commented 3 years ago

Hi, I am using TLS certs for client security and also a LoadBalancer. I have added the values file below:

## Global Docker image parameters
## Please, note that this will override the image parameters, including dependencies, configured to use the global value
## Current available global Docker image parameters: imageRegistry and imagePullSecrets
##
global:
  imageRegistry: docker.io
  imagePullSecrets:
    - xxxx
#   storageClass: myStorageClass

## Bitnami Kafka image version
## ref: https://hub.docker.com/r/bitnami/kafka/tags/
##
image:
  registry: docker.io
  repository: bitnami/kafka
  tag: 2.5.0
  ## Specify an imagePullPolicy
  ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
  ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
  ##
  pullPolicy: IfNotPresent
  ## Optionally specify an array of imagePullSecrets (secrets must be manually created in the namespace)
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  ## Example:
  ## pullSecrets:
  ##   - myRegistryKeySecretName
  ##
  pullSecrets: []

  ## Set to true if you would like to see extra information on logs
  ##
  debug: false

## String to partially override kafka.fullname template (will maintain the release name)
##
# nameOverride:

## String to fully override kafka.fullname template
##
# fullnameOverride:

## Kubernetes Cluster Domain
##
clusterDomain: cluster.local

## Add labels to all the deployed resources
##
commonLabels: {}

## Add annotations to all the deployed resources
##
commonAnnotations: {}

## Kafka Configuration
## Specify content for server.properties
## The server.properties is auto-generated based on other parameters when this parameter is not specified
##
## Example:
## config: |-
##   broker.id=-1
##   listeners=PLAINTEXT://:9092
##   advertised.listeners=PLAINTEXT://KAFKA_IP:9092
##   num.network.threads=3
##   num.io.threads=8
##   socket.send.buffer.bytes=102400
##   socket.receive.buffer.bytes=102400
##   socket.request.max.bytes=104857600
##   log.dirs=/bitnami/kafka/data
##   num.partitions=1
##   num.recovery.threads.per.data.dir=1
##   offsets.topic.replication.factor=1
##   transaction.state.log.replication.factor=1
##   transaction.state.log.min.isr=1
##   log.flush.interval.messages=10000
##   log.flush.interval.ms=1000
##   log.retention.hours=168
##   log.retention.bytes=1073741824
##   log.segment.bytes=1073741824
##   log.retention.check.interval.ms=300000
##   zookeeper.connect=ZOOKEEPER_SERVICE_NAME
##   zookeeper.connection.timeout.ms=6000
##   group.initial.rebalance.delay.ms=0
##
# config:

## ConfigMap with Kafka Configuration
## NOTE: This will override config
##
# existingConfigmap:

## Kafka Log4J Configuration
## An optional log4j.properties file to overwrite the default of the Kafka brokers.
## See an example log4j.properties at:
## https://github.com/apache/kafka/blob/trunk/config/log4j.properties
##
# log4j:

## Kafka Log4j ConfigMap
## The name of an existing ConfigMap containing a log4j.properties file.
## NOTE: this will override log4j.
##
# existingLog4jConfigMap:

## Kafka's Java Heap size
##
heapOpts: -Xmx16g -Xms16g

## Switch to enable topic deletion or not.
##
deleteTopicEnable: false

## Switch to enable auto creation of topics.
## Enabling auto creation of topics is not recommended for production or similar environments.
##
autoCreateTopicsEnable: true

## The number of messages to accept before forcing a flush of data to disk.
##
logFlushIntervalMessages: 10000

## The maximum amount of time a message can sit in a log before we force a flush.
##
logFlushIntervalMs: 1000

## A size-based retention policy for logs.
##
logRetentionBytes: _500000000

## The interval at which log segments are checked to see if they can be deleted.
##
logRetentionCheckIntervalMs: 300000

## The minimum age of a log file to be eligible for deletion due to age.
##
logRetentionHours: 168

## The maximum size of a log segment file. When this size is reached a new log segment will be created.
##
logSegmentBytes: _250000000

## A comma separated list of directories under which to store log files.
##
logsDirs: /bitnami/kafka/data

## The largest record batch size allowed by Kafka
##
maxMessageBytes: _1000012

## Default replication factors for automatically created topics
##
defaultReplicationFactor: 3

## The replication factor for the offsets topic
##
offsetsTopicReplicationFactor: 3

## The replication factor for the transaction topic
##
transactionStateLogReplicationFactor: 3

## Overridden min.insync.replicas config for the transaction topic
##

transactionStateLogMinIsr: 3

## The number of threads doing disk I/O.
##
numIoThreads: 8

## The number of threads handling network requests.
##
numNetworkThreads: 3

## The default number of log partitions per topic.
##
numPartitions: 1

## The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
##
numRecoveryThreadsPerDataDir: 1

## The receive buffer (SO_RCVBUF) used by the socket server.
##
socketReceiveBufferBytes: 102400

## The maximum size of a request that the socket server will accept (protection against OOM).
##
socketRequestMaxBytes: _1048576000

## The send buffer (SO_SNDBUF) used by the socket server.
##
socketSendBufferBytes: 102400

## Timeout in ms for connecting to zookeeper.
##
zookeeperConnectionTimeoutMs: 6000

## Command and args for running the container. Use array form
##
command:
  - /scripts/setup.sh
args:

## All the parameters from the configuration file can be overwritten by using environment variables with this format: KAFKA_CFG_{KEY}
## ref: https://github.com/bitnami/bitnami-docker-kafka#configuration
## Example:
## extraEnvVars:
##   - name: KAFKA_CFG_BACKGROUND_THREADS
##     value: "10"
##
extraEnvVars: []

## extraVolumes and extraVolumeMounts allows you to mount other volumes
## Examples:
# extraVolumes:
#   - name: kafka-jaas
#     secret:
#       secretName: kafka-jaas
# extraVolumeMounts:
#   - name: kafka-jaas
#     mountPath: /bitnami/kafka/config/kafka_jaas.conf
#     subPath: kafka_jaas.conf
extraVolumes:
  - name: kafka-app-logs
    emptyDir: {}
  - name: fluentbit-conf
    configMap:
      name: fluentbit-conf
      defaultMode: 420
  # - name: scripts
  #   configMap:
  #     name: kafka-scripts
  #     defaultMode: 0755
extraVolumeMounts: {}
  # - name: scripts
  #   mountPath: /scripts
  # - name: kafka-fluent-bit-conf
  #   mountPath: /fluent-bit/etc

## Extra objects to deploy (value evaluated as a template)
##
extraDeploy: []

## Authentication parameters
## https://github.com/bitnami/bitnami-docker-kafka#security
##
auth:
  ## Authentication protocol for client and inter-broker communications
  ## Supported values: 'plaintext', 'tls', 'mtls', 'sasl' and 'sasl_tls'
  ## This table shows the security provided on each protocol:
  ## | Method    | Authentication                | Encryption via TLS |
  ## | plaintext | None                          | No                 |
  ## | tls       | None                          | Yes                |
  ## | mtls      | Yes (two-way authentication)  | Yes                |
  ## | sasl      | Yes (via SASL)                | No                 |
  ## | sasl_tls  | Yes (via SASL)                | Yes                |
  ##
  clientProtocol: tls
  interBrokerProtocol: plaintext

  ## Allowed SASL mechanisms when clientProtocol or interBrokerProtocol are using either sasl or sasl_tls
  ##
  saslMechanisms: plain,scram-sha-256,scram-sha-512
  ## SASL mechanism for inter broker communication
  ##
  saslInterBrokerMechanism: plain
  ## Name of the existing secret containing the truststore and
  ## one keystore per Kafka broker you have in the Kafka cluster.
  ## MANDATORY when 'tls', 'mtls', or 'sasl_tls' authentication protocols are used.
  ## Create this secret following the steps below:
  ## 1) Generate your truststore and keystore files. Helpful script: https://raw.githubusercontent.com/confluentinc/confluent-platform-security-tools/master/kafka-generate-ssl.sh
  ## 2) Rename your truststore to `kafka.truststore.jks`.
  ## 3) Rename your keystores to `kafka-X.keystore.jks` where X is the ID of each Kafka broker.
  ## 4) Run the command below where SECRET_NAME is the name of the secret you want to create:
  ##       kubectl create secret generic SECRET_NAME --from-file=./kafka.truststore.jks --from-file=./kafka-0.keystore.jks --from-file=./kafka-1.keystore.jks ...
  ## Alternatively, you can put your JKS files under the files/jks directory
  ##
  jksSecret: kafka-jks

  ## Password to access the JKS files when they are password-protected.
  ##
  jksPassword: xxxxx

  ## The endpoint identification algorithm used by clients to validate server host name.
  ## Disable server host name verification by setting it to an empty string
  ## See: https://docs.confluent.io/current/kafka/authentication_ssl.html#optional-settings
  ##
  tlsEndpointIdentificationAlgorithm: ""

  ## JAAS configuration for SASL authentication
  ## MANDATORY when method is 'sasl', or 'sasl_tls'
  ##
  jaas:
    ## Kafka client user list
    ##
    ## clientUsers:
    ##   - user1
    ##   - user2
    ##
    clientUsers:
      - user

    ## Kafka client passwords. This is mandatory if more than one user is specified in clientUsers.
    ##
    ## clientPasswords:
    ##   - password1
    ##   - password2
    ##
    clientPasswords: []

    ## Kafka inter broker communication user
    ##
    interBrokerUser: admin

    ## Kafka inter broker communication password
    ##
    interBrokerPassword: ""

    ## Kafka Zookeeper user
    ##
    # zookeeperUser:

    ## Kafka Zookeeper password
    ##
    # zookeeperPassword:

    ## Name of the existing secret containing credentials for clientUsers, interBrokerUser and zookeeperUser.
    ## Create this secret running the command below where SECRET_NAME is the name of the secret you want to create:
    ##       kubectl create secret generic SECRET_NAME --from-literal=client-passwords=CLIENT_PASSWORD1,CLIENT_PASSWORD2 --from-literal=inter-broker-password=INTER_BROKER_PASSWORD --from-literal=zookeeper-password=ZOOKEEPER_PASSWORD
    ##
    # existingSecret:

## The address(es) the socket server listens on.
## When it's set to an empty array, the listeners will be configured
## based on the authentication protocols (auth.clientProtocol and auth.interBrokerProtocol parameters)
##
listeners: []

## The address(es) (hostname:port) the brokers will advertise to producers and consumers.
## When it's set to an empty array, the advertised listeners will be configured
## based on the authentication protocols (auth.clientProtocol and auth.interBrokerProtocol parameters)
##
advertisedListeners: []

## The listener->protocol mapping
## When it's nil, the listeners will be configured
## based on the authentication protocols (auth.clientProtocol and auth.interBrokerProtocol parameters)
##
# listenerSecurityProtocolMap: PLAINTEXT:PLAINTEXT,SSL:SSL

## Allow to use the PLAINTEXT listener.
##
allowPlaintextListener: true

## Name of listener used for communication between brokers.
##
interBrokerListenerName: INTERNAL

## Number of Kafka brokers to deploy
##
replicaCount: 3

## StrategyType; can be set to RollingUpdate or OnDelete (RollingUpdate by default).
## ref: https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#updating-statefulsets
##
updateStrategy: RollingUpdate

## Partition update strategy
## https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#partitions
##
# rollingUpdatePartition:

## Pod labels. Evaluated as a template
## Ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
##
podLabels: {}

## Pod annotations. Evaluated as a template
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
##
podAnnotations: {}
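## NOTE (illustrative, not part of the original values): when Istio sidecar injection
## is enabled per pod rather than namespace-wide, the injection annotation would
## typically be set here, e.g.:
## podAnnotations:
##   sidecar.istio.io/inject: "true"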

## Name of the priority class to be used by kafka pods, priority class needs to be created beforehand
## Ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
##
priorityClassName: ""

## Affinity for pod assignment. Evaluated as a template
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
##
affinity: {}

## Node labels for pod assignment. Evaluated as a template
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector: {}

## Tolerations for pod assignment. Evaluated as a template
## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []

## Kafka pods' Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod
##
podSecurityContext:
  fsGroup: 1001
  runAsUser: 1001

## Kafka containers' Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
## Example:
##   containerSecurityContext:
##     capabilities:
##       drop: ["NET_RAW"]
##     readOnlyRootFilesystem: true
##
containerSecurityContext: {}

## Kafka containers' resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  limits:
    cpu: 300m
    memory: 32Gi
  requests:
    cpu: 300m
    memory: 32Gi

## Kafka containers' liveness and readiness probes. Evaluated as a template.
## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
##
livenessProbe:
  enabled: true
  tcpSocket:
    port: kafka-plain
  initialDelaySeconds: 20
  timeoutSeconds: 5
  failureThreshold: 5
  periodSeconds: 10
  # successThreshold: 1
readinessProbe:
  enabled: true
  exec:
    command:
      - /bin/bash
      - /scripts/readiness.sh
  initialDelaySeconds: 20
  failureThreshold: 5
  timeoutSeconds: 10
  periodSeconds: 10
  # successThreshold: 1

## Custom liveness/readiness probes that will override the default ones
##
customLivenessProbe: {}
customReadinessProbe: {}

## Pod Disruption Budget configuration
## The PDB will only be created if replicaCount is greater than 1
## ref: https://kubernetes.io/docs/concepts/workloads/pods/disruptions
##
pdb:
  create: true
  ## Min number of pods that must still be available after the eviction
  ##
  # minAvailable: 1
  ## Max number of pods that can be unavailable after the eviction
  ##
  maxUnavailable: 1

## Add sidecars to the pod.
## Example:
## sidecars:
##   - name: your-image-name
##     image: your-image
##     imagePullPolicy: Always
##     ports:
##       - name: portname
##         containerPort: 1234
##
appconfigmap:
  configmapnameeslog: elasticsearch
  elasticsearchhost: elasticsearch
  elasticsearchport: 9200
  elasticsearchurl: http://elasticsearch:9200

fluentbitconfigmap:
  name: kafka-fluent-bit-conf
sidecars:
  - name: fluent-bit-sidecar
    image: docker.io/fluent/fluent-bit:1.4.4-debug
    imagePullPolicy: Always
    envFrom:
    - configMapRef:
        name: elasticsearch
    - configMapRef:
        name: fluentbit-config
    resources:
      requests:
        cpu: 5m
        memory: 10Mi
      limits:
        cpu: 50m
        memory: 60Mi
    volumeMounts:
    - name: kafka-app-logs
      readOnly: true
      mountPath: /mnt/logs
    - name: fluentbit-config
      mountPath: /fluent-bit/etc
## Service parameters
##
service:
  ## Service type
  ##
  type: ClusterIP
  ## Kafka port for client connections
  ##
  port: 9092
  ## Kafka port for inter-broker connections
  ##
  internalPort: 9093
  ## Kafka port for external connections
  ##
  externalPort: 9094
  ## Specify the nodePort value for the LoadBalancer and NodePort service types.
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
  ##
  nodePorts:
    client: ""
    external: ""
  ## Set the LoadBalancer service type to internal only.
  ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
  ##
  # loadBalancerIP:
  ## Load Balancer sources
  ## ref: https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
  ## Example:
  ## loadBalancerSourceRanges:
  ## - 10.10.10.0/24
  ##
  loadBalancerSourceRanges: []
  ## Provide any additional annotations which may be required. Evaluated as a template
  ##
  annotations: {}

## External Access to Kafka brokers configuration
##
externalAccess:
  ## Enable Kubernetes external cluster access to Kafka brokers
  ##
  enabled: true

  ## External IPs auto-discovery configuration
  ## An init container is used to auto-detect LB IPs or node ports by querying the K8s API
  ## Note: RBAC might be required
  ##
  autoDiscovery:
    ## Enable external IP/ports auto-discovery
    ##
    enabled: true
    ## Bitnami Kubectl image
    ## ref: https://hub.docker.com/r/bitnami/kubectl/tags/
    ##
    image:
      registry: docker.io
      repository: bitnami/kubectl
      tag: 1.17.9-debian-10-r0
      ## Specify an imagePullPolicy
      ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
      ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
      ##
      pullPolicy: IfNotPresent
      ## Optionally specify an array of imagePullSecrets (secrets must be manually created in the namespace)
      ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
      ## Example:
      ## pullSecrets:
      ##   - myRegistryKeySecretName
      ##
      pullSecrets: []
    ## Init Container resource requests and limits
    ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
    ##
    resources:
      # We usually recommend not to specify default resources and to leave this as a conscious
      # choice for the user. This also increases chances charts run on environments with little
      # resources, such as Minikube. If you do want to specify resources, uncomment the following
      # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
      limits: {}
      #   cpu: 100m
      #   memory: 128Mi
      requests: {}
      #   cpu: 100m
      #   memory: 128Mi

  ## Parameters to configure K8s service(s) used to externally access Kafka brokers
  ## A new service per broker will be created
  ##
  service:
    ## Service type. Allowed values: LoadBalancer or NodePort
    ##
    type: LoadBalancer
    ## Port used when service type is LoadBalancer
    ##
    port: 9094
    ## Array of load balancer IPs for each Kafka broker. Length must be the same as replicaCount
    ## Example:
    ## loadBalancerIPs:
    ##   - X.X.X.X
    ##   - Y.Y.Y.Y
    ##
    loadBalancerIPs: []
    ## Load Balancer sources
    ## ref: https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
    ## Example:
    ## loadBalancerSourceRanges:
    ## - 10.10.10.0/24
    ##
    loadBalancerSourceRanges: []
    ## Array of node ports used for each Kafka broker. Length must be the same as replicaCount
    ## Example:
    ## nodePorts:
    ##   - 30001
    ##   - 30002
    ##
    nodePorts: []
    ## When service type is NodePort, you can specify the domain used for Kafka advertised listeners.
    ## If not specified, the container will try to get the kubernetes node external IP
    ##
    # domain: mydomain.com
    ## Provide any additional annotations which may be required. Evaluated as a template
    ##
    annotations: {}

## Persistence parameters
##
persistence:
  enabled: true
  ## A manually managed Persistent Volume and Claim
  ## If defined, PVC must be created manually before volume will be bound
  ## The value is evaluated as a template
  ##
  # existingClaim:
  ## PV Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ## set, choosing the default provisioner.
  ##
  # storageClass: "-"
  ## PV Access Mode
  ##
  accessModes:
    - ReadWriteOnce
  ## PVC size
  ##
  size: 30Gi
  ## PVC annotations
  ##
  annotations: {}

## Init Container parameters
## Change the owner and group of the persistent volume(s) mountpoint(s) to 'runAsUser:fsGroup',
## using the values from the securityContext section of each component
##
volumePermissions:
  enabled: false
  ## Bitnami Minideb image
  ## ref: https://hub.docker.com/r/bitnami/minideb/tags/
  ##
  image:
    registry: docker.io
    repository: bitnami/minideb
    tag: buster
    ## Specify an imagePullPolicy
    ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
    ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
    ##
    pullPolicy: Always
    ## Optionally specify an array of imagePullSecrets (secrets must be manually created in the namespace)
    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
    ## Example:
    ## pullSecrets:
    ##   - myRegistryKeySecretName
    ##
    pullSecrets: []
  ## Init Container resource requests and limits
  ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
  ##
  resources:
    # We usually recommend not to specify default resources and to leave this as a conscious
    # choice for the user. This also increases chances charts run on environments with little
    # resources, such as Minikube. If you do want to specify resources, uncomment the following
    # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
    limits: {}
    #   cpu: 100m
    #   memory: 128Mi
    requests: {}
    #   cpu: 100m
    #   memory: 128Mi

## Kafka pods ServiceAccount
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
##
serviceAccount:
  ## Specifies whether a ServiceAccount should be created
  ##
  create: true
  ## The name of the ServiceAccount to use.
  ## If not set and create is true, a name is generated using the kafka.fullname template
  ##
  # name:

## Role Based Access
## ref: https://kubernetes.io/docs/admin/authorization/rbac/
##
rbac:
  ## Specifies whether RBAC rules should be created
  ## binding Kafka ServiceAccount to a role
  ## that allows Kafka pods querying the K8s API
  ##
  create: true

## Prometheus Exporters / Metrics
##
metrics:
  ## Prometheus Kafka Exporter: exposes complementary metrics to the JMX Exporter
  ##
  kafka:
    enabled: true

    ## Bitnami Kafka exporter image
    ## ref: https://hub.docker.com/r/bitnami/kafka-exporter/tags/
    ##
    image:
      registry: docker.io
      repository: bitnami/kafka-exporter
      tag: 1.2.0-debian-10-r220
      ## Specify an imagePullPolicy
      ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
      ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
      ##
      pullPolicy: IfNotPresent
      ## Optionally specify an array of imagePullSecrets (secrets must be manually created in the namespace)
      ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
      ## Example:
      ## pullSecrets:
      ##   - myRegistryKeySecretName
      ##
      pullSecrets: []

    ## Extra flags to be passed to Kafka exporter
    ## Example:
    extraFlags:
      tls.insecure-skip-tls-verify: ""

    ## Name of the existing secret containing the optional certificate and key files
    ## for Kafka Exporter client authentication
    ##
    #certificatesSecret:

    ## Prometheus Kafka Exporter's resource requests and limits
    ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
    ##
    resources:
      # We usually recommend not to specify default resources and to leave this as a conscious
      # choice for the user. This also increases chances charts run on environments with little
      # resources, such as Minikube. If you do want to specify resources, uncomment the following
      # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
      limits:
        cpu: 1000m
        memory: 1Gi
      requests:
        cpu: 500m
        memory: 1Gi

    ## Service configuration
    ##
    service:
      ## Kafka Exporter Service type
      ##
      type: ClusterIP
      ## Kafka Exporter Prometheus port
      ##
      port: 9308
      ## Specify the nodePort value for the LoadBalancer and NodePort service types.
      ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
      ##
      nodePort: ""
      ## Set the LoadBalancer service type to internal only.
      ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
      ##
      # loadBalancerIP:
      ## Load Balancer sources
      ## ref: https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
      ## Example:
      ## loadBalancerSourceRanges:
      ## - 10.10.10.0/24
      ##
      loadBalancerSourceRanges: []
      ## Set the Cluster IP to use
      ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#choosing-your-own-ip-address
      ##
      # clusterIP: None
      ## Annotations for the Kafka Exporter Prometheus metrics service
      ##
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "{{ .Values.metrics.kafka.service.port }}"
        prometheus.io/path: "/metrics"

  ## Prometheus JMX Exporter: exposes the majority of Kafka's metrics
  ##
  jmx:
    enabled: true

    ## Bitnami JMX exporter image
    ## ref: https://hub.docker.com/r/bitnami/jmx-exporter/tags/
    ##
    image:
      registry: docker.io
      repository: bitnami/jmx-exporter
      tag: 0.13.0-debian-10-r73
      ## Specify an imagePullPolicy
      ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
      ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
      ##
      pullPolicy: IfNotPresent
      ## Optionally specify an array of imagePullSecrets (secrets must be manually created in the namespace)
      ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
      ## Example:
      ## pullSecrets:
      ##   - myRegistryKeySecretName
      ##
      pullSecrets: []

    ## Prometheus JMX Exporter's resource requests and limits
    ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
    ##
    resources:
      # We usually recommend not to specify default resources and to leave this as a conscious
      # choice for the user. This also increases chances charts run on environments with little
      # resources, such as Minikube. If you do want to specify resources, uncomment the following
      # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
      limits: {}
      #   cpu: 100m
      #   memory: 128Mi
      requests: {}
      #   cpu: 100m
      #   memory: 128Mi

    ## Service configuration
    ##
    service:
      ## JMX Exporter Service type
      ##
      type: ClusterIP
      ## JMX Exporter Prometheus port
      ##
      port: 5556
      ## Specify the nodePort value for the LoadBalancer and NodePort service types.
      ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport
      ##
      nodePort: ""
      ## Set the LoadBalancer service type to internal only.
      ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#internal-load-balancer
      ##
      # loadBalancerIP:
      ## Load Balancer sources
      ## ref: https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
      ## Example:
      ## loadBalancerSourceRanges:
      ## - 10.10.10.0/24
      ##
      loadBalancerSourceRanges: []
      ## Set the Cluster IP to use
      ## ref: https://kubernetes.io/docs/concepts/services-networking/service/#choosing-your-own-ip-address
      ##
      # clusterIP: None
      ## Annotations for the JMX Exporter Prometheus metrics service
      ##
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "{{ .Values.metrics.jmx.service.port }}"
        prometheus.io/path: "/"

    ## JMX whitelist objects; can be set to control which JMX metrics are exposed. Only whitelisted
    ## values will be exposed via the JMX Exporter, and they must also be exposed via rules. To expose
    ## all metrics (warning: this is excessively verbose and they are not formatted in a Prometheus
    ## style), (1) set `whitelistObjectNames: []` and (2) comment out the `overrideConfig` above.
    ##
    whitelistObjectNames:
      - kafka.controller:*
      - kafka.server:*
      - java.lang:*
      - kafka.network:*
      - kafka.log:*

    ## Prometheus JMX exporter configuration
    ## Specify content for jmx-kafka-prometheus.yml. Evaluated as a template
    ##
    ## Credits to the incubator/kafka chart for the JMX configuration.
    ## https://github.com/helm/charts/tree/master/incubator/kafka
    ##
    config: |-
      jmxUrl: service:jmx:rmi:///jndi/rmi://127.0.0.1:5555/jmxrmi
      lowercaseOutputName: true
      lowercaseOutputLabelNames: true
      ssl: false
      {{- if .Values.metrics.jmx.whitelistObjectNames }}
      whitelistObjectNames: ["{{ join "\",\"" .Values.metrics.jmx.whitelistObjectNames }}"]
      {{- end }}

    ## ConfigMap with Prometheus JMX exporter configuration
    ## NOTE: This will override metrics.jmx.config
    ##
    # existingConfigmap:

  ## Prometheus Operator ServiceMonitor configuration
  ##
  serviceMonitor:
    enabled: true
    ## Namespace in which Prometheus is running
    ##
    namespace: xxxxxx

    ## Interval at which metrics should be scraped.
    ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#endpoint
    ##
    interval: 10s

    ## Timeout after which the scrape is ended
    ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#endpoint
    ##
    # scrapeTimeout: 10s

    ## ServiceMonitor selector labels
    ## ref: https://github.com/bitnami/charts/tree/master/bitnami/prometheus-operator#prometheus-configuration
    ##
    selector:
      release: prometheus-operator

##
## Zookeeper chart configuration
##
## https://github.com/bitnami/charts/blob/master/bitnami/zookeeper/values.yaml
##
zookeeper:
  enabled: true
  auth:
    ## Enable Zookeeper auth
    ##
    enabled: false
    ## User that Zookeeper clients will use to authenticate
    ##
    # clientUser:
    ## Password that Zookeeper clients will use to authenticate
    ##
    # clientPassword:
    ## Comma, semicolon or whitespace separated list of users to be created. Specify them as a string, for example: "user1,user2,admin"
    ##
    # serverUsers:
    ## Comma, semicolon or whitespace separated list of passwords to assign to users when created. Specify them as a string, for example: "pass4user1, pass4user2, pass4admin"
    ##
    # serverPasswords:

## This value is only used when zookeeper.enabled is set to false
##
externalZookeeper:
  ## Server or list of external zookeeper servers to use.
  ##
  servers: []

alvneiayu commented 3 years ago

hi @avanichy25

Thanks for sharing the values.yaml. Could you please share the commands you used to create the "kafka-jks" secret? Do you have a truststore and one keystore per Kafka broker in that secret?

Moreover, please disable the metrics exporters to rule out possible errors.

Thanks a lot

Álvaro

avanichy25 commented 3 years ago

Yes, I have a truststore and one keystore per Kafka broker, and it was working as expected before injecting the proxy sidecar.

openssl req -new -x509 -keyout ca-key -out ca-cert -days 365 \
    -subj "/C=xx/ST=CA/L=xxx/O=xxx/OU=xxx"
keytool -keystore $SERVER_TRUSTSTORE_JKS -alias CARoot -import -file ca-cert
keytool -keystore $CLIENT_TRUSTSTORE_JKS -alias CARoot -import -file ca-cert
keytool -keystore $SERVER_KEYSTORE_JKS -alias localhost -validity 365 -genkey \
    -dname "CN=xxxx, OU=xxx, O=xxx, L=xxxx, ST=xxxx, C=xxxx"
keytool -keystore $SERVER_KEYSTORE_JKS -alias localhost -certreq -file cert-file
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days 365 -CAcreateserial -passin pass:$PASSWORD
keytool -keystore $SERVER_KEYSTORE_JKS -alias CARoot -import -file ca-cert -storepass $PASSWORD -noprompt
keytool -keystore $SERVER_KEYSTORE_JKS -alias localhost -import -file cert-signed -storepass $PASSWORD -noprompt
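
For reference, the chart's comments above show how files like these get packaged into the secret named by `jksSecret` — for a three-broker cluster with the keystores renamed per broker, that would be something along these lines (the file names follow the chart's convention and are illustrative, not this user's exact setup):

kubectl create secret generic kafka-jks \
  --from-file=./kafka.truststore.jks \
  --from-file=./kafka-0.keystore.jks \
  --from-file=./kafka-1.keystore.jks \
  --from-file=./kafka-2.keystore.jks
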
cheskayang commented 3 years ago

Running into the same issue with chart release 12.5.0 and Istio version 1.8.2, using the following values to override the chart defaults, if it helps. Meanwhile, are there any suggestions for a workaround? Thanks a lot!

fullnameOverride: a-cluster-name
replicaCount: 5
updateStrategy: RollingUpdate
nodeSelector:
  pooltype: infra
heapOpts: -Xms600m -Xmx600m
resources:
  limits:
    cpu: 1000m
    memory: 1000Mi
  requests:
    cpu: 500m
    memory: 512Mi
autoCreateTopicsEnable: true
deleteTopicEnable: true
transactionStateLogMinIsr: 3
numPartitions: 5
defaultReplicationFactor: 5
offsetsTopicReplicationFactor: 5
transactionStateLogReplicationFactor: 5
logRetentionHours: 12
maxMessageBytes: "314572800"
socketRequestMaxBytes: "419430400"
persistence:
  enabled: true
  storageClass: "ssd-dynamic"
  size: "10Gi"
pdb:
  create: true
metrics:
  serviceMonitor:
    enabled: true
    scrapeTimeout: 60s
    interval: 60s
    namespace: logmet
    selector:
        release: mon
  jmx:
    enabled: true
livenessProbe:
  failureThreshold: 6
extraEnvVars:
  - name: KAFKA_CFG_REPLICA_FETCH_MAX_BYTES
    value: "419430400"
  - name: KAFKA_JMX_OPTS
    value: "-Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Djava.rmi.server.hostname=127.0.0.1"
zookeeper:
  heapSize: 512
  replicaCount: 5
  nodeSelector:
    pooltype: infra
  persistence:
    enabled: true
    storageClass: "ssd-dynamic"
    size: "4Gi"
  resources:
    limits:
      cpu: 200m
      memory: 512Mi
    requests:
      cpu: 100m
      memory: 256Mi

alvneiayu commented 3 years ago

hi @avanichy25 and @cheskayang

I am trying to reproduce your problem. I will come back with some information soon.

Álvaro

cheskayang commented 3 years ago

Update: adding the following to the values file resolved the timeout issue for us:

zookeeper:
  listenOnAllIPs: true
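
A sketch of applying this override to an existing release (the release name `kafka` and the values file name are assumptions, not taken from this thread):

helm upgrade --install kafka bitnami/kafka \
  -f values.yaml \
  --set zookeeper.listenOnAllIPs=true
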
alvneiayu commented 3 years ago

hi @avanichy25

Could you test with the values suggested by @cheskayang, please?

Thanks a lot @cheskayang for sharing your solution.

Álvaro

avanichy25 commented 3 years ago

Hi @alvneiayu and @cheskayang, thanks for the update. I tried the same approach, setting listenOnAllIPs: true in ZooKeeper, but I am still getting the same connection error as before from the client to the ZooKeeper server.

alvneiayu commented 3 years ago

hi @avanichy25

I am investigating on my side.

Thanks

Álvaro

avanichy25 commented 3 years ago

Hi @alvneiayu

I actually tried to bring up Kafka without the TLS certificate we were using as a secret, and I am able to bring up Kafka with the istio-proxy. This is kind of strange, since with the TLS certificate the connection timeout issue persists.

Thanks !

alvneiayu commented 3 years ago

hi @avanichy25

Just to verify: if you disable Istio, does your Kafka (with your configuration) come up without problems using TLS certificates?

Thanks for your time. I look forward to your reply.

Álvaro

avanichy25 commented 3 years ago

Hi @alvneiayu

Yes, Kafka with the TLS certificate and Istio disabled works fine for me. When I enable the istio-proxy, only plaintext works.

Thanks !

alvneiayu commented 3 years ago

hi @avanichy25

Sorry about this, but we are not officially supporting Istio in our assets. I would suggest running Kafka in plaintext and letting Istio provide TLS, since Istio already includes an mTLS layer between sidecars. A sketch of that setup follows.
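
For example, something along these lines — a sketch only; the `kafka` namespace and the file split are illustrative, not from this thread:

# values.yaml (chart side): keep the listeners in plaintext; the Envoy sidecars encrypt traffic
auth:
  clientProtocol: plaintext
  interBrokerProtocol: plaintext

# peer-authentication.yaml (Istio side): enforce mutual TLS for all workloads in the namespace
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: kafka   # illustrative namespace
spec:
  mtls:
    mode: STRICT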

Thanks and sorry again.

Álvaro

github-actions[bot] commented 3 years ago

This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.

github-actions[bot] commented 3 years ago

Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.