Closed Mihai-CMM closed 11 months ago
Hi @Mihai-CMM,
Could you share how you are deploying Kafka, and which environment variables you are using?
Hello Mauraza,
I get [2023-10-03 10:06:18,461] INFO [SocketServer listenerType=ZK_BROKER, nodeId=0] Failed authentication with /10.193.112.41 (channelId=192.168.67.93:9094-10.193.112.41:26985-41) (Authentication failed: Invalid username or password) (org.apache.kafka.common.network.Selector)
even though the username and password (test / test) are correct on the client side.
These are the values I used:
auth:
  clientProtocol: plaintext
  externalClientProtocol: sasl
  interBrokerProtocol: plaintext
  sasl:
    mechanisms: plain,scram-sha-256,scram-sha-512
    interBrokerMechanism: plain
    jaas:
      clientUsers:
        - test
      clientPasswords:
        - test
      interBrokerUser: admin
      interBrokerPassword: ""
      zookeeperUser: ""
      zookeeperPassword: ""
      existingSecret: ""
  tls:
    type: jks
    pemChainIncluded: false
    existingSecrets: []
    autoGenerated: true
    password: ""
    existingSecret: ""
    jksTruststoreSecret: ""
    jksKeystoreSAN: ""
    jksTruststore: ""
    endpointIdentificationAlgorithm: https
  zookeeper:
    tls:
      enabled: false
      type: jks
      verifyHostname: true
      existingSecret: ""
      existingSecretKeystoreKey: zookeeper.keystore.jks
      existingSecretTruststoreKey: zookeeper.truststore.jks
      passwordsSecret: ""
      passwordsSecretKeystoreKey: keystore-password
      passwordsSecretTruststoreKey: truststore-password
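For reference, a SASL/SCRAM listener expects the client to present its credentials via a properties file along these lines (a sketch; the host, port, and file name are assumptions, not part of the chart):

```properties
# client.properties (hypothetical) for testing the external SASL listener
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="test" \
  password="test";
```

It can then be passed to a console client, e.g. kafka-console-producer.sh --bootstrap-server <host>:9094 --producer.config client.properties --topic <topic>.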
And I get Caused by: java.lang.SecurityException: java.io.IOException: /opt/bitnami/kafka/config/kafka_jaas.conf (No such file or directory)
with these values:
auth:
  clientProtocol: sasl
  externalClientProtocol: sasl
  interBrokerProtocol: sasl
  sasl:
    mechanisms: plain,scram-sha-256,scram-sha-512
    interBrokerMechanism: plain
    jaas:
      clientUsers:
        - test
      clientPasswords:
        - test
      interBrokerUser: admin
      interBrokerPassword: ""
      zookeeperUser: ""
      zookeeperPassword: ""
      existingSecret: ""
  tls:
    type: jks
    pemChainIncluded: false
    existingSecrets: []
    autoGenerated: true
    password: ""
    existingSecret: ""
    jksTruststoreSecret: ""
    jksKeystoreSAN: ""
    jksTruststore: ""
    endpointIdentificationAlgorithm: https
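The kafka_jaas.conf error means Kafka's tools were pointed at a JAAS file that was never rendered. As a rough sketch of what such a file contains (section names follow Kafka's JAAS conventions; the exact content the chart generates may differ, and note that in the failing values above auth.sasl.jaas.zookeeperUser and zookeeperPassword are empty, which could be why no Client section was generated):

```
// /opt/bitnami/kafka/config/kafka_jaas.conf (sketch, not chart output)
KafkaClient {
   org.apache.kafka.common.security.scram.ScramLoginModule required
   username="test"
   password="test";
};
Client {
   // credentials Kafka uses to authenticate against ZooKeeper
   org.apache.zookeeper.server.auth.DigestLoginModule required
   username="admin"
   password="admin";
};
```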
Thanks
Hi @Mihai-CMM,
I think this issue could be related to this other one. Would you check it?
Hello. As far as I understood:
- name: KAFKA_CFG_SOCKET_REQUEST_MAX_BYTES
  value: "104857600"
- name: KAFKA_CFG_SOCKET_SEND_BUFFER_BYTES
  value: "102400"
- name: KAFKA_CFG_ZOOKEEPER_CONNECTION_TIMEOUT_MS
  value: "6000"
- name: KAFKA_CFG_AUTHORIZER_CLASS_NAME
- name: KAFKA_CFG_ALLOW_EVERYONE_IF_NO_ACL_FOUND
  value: "true"
- name: KAFKA_CFG_SUPER_USERS
  value: User:admin
In my case the KAFKA_CFG_AUTHORIZER_CLASS_NAME env var was empty after the Helm deployment. But even after I add the value
- name: KAFKA_CFG_AUTHORIZER_CLASS_NAME
  value: kafka.security.authorizer.AclAuthorizer
I still have the same issue in the logs: Caused by: java.io.IOException: /opt/bitnami/kafka/config/kafka_jaas.conf (No such file or directory)
Hi @Mihai-CMM,
Did you test with this 🔽 ?
# Configure ACL for super user "user"
- name: KAFKA_CFG_SUPER_USERS
  value: "User:user"
Hello,
After redeploying the Helm chart, the authorizer key still has no value:
- name: KAFKA_CFG_AUTHORIZER_CLASS_NAME
- name: KAFKA_CFG_ALLOW_EVERYONE_IF_NO_ACL_FOUND
  value: "true"
- name: KAFKA_CFG_SUPER_USERS
  value: User:user
with this error
kafka 12:08:22.15 DEBUG ==> Creating user test in zookeeper
Warning: --zookeeper is deprecated and will be removed in a future version of Kafka.
Use --bootstrap-server instead to specify a broker to connect to.
Error while executing config command with args '--zookeeper kafka-zookeeper --alter --add-config SCRAM-SHA-256=[iterations=8192,password=test],SCRAM-SHA-512=[password=test] --entity-type users --entity-name test'
org.apache.kafka.common.KafkaException: Exception while loading Zookeeper JAAS login context [java.security.auth.login.config=/opt/bitnami/kafka/config/kafka_jaas.conf, zookeeper.sasl.client=default:true, zookeeper.sasl.clientconfig=default:Client]
at org.apache.kafka.common.security.JaasUtils.isZkSaslEnabled(JaasUtils.java:68)
at kafka.admin.ConfigCommand$.processCommandWithZk(ConfigCommand.scala:116)
at kafka.admin.ConfigCommand$.main(ConfigCommand.scala:95)
at kafka.admin.ConfigCommand.main(ConfigCommand.scala)
Caused by: java.lang.SecurityException: java.io.IOException: /opt/bitnami/kafka/config/kafka_jaas.conf (No such file or directory)
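The failing provisioning command talks to ZooKeeper directly via the deprecated --zookeeper flag, which is what triggers the JAAS file lookup. A sketch of the modern equivalent, creating the SCRAM credentials through a broker instead (the service name kafka:9092 is an assumption; the command is built as a string here and must be run against a live broker):

```shell
# Sketch only: SCRAM user creation via --bootstrap-server instead of --zookeeper.
CMD="kafka-configs.sh --bootstrap-server kafka:9092 --alter \
--add-config 'SCRAM-SHA-256=[iterations=8192,password=test],SCRAM-SHA-512=[password=test]' \
--entity-type users --entity-name test"
echo "$CMD"
```

Going through a broker avoids the ZooKeeper JAAS login context entirely.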
After populating the key, I still get the same error: Caused by: java.lang.SecurityException: java.io.IOException: /opt/bitnami/kafka/config/kafka_jaas.conf (No such file or directory)
Hi @Mihai-CMM,
Could you share your values.yaml? I tried to reproduce it, but I can't.
Here it is, sorry for the delay:
global:
imageRegistry: ""
imagePullSecrets: []
storageClass: ""
kubeVersion: "1.25.2"
nameOverride: ""
fullnameOverride: ""
clusterDomain: datalake-in-sit
commonLabels: {}
commonAnnotations: {}
extraDeploy:
serviceBindings:
enabled: false
diagnosticMode:
enabled: false
command:
- sleep
args:
- infinity
image:
registry: docker.io
repository: bitnami/kafka
tag: latest
digest: ""
pullPolicy: IfNotPresent
pullSecrets: []
debug: true
config: {}
existingConfigmap: ""
log4j: ""
existingLog4jConfigMap: ""
heapOpts: -Xmx1024m -Xms1024m
deleteTopicEnable: false
autoCreateTopicsEnable: true
logFlushIntervalMessages: _10000
logFlushIntervalMs: 1000
logRetentionBytes: _1073741824
logRetentionCheckIntervalMs: 300000
logRetentionHours: 168
logSegmentBytes: _1073741824
logsDirs: /bitnami/kafka/data
maxMessageBytes: _1000012
defaultReplicationFactor: 1
offsetsTopicReplicationFactor: 1
transactionStateLogReplicationFactor: 1
transactionStateLogMinIsr: 1
numIoThreads: 8
numNetworkThreads: 3
numPartitions: 1
numRecoveryThreadsPerDataDir: 1
socketReceiveBufferBytes: 102400
socketRequestMaxBytes: _104857600
socketSendBufferBytes: 102400
zookeeperConnectionTimeoutMs: 6000
zookeeperChrootPath: ""
authorizerClassName: ""
allowEveryoneIfNoAclFound: true
superUsers: User:user
auth:
clientProtocol: sasl
externalClientProtocol: sasl
interBrokerProtocol: sasl
sasl:
mechanisms: plain,scram-sha-256,scram-sha-512
interBrokerMechanism: scram-sha-256
jaas:
clientUsers:
- test
clientPasswords:
- test
interBrokerUser: admin
interBrokerPassword: admin
zookeeperUser: admin
zookeeperPassword: admin
existingSecret: ""
tls:
type: jks
pemChainIncluded: false
existingSecrets: []
autoGenerated: true
password: ""
existingSecret: ""
jksTruststoreSecret: ""
jksKeystoreSAN: ""
jksTruststore: ""
endpointIdentificationAlgorithm: https
zookeeper:
protocol: sasl
tls:
enabled: false
type: jks
verifyHostname: true
existingSecret: ""
existingSecretKeystoreKey: zookeeper.keystore.jks
existingSecretTruststoreKey: zookeeper.truststore.jks
passwordsSecret: ""
passwordsSecretKeystoreKey: keystore-password
passwordsSecretTruststoreKey: truststore-password
listeners: []
advertisedListeners: []
listenerSecurityProtocolMap: ""
allowPlaintextListener: true
interBrokerListenerName: INTERNAL
command:
- /scripts/setup.sh
args: []
extraEnvVars: []
extraEnvVarsCM: ""
extraEnvVarsSecret: ""
replicaCount: 1
minBrokerId: 0
brokerRackAssignment: ""
containerPorts:
client: 9092
internal: 9093
external: 9094
livenessProbe:
enabled: true
initialDelaySeconds: 10
timeoutSeconds: 5
failureThreshold: 3
periodSeconds: 10
successThreshold: 1
readinessProbe:
enabled: true
initialDelaySeconds: 5
failureThreshold: 6
timeoutSeconds: 5
periodSeconds: 10
successThreshold: 1
startupProbe:
enabled: false
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 1
failureThreshold: 15
successThreshold: 1
customLivenessProbe: {}
customReadinessProbe: {}
customStartupProbe: {}
lifecycleHooks: {}
resources:
limits: {}
requests: {}
podSecurityContext:
enabled: true
fsGroup: 1001
containerSecurityContext:
enabled: true
runAsUser: 1001
runAsNonRoot: true
allowPrivilegeEscalation: false
hostAliases: []
hostNetwork: false
hostIPC: false
podLabels: {}
podAnnotations: {}
podAffinityPreset: ""
podAntiAffinityPreset: soft
nodeAffinityPreset:
type: ""
key: ""
values: []
affinity: {}
nodeSelector: {"node-role.kubernetes.io/workers": "worker"}
tolerations: []
topologySpreadConstraints: []
terminationGracePeriodSeconds: ""
podManagementPolicy: Parallel
priorityClassName: ""
schedulerName: ""
updateStrategy:
type: RollingUpdate
rollingUpdate: {}
extraVolumeMounts: []
sidecars: []
initContainers: []
pdb:
create: false
minAvailable: ""
maxUnavailable: 1
service:
type: ClusterIP
ports:
client: 9092
internal: 9093
external: 9094
nodePorts:
client: ""
external: ""
sessionAffinity: None
sessionAffinityConfig: {}
clusterIP: ""
loadBalancerIP: ""
loadBalancerSourceRanges: []
externalTrafficPolicy: Cluster
annotations: {}
headless:
publishNotReadyAddresses: false
annotations: {}
labels: {}
extraPorts: []
externalAccess:
enabled: true
autoDiscovery:
enabled: true
image:
registry: docker.io
repository: bitnami/kubectl
tag: 1.25.6-debian-11-r14
digest: ""
pullPolicy: IfNotPresent
pullSecrets: []
resources:
limits: {}
requests: {}
service:
type: LoadBalancer
ports:
external: 24224
loadBalancerIPs: []
loadBalancerNames: []
loadBalancerAnnotations: []
loadBalancerSourceRanges: []
nodePorts: []
useHostIPs: false
usePodIPs: false
domain: ""
publishNotReadyAddresses: false
labels: {}
annotations: {}
extraPorts: []
networkPolicy:
enabled: false
allowExternal: true
explicitNamespacesSelector: {}
externalAccess:
from: []
egressRules:
customRules: []
persistence:
enabled: true
existingClaim: ""
storageClass: ""
accessModes:
- ReadWriteOnce
size: 300Gi
annotations: {}
labels: {}
selector: {}
mountPath: /bitnami/kafka
logPersistence:
enabled: true
existingClaim: ""
storageClass: ""
accessModes:
- ReadWriteOnce
size: 20Gi
annotations: {}
selector: {}
mountPath: /opt/bitnami/kafka/logs
volumePermissions:
enabled: false
image:
registry: docker.io
repository: bitnami/bitnami-shell
tag: 11-debian-11-r90
digest: ""
pullPolicy: IfNotPresent
pullSecrets: []
resources:
limits: {}
requests: {}
containerSecurityContext:
runAsUser: 0
serviceAccount:
create: true
name: ""
automountServiceAccountToken: true
annotations: {}
rbac:
create: true
metrics:
kafka:
enabled: true
image:
registry: docker.io
repository: bitnami/kafka-exporter
tag: latest
digest: ""
pullPolicy: IfNotPresent
pullSecrets: []
certificatesSecret: ""
tlsCert: cert-file
tlsKey: key-file
tlsCaSecret: ""
tlsCaCert: ca-file
extraFlags: {}
command: []
args: []
containerPorts:
metrics: 9308
resources:
limits: {}
requests: {}
podSecurityContext:
enabled: true
fsGroup: 1001
containerSecurityContext:
enabled: true
runAsUser: 1001
runAsNonRoot: true
hostAliases: []
podLabels: {}
podAnnotations: {}
podAffinityPreset: ""
podAntiAffinityPreset: soft
nodeAffinityPreset:
type: ""
key: ""
values: []
affinity: {}
nodeSelector: {}
tolerations: []
schedulerName: ""
priorityClassName: ""
topologySpreadConstraints: []
extraVolumes: []
extraVolumeMounts: []
sidecars: []
initContainers: []
service:
ports:
metrics: 9308
clusterIP: ""
sessionAffinity: None
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "{{ .Values.metrics.kafka.service.ports.metrics }}"
prometheus.io/path: "/metrics"
serviceAccount:
create: true
name: ""
automountServiceAccountToken: true
jmx:
enabled: false
image:
registry: docker.io
repository: bitnami/jmx-exporter
tag: latest
digest: ""
pullPolicy: IfNotPresent
pullSecrets: []
containerSecurityContext:
enabled: true
runAsUser: 1001
runAsNonRoot: true
containerPorts:
metrics: 5556
resources:
limits: {}
requests: {}
service:
ports:
metrics: 5556
clusterIP: ""
sessionAffinity: None
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "{{ .Values.metrics.jmx.service.ports.metrics }}"
prometheus.io/path: "/"
whitelistObjectNames:
- kafka.controller:*
- kafka.server:*
- java.lang:*
- kafka.network:*
- kafka.log:*
config: |-
jmxUrl: service:jmx:rmi:///jndi/rmi://127.0.0.1:5555/jmxrmi
lowercaseOutputName: true
lowercaseOutputLabelNames: true
ssl: false
{{- if .Values.metrics.jmx.whitelistObjectNames }}
whitelistObjectNames: ["{{ join "\",\"" .Values.metrics.jmx.whitelistObjectNames }}"]
{{- end }}
existingConfigmap: ""
extraRules: ""
serviceMonitor:
enabled: true
namespace: "monitoring"
interval: ""
scrapeTimeout: ""
labels: {}
selector: {}
relabelings: []
metricRelabelings: []
honorLabels: false
jobLabel: ""
prometheusRule:
enabled: false
namespace: ""
labels: {}
groups: []
provisioning:
enabled: false
numPartitions: 1
replicationFactor: 1
topics: []
nodeSelector: {}
tolerations: []
extraProvisioningCommands: []
parallel: 1
preScript: ""
postScript: ""
auth:
tls:
type: jks
certificatesSecret: ""
cert: tls.crt
key: tls.key
caCert: ca.crt
keystore: keystore.jks
truststore: truststore.jks
passwordsSecret: ""
keyPasswordSecretKey: key-password
keystorePasswordSecretKey: keystore-password
truststorePasswordSecretKey: truststore-password
keyPassword: ""
keystorePassword: ""
truststorePassword: ""
command: []
args: []
extraEnvVars: []
extraEnvVarsCM: ""
extraEnvVarsSecret: ""
podAnnotations: {}
podLabels: {}
serviceAccount:
create: false
name: ""
automountServiceAccountToken: true
resources:
limits: {}
requests: {}
podSecurityContext:
enabled: true
fsGroup: 1001
containerSecurityContext:
enabled: true
runAsUser: 1001
runAsNonRoot: true
schedulerName: ""
extraVolumes: []
extraVolumeMounts: []
sidecars: []
initContainers: []
waitForKafka: true
zookeeper:
enabled: true
replicaCount: 1
auth:
client:
enabled: false
clientUser: ""
clientPassword: ""
serverUsers: ""
serverPasswords: ""
persistence:
enabled: true
storageClass: ""
accessModes:
- ReadWriteOnce
size: 8Gi
externalZookeeper:
servers: []
global:
imageRegistry: ""
imagePullSecrets: []
storageClass: ""
kubeVersion: "1.25.2"
nameOverride: ""
fullnameOverride: ""
clusterDomain: datalake-in-sit
commonLabels: {}
commonAnnotations: {}
extraDeploy:
serviceBindings:
enabled: false
diagnosticMode:
enabled: false
command:
- sleep
args:
- infinity
image:
registry: docker.io
repository: bitnami/kafka
tag: latest
digest: ""
pullPolicy: IfNotPresent
pullSecrets: []
debug: true
config: {}
existingConfigmap: ""
log4j: ""
existingLog4jConfigMap: ""
heapOpts: -Xmx1024m -Xms1024m
deleteTopicEnable: false
autoCreateTopicsEnable: true
logFlushIntervalMessages: _10000
logFlushIntervalMs: 1000
logRetentionBytes: _1073741824
logRetentionCheckIntervalMs: 300000
logRetentionHours: 168
logSegmentBytes: _1073741824
logsDirs: /bitnami/kafka/data
maxMessageBytes: _1000012
defaultReplicationFactor: 1
offsetsTopicReplicationFactor: 1
transactionStateLogReplicationFactor: 1
transactionStateLogMinIsr: 1
numIoThreads: 8
numNetworkThreads: 3
numPartitions: 1
numRecoveryThreadsPerDataDir: 1
socketReceiveBufferBytes: 102400
socketRequestMaxBytes: _104857600
socketSendBufferBytes: 102400
zookeeperConnectionTimeoutMs: 6000
zookeeperChrootPath: ""
authorizerClassName: ""
allowEveryoneIfNoAclFound: true
superUsers: User:user
auth:
clientProtocol: sasl
externalClientProtocol: sasl
interBrokerProtocol: sasl
sasl:
mechanisms: plain,scram-sha-256,scram-sha-512
interBrokerMechanism: scram-sha-256
jaas:
clientUsers:
- test
clientPasswords:
- test
interBrokerUser: admin
interBrokerPassword: admin
zookeeperUser: admin
zookeeperPassword: admin
existingSecret: ""
tls:
type: jks
pemChainIncluded: false
existingSecrets: []
autoGenerated: true
password: ""
existingSecret: ""
jksTruststoreSecret: ""
jksKeystoreSAN: ""
jksTruststore: ""
endpointIdentificationAlgorithm: https
zookeeper:
protocol: sasl
tls:
enabled: false
type: jks
verifyHostname: true
existingSecret: ""
existingSecretKeystoreKey: zookeeper.keystore.jks
existingSecretTruststoreKey: zookeeper.truststore.jks
passwordsSecret: ""
passwordsSecretKeystoreKey: keystore-password
passwordsSecretTruststoreKey: truststore-password
listeners: []
advertisedListeners: []
listenerSecurityProtocolMap: ""
allowPlaintextListener: true
interBrokerListenerName: INTERNAL
command:
- /scripts/setup.sh
args: []
extraEnvVars: []
extraEnvVarsCM: ""
extraEnvVarsSecret: ""
replicaCount: 1
minBrokerId: 0
brokerRackAssignment: ""
containerPorts:
client: 9092
internal: 9093
external: 9094
livenessProbe:
enabled: true
initialDelaySeconds: 10
timeoutSeconds: 5
failureThreshold: 3
periodSeconds: 10
successThreshold: 1
readinessProbe:
enabled: true
initialDelaySeconds: 5
failureThreshold: 6
timeoutSeconds: 5
periodSeconds: 10
successThreshold: 1
startupProbe:
enabled: false
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 1
failureThreshold: 15
successThreshold: 1
customLivenessProbe: {}
customReadinessProbe: {}
customStartupProbe: {}
lifecycleHooks: {}
resources:
limits: {}
requests: {}
podSecurityContext:
enabled: true
fsGroup: 1001
containerSecurityContext:
enabled: true
runAsUser: 1001
runAsNonRoot: true
allowPrivilegeEscalation: false
hostAliases: []
hostNetwork: false
hostIPC: false
podLabels: {}
podAnnotations: {}
podAffinityPreset: ""
podAntiAffinityPreset: soft
nodeAffinityPreset:
type: ""
key: ""
values: []
affinity: {}
nodeSelector: {"node-role.kubernetes.io/workers": "worker"}
tolerations: []
topologySpreadConstraints: []
terminationGracePeriodSeconds: ""
podManagementPolicy: Parallel
priorityClassName: ""
schedulerName: ""
updateStrategy:
type: RollingUpdate
rollingUpdate: {}
extraVolumeMounts: []
sidecars: []
initContainers: []
pdb:
create: false
minAvailable: ""
maxUnavailable: 1
service:
type: ClusterIP
ports:
client: 9092
internal: 9093
external: 9094
nodePorts:
client: ""
external: ""
sessionAffinity: None
sessionAffinityConfig: {}
clusterIP: ""
loadBalancerIP: ""
loadBalancerSourceRanges: []
externalTrafficPolicy: Cluster
annotations: {}
headless:
publishNotReadyAddresses: false
annotations: {}
labels: {}
extraPorts: []
externalAccess:
enabled: true
autoDiscovery:
enabled: true
image:
registry: docker.io
repository: bitnami/kubectl
tag: 1.25.6-debian-11-r14
digest: ""
pullPolicy: IfNotPresent
pullSecrets: []
resources:
limits: {}
requests: {}
service:
type: LoadBalancer
ports:
external: 24224
loadBalancerIPs: []
loadBalancerNames: []
loadBalancerAnnotations: []
loadBalancerSourceRanges: []
nodePorts: []
useHostIPs: false
usePodIPs: false
domain: ""
publishNotReadyAddresses: false
labels: {}
annotations: {}
extraPorts: []
networkPolicy:
enabled: false
allowExternal: true
explicitNamespacesSelector: {}
externalAccess:
from: []
egressRules:
customRules: []
persistence:
enabled: true
existingClaim: ""
storageClass: ""
accessModes:
- ReadWriteOnce
size: 300Gi
annotations: {}
labels: {}
selector: {}
mountPath: /bitnami/kafka
logPersistence:
enabled: true
existingClaim: ""
storageClass: ""
accessModes:
- ReadWriteOnce
size: 20Gi
annotations: {}
selector: {}
mountPath: /opt/bitnami/kafka/logs
volumePermissions:
enabled: false
image:
registry: docker.io
repository: bitnami/bitnami-shell
tag: 11-debian-11-r90
digest: ""
pullPolicy: IfNotPresent
pullSecrets: []
resources:
limits: {}
requests: {}
containerSecurityContext:
runAsUser: 0
serviceAccount:
create: true
name: ""
automountServiceAccountToken: true
annotations: {}
rbac:
create: true
metrics:
kafka:
enabled: true
image:
registry: docker.io
repository: bitnami/kafka-exporter
tag: latest
digest: ""
pullPolicy: IfNotPresent
pullSecrets: []
certificatesSecret: ""
tlsCert: cert-file
tlsKey: key-file
tlsCaSecret: ""
tlsCaCert: ca-file
extraFlags: {}
command: []
args: []
containerPorts:
metrics: 9308
resources:
limits: {}
requests: {}
podSecurityContext:
enabled: true
fsGroup: 1001
containerSecurityContext:
enabled: true
runAsUser: 1001
runAsNonRoot: true
hostAliases: []
podLabels: {}
podAnnotations: {}
podAffinityPreset: ""
podAntiAffinityPreset: soft
nodeAffinityPreset:
type: ""
key: ""
values: []
affinity: {}
nodeSelector: {}
tolerations: []
schedulerName: ""
priorityClassName: ""
topologySpreadConstraints: []
extraVolumes: []
extraVolumeMounts: []
sidecars: []
initContainers: []
service:
ports:
metrics: 9308
clusterIP: ""
sessionAffinity: None
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "{{ .Values.metrics.kafka.service.ports.metrics }}"
prometheus.io/path: "/metrics"
serviceAccount:
create: true
name: ""
automountServiceAccountToken: true
jmx:
enabled: false
image:
registry: docker.io
repository: bitnami/jmx-exporter
tag: latest
digest: ""
pullPolicy: IfNotPresent
pullSecrets: []
containerSecurityContext:
enabled: true
runAsUser: 1001
runAsNonRoot: true
containerPorts:
metrics: 5556
resources:
limits: {}
requests: {}
service:
ports:
metrics: 5556
clusterIP: ""
sessionAffinity: None
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "{{ .Values.metrics.jmx.service.ports.metrics }}"
prometheus.io/path: "/"
whitelistObjectNames:
- kafka.controller:*
- kafka.server:*
- java.lang:*
- kafka.network:*
- kafka.log:*
config: |-
jmxUrl: service:jmx:rmi:///jndi/rmi://127.0.0.1:5555/jmxrmi
lowercaseOutputName: true
lowercaseOutputLabelNames: true
ssl: false
{{- if .Values.metrics.jmx.whitelistObjectNames }}
whitelistObjectNames: ["{{ join "\",\"" .Values.metrics.jmx.whitelistObjectNames }}"]
{{- end }}
existingConfigmap: ""
extraRules: ""
serviceMonitor:
enabled: true
namespace: "monitoring"
interval: ""
scrapeTimeout: ""
labels: {}
selector: {}
relabelings: []
metricRelabelings: []
honorLabels: false
jobLabel: ""
prometheusRule:
enabled: false
namespace: ""
labels: {}
groups: []
provisioning:
enabled: false
numPartitions: 1
replicationFactor: 1
topics: []
nodeSelector: {}
tolerations: []
extraProvisioningCommands: []
parallel: 1
preScript: ""
postScript: ""
auth:
tls:
type: jks
certificatesSecret: ""
cert: tls.crt
key: tls.key
caCert: ca.crt
keystore: keystore.jks
truststore: truststore.jks
passwordsSecret: ""
keyPasswordSecretKey: key-password
keystorePasswordSecretKey: keystore-password
truststorePasswordSecretKey: truststore-password
keyPassword: ""
keystorePassword: ""
truststorePassword: ""
command: []
args: []
extraEnvVars: []
extraEnvVarsCM: ""
extraEnvVarsSecret: ""
podAnnotations: {}
podLabels: {}
serviceAccount:
create: false
name: ""
automountServiceAccountToken: true
resources:
limits: {}
requests: {}
podSecurityContext:
enabled: true
fsGroup: 1001
containerSecurityContext:
enabled: true
runAsUser: 1001
runAsNonRoot: true
schedulerName: ""
extraVolumes: []
extraVolumeMounts: []
sidecars: []
initContainers: []
waitForKafka: true
zookeeper:
enabled: true
replicaCount: 1
auth:
client:
enabled: false
clientUser: ""
clientPassword: ""
serverUsers: ""
serverPasswords: ""
persistence:
enabled: true
storageClass: ""
accessModes:
- ReadWriteOnce
size: 8Gi
externalZookeeper:
servers: []
Hi @Mihai-CMM,
It seems there are two global keys in the values you shared. Could you check it?
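Duplicate top-level keys are easy to introduce by copy-paste; a quick sanity check can count them (a sketch: the throwaway file below stands in for the real values.yaml):

```shell
# Write a throwaway values file to illustrate; point the grep at your real one.
cat > /tmp/values-check.yaml <<'EOF'
global:
  imageRegistry: ""
zookeeper:
  enabled: true
EOF
# Count top-level "global:" keys; a healthy file has exactly one.
grep -c '^global:' /tmp/values-check.yaml
```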
Sorry, that was a bad copy-paste when I wanted to beautify the code. It's just one:
grep -v "#" /var/k8s/kafka/kafka.yaml | grep global
global:
imageRegistry: ""
imagePullSecrets: []
storageClass: ""
kubeVersion: "1.25.2"
nameOverride: ""
fullnameOverride: ""
clusterDomain: datalake-in-sit
commonLabels: {}
commonAnnotations: {}
extraDeploy:
serviceBindings:
enabled: false
diagnosticMode:
enabled: false
command:
- sleep
args:
- infinity
image:
registry: docker.io
repository: bitnami/kafka
tag: latest
digest: ""
pullPolicy: IfNotPresent
pullSecrets: []
debug: true
config: {}
existingConfigmap: ""
log4j: ""
existingLog4jConfigMap: ""
heapOpts: -Xmx1024m -Xms1024m
deleteTopicEnable: false
autoCreateTopicsEnable: true
logFlushIntervalMessages: _10000
logFlushIntervalMs: 1000
logRetentionBytes: _1073741824
logRetentionCheckIntervalMs: 300000
logRetentionHours: 168
logSegmentBytes: _1073741824
logsDirs: /bitnami/kafka/data
maxMessageBytes: _1000012
defaultReplicationFactor: 1
offsetsTopicReplicationFactor: 1
transactionStateLogReplicationFactor: 1
transactionStateLogMinIsr: 1
numIoThreads: 8
numNetworkThreads: 3
numPartitions: 1
numRecoveryThreadsPerDataDir: 1
socketReceiveBufferBytes: 102400
socketRequestMaxBytes: _104857600
socketSendBufferBytes: 102400
zookeeperConnectionTimeoutMs: 6000
zookeeperChrootPath: ""
authorizerClassName: ""
allowEveryoneIfNoAclFound: true
superUsers: User:user
auth:
clientProtocol: sasl
externalClientProtocol: sasl
interBrokerProtocol: sasl
sasl:
mechanisms: plain,scram-sha-256,scram-sha-512
interBrokerMechanism: scram-sha-256
jaas:
clientUsers:
- test
clientPasswords:
- test
interBrokerUser: admin
interBrokerPassword: admin
zookeeperUser: admin
zookeeperPassword: admin
existingSecret: ""
tls:
type: jks
pemChainIncluded: false
existingSecrets: []
autoGenerated: true
password: ""
existingSecret: ""
jksTruststoreSecret: ""
jksKeystoreSAN: ""
jksTruststore: ""
endpointIdentificationAlgorithm: https
zookeeper:
protocol: sasl
tls:
enabled: false
type: jks
verifyHostname: true
existingSecret: ""
existingSecretKeystoreKey: zookeeper.keystore.jks
existingSecretTruststoreKey: zookeeper.truststore.jks
passwordsSecret: ""
passwordsSecretKeystoreKey: keystore-password
passwordsSecretTruststoreKey: truststore-password
listeners: []
advertisedListeners: []
listenerSecurityProtocolMap: ""
allowPlaintextListener: true
interBrokerListenerName: INTERNAL
command:
- /scripts/setup.sh
args: []
extraEnvVars: []
extraEnvVarsCM: ""
extraEnvVarsSecret: ""
replicaCount: 1
minBrokerId: 0
brokerRackAssignment: ""
containerPorts:
client: 9092
internal: 9093
external: 9094
livenessProbe:
enabled: true
initialDelaySeconds: 10
timeoutSeconds: 5
failureThreshold: 3
periodSeconds: 10
successThreshold: 1
readinessProbe:
enabled: true
initialDelaySeconds: 5
failureThreshold: 6
timeoutSeconds: 5
periodSeconds: 10
successThreshold: 1
startupProbe:
enabled: false
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 1
failureThreshold: 15
successThreshold: 1
customLivenessProbe: {}
customReadinessProbe: {}
customStartupProbe: {}
lifecycleHooks: {}
resources:
limits: {}
requests: {}
podSecurityContext:
enabled: true
fsGroup: 1001
containerSecurityContext:
enabled: true
runAsUser: 1001
runAsNonRoot: true
allowPrivilegeEscalation: false
hostAliases: []
hostNetwork: false
hostIPC: false
podLabels: {}
podAnnotations: {}
podAffinityPreset: ""
podAntiAffinityPreset: soft
nodeAffinityPreset:
type: ""
key: ""
values: []
affinity: {}
nodeSelector: {"node-role.kubernetes.io/workers": "worker"}
tolerations: []
topologySpreadConstraints: []
terminationGracePeriodSeconds: ""
podManagementPolicy: Parallel
priorityClassName: ""
schedulerName: ""
updateStrategy:
type: RollingUpdate
rollingUpdate: {}
extraVolumeMounts: []
sidecars: []
initContainers: []
pdb:
create: false
minAvailable: ""
maxUnavailable: 1
service:
type: ClusterIP
ports:
client: 9092
internal: 9093
external: 9094
nodePorts:
client: ""
external: ""
sessionAffinity: None
sessionAffinityConfig: {}
clusterIP: ""
loadBalancerIP: ""
loadBalancerSourceRanges: []
externalTrafficPolicy: Cluster
annotations: {}
headless:
publishNotReadyAddresses: false
annotations: {}
labels: {}
extraPorts: []
externalAccess:
enabled: true
autoDiscovery:
enabled: true
image:
registry: docker.io
repository: bitnami/kubectl
tag: 1.25.6-debian-11-r14
digest: ""
pullPolicy: IfNotPresent
pullSecrets: []
resources:
limits: {}
requests: {}
service:
type: LoadBalancer
ports:
external: 24224
loadBalancerIPs: []
loadBalancerNames: []
loadBalancerAnnotations: []
loadBalancerSourceRanges: []
nodePorts: []
useHostIPs: false
usePodIPs: false
domain: ""
publishNotReadyAddresses: false
labels: {}
annotations: {}
extraPorts: []
networkPolicy:
enabled: false
allowExternal: true
explicitNamespacesSelector: {}
externalAccess:
from: []
egressRules:
customRules: []
persistence:
enabled: true
existingClaim: ""
storageClass: ""
accessModes:
- ReadWriteOnce
size: 300Gi
annotations: {}
labels: {}
selector: {}
mountPath: /bitnami/kafka
logPersistence:
enabled: true
existingClaim: ""
storageClass: ""
accessModes:
- ReadWriteOnce
size: 20Gi
annotations: {}
selector: {}
mountPath: /opt/bitnami/kafka/logs
volumePermissions:
enabled: false
image:
registry: docker.io
repository: bitnami/bitnami-shell
tag: 11-debian-11-r90
digest: ""
pullPolicy: IfNotPresent
pullSecrets: []
resources:
limits: {}
requests: {}
containerSecurityContext:
runAsUser: 0
serviceAccount:
create: true
name: ""
automountServiceAccountToken: true
annotations: {}
rbac:
create: true
metrics:
kafka:
enabled: true
image:
registry: docker.io
repository: bitnami/kafka-exporter
tag: latest
digest: ""
pullPolicy: IfNotPresent
pullSecrets: []
certificatesSecret: ""
tlsCert: cert-file
tlsKey: key-file
tlsCaSecret: ""
tlsCaCert: ca-file
extraFlags: {}
command: []
args: []
containerPorts:
metrics: 9308
resources:
limits: {}
requests: {}
podSecurityContext:
enabled: true
fsGroup: 1001
containerSecurityContext:
enabled: true
runAsUser: 1001
runAsNonRoot: true
hostAliases: []
podLabels: {}
podAnnotations: {}
podAffinityPreset: ""
podAntiAffinityPreset: soft
nodeAffinityPreset:
type: ""
key: ""
values: []
affinity: {}
nodeSelector: {}
tolerations: []
schedulerName: ""
priorityClassName: ""
topologySpreadConstraints: []
extraVolumes: []
extraVolumeMounts: []
sidecars: []
initContainers: []
service:
ports:
metrics: 9308
clusterIP: ""
sessionAffinity: None
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "{{ .Values.metrics.kafka.service.ports.metrics }}"
prometheus.io/path: "/metrics"
serviceAccount:
create: true
name: ""
automountServiceAccountToken: true
jmx:
enabled: false
image:
registry: docker.io
repository: bitnami/jmx-exporter
tag: latest
digest: ""
pullPolicy: IfNotPresent
pullSecrets: []
containerSecurityContext:
enabled: true
runAsUser: 1001
runAsNonRoot: true
containerPorts:
metrics: 5556
resources:
limits: {}
requests: {}
service:
ports:
metrics: 5556
clusterIP: ""
sessionAffinity: None
annotations:
prometheus.io/scrape: "true"
prometheus.io/port: "{{ .Values.metrics.jmx.service.ports.metrics }}"
prometheus.io/path: "/"
whitelistObjectNames:
- kafka.controller:*
- kafka.server:*
- java.lang:*
- kafka.network:*
- kafka.log:*
config: |-
jmxUrl: service:jmx:rmi:///jndi/rmi://127.0.0.1:5555/jmxrmi
lowercaseOutputName: true
lowercaseOutputLabelNames: true
ssl: false
{{- if .Values.metrics.jmx.whitelistObjectNames }}
whitelistObjectNames: ["{{ join "\",\"" .Values.metrics.jmx.whitelistObjectNames }}"]
{{- end }}
existingConfigmap: ""
extraRules: ""
serviceMonitor:
enabled: true
namespace: "monitoring"
interval: ""
scrapeTimeout: ""
labels: {}
selector: {}
relabelings: []
metricRelabelings: []
honorLabels: false
jobLabel: ""
prometheusRule:
enabled: false
namespace: ""
labels: {}
groups: []
provisioning:
enabled: false
numPartitions: 1
replicationFactor: 1
topics: []
nodeSelector: {}
tolerations: []
extraProvisioningCommands: []
parallel: 1
preScript: ""
postScript: ""
auth:
tls:
type: jks
certificatesSecret: ""
cert: tls.crt
key: tls.key
caCert: ca.crt
keystore: keystore.jks
truststore: truststore.jks
passwordsSecret: ""
keyPasswordSecretKey: key-password
keystorePasswordSecretKey: keystore-password
truststorePasswordSecretKey: truststore-password
keyPassword: ""
keystorePassword: ""
truststorePassword: ""
command: []
args: []
extraEnvVars: []
extraEnvVarsCM: ""
extraEnvVarsSecret: ""
podAnnotations: {}
podLabels: {}
serviceAccount:
create: false
name: ""
automountServiceAccountToken: true
resources:
limits: {}
requests: {}
podSecurityContext:
enabled: true
fsGroup: 1001
containerSecurityContext:
enabled: true
runAsUser: 1001
runAsNonRoot: true
schedulerName: ""
extraVolumes: []
extraVolumeMounts: []
sidecars: []
initContainers: []
waitForKafka: true
zookeeper:
enabled: true
replicaCount: 1
auth:
client:
enabled: false
clientUser: ""
clientPassword: ""
serverUsers: ""
serverPasswords: ""
persistence:
enabled: true
storageClass: ""
accessModes:
- ReadWriteOnce
size: 8Gi
externalZookeeper:
servers: []
Hi @Mihai-CMM,
I tried to install it with these values, but it shows some errors. Could you try with the latest version of the Kafka chart? Could you tell us which version you are using?
Thank you very much for your effort
I am using the image with the latest tag:
kubectl -n load-kafka describe pod kafka-0 | grep -i Image | grep kafka
Image: docker.io/bitnami/kafka:latest
Image ID: docker.io/bitnami/kafka@sha256:2a7a99f58cda458bc07b0c6aaac7ce86861155ea41593d6527038bb35fa5b612
Normal Pulled 16m (x4695 over 16d) kubelet Container image "docker.io/bitnami/kafka:latest" already present on machine
I can test with a specific tag if you know it works well on your side.
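Running the latest tag makes results hard to reproduce, since the image can change underneath the same chart version. A values fragment pinning an explicit tag would look like this (the tag shown is a placeholder assumption, not a recommendation):

```yaml
image:
  registry: docker.io
  repository: bitnami/kafka
  # Pin an explicit tag instead of "latest" so the broker image and the
  # chart templates stay in sync across redeployments.
  tag: 3.4.0-debian-11-r0   # placeholder; pick the tag matching your chart version
```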
Yep, on this commit:
$ git log values.yaml
commit 5349a7409eebd30538fed5a06c3b0a9440f06bbd
Author: Bitnami Bot <bitnami-bot@vmware.com>
Date: Wed Mar 1 11:38:01 2023 +0100
[bitnami/kafka] Release 21.1.1 (#15237)
* [bitnami/kafka] Release 21.1.1 updating components versions
Signed-off-by: Bitnami Containers <bitnami-bot@vmware.com>
* Update README.md with readme-generator-for-helm
Signed-off-by: Bitnami Containers <bitnami-bot@vmware.com>
---------
Signed-off-by: Bitnami Containers <bitnami-bot@vmware.com>
Hi @Mihai-CMM
Which version of the chart are you using? The latest one is 26.2.0.
version: 21.2.0
Hi @Mihai-CMM,
Could you try with the latest version of the chart and check if the issue still appears?
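To make such an upgrade test reproducible, the chart version can be pinned explicitly. If the chart is consumed as a dependency, a Chart.yaml fragment along these lines pins it (a sketch; the repository URL is the standard Bitnami one):

```yaml
# Chart.yaml fragment (sketch)
dependencies:
  - name: kafka
    version: 26.2.0
    repository: https://charts.bitnami.com/bitnami
```

For a plain install, the equivalent is helm upgrade --install kafka bitnami/kafka --version 26.2.0 -f values.yaml.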
Thank you very much. Let's close this one.
Name and Version
docker.io/bitnami/kafka:latest
What architecture are you using?
amd64
What steps will reproduce the bug?
What is the expected behavior?
Pods should work.
Additional information
The latest Docker image was pushed a day ago.