xiongmaodada closed this issue 5 years ago.
@xiongmaodada I will try to reproduce your error. Can you please also share the operator's logs? Thanks
Thank you for your quick reply.
kubectl logs kafka-operator-operator-0 -c manager -n kafka -f
{"level":"info","ts":1566990081.2534606,"logger":"controller","msg":"Reconciling KafkaCluster","Request.Namespace":"kafka","Request.Name":"kafka"}
{"level":"info","ts":1566990081.4579751,"logger":"controller","msg":"Reconciling KafkaCluster","Request.Namespace":"kafka","Request.Name":"kafka"}
{"level":"info","ts":1566990081.4906683,"logger":"controller","msg":"resource created","Request.Namespace":"kafka","Request.Name":"kafka","component":"kafka","kind":"*v1.Pod"}
{"level":"info","ts":1566990081.5584164,"logger":"controller","msg":"Reconciling KafkaCluster","Request.Namespace":"kafka","Request.Name":"kafka"}
{"level":"info","ts":1566990083.35322,"logger":"controller","msg":"Reconciling KafkaCluster","Request.Namespace":"kafka","Request.Name":"kafka"}
{"level":"info","ts":1566990098.9545534,"logger":"controller","msg":"Reconciling KafkaCluster","Request.Namespace":"kafka","Request.Name":"kafka"}
{"level":"info","ts":1566990099.957768,"logger":"controller","msg":"Reconciling KafkaCluster","Request.Namespace":"kafka","Request.Name":"kafka"}
{"level":"info","ts":1566990100.9538174,"logger":"controller","msg":"Reconciling KafkaCluster","Request.Namespace":"kafka","Request.Name":"kafka"}
{"level":"info","ts":1566990102.053241,"logger":"controller","msg":"Reconciling KafkaCluster","Request.Namespace":"kafka","Request.Name":"kafka"}
{"level":"info","ts":1566990121.5538023,"logger":"controller","msg":"Reconciling KafkaCluster","Request.Namespace":"kafka","Request.Name":"kafka"}
{"level":"info","ts":1566990121.753047,"logger":"controller","msg":"Reconciling KafkaCluster","Request.Namespace":"kafka","Request.Name":"kafka"}
{"level":"info","ts":1566990121.7804203,"logger":"controller","msg":"resource created","Request.Namespace":"kafka","Request.Name":"kafka","component":"kafka","kind":"*v1.Pod"}
{"level":"info","ts":1566990121.9516146,"logger":"controller","msg":"Reconciling KafkaCluster","Request.Namespace":"kafka","Request.Name":"kafka"}
{"level":"info","ts":1566990123.6931918,"logger":"controller","msg":"Reconciling KafkaCluster","Request.Namespace":"kafka","Request.Name":"kafka"}
{"level":"info","ts":1566990137.050859,"logger":"controller","msg":"Reconciling KafkaCluster","Request.Namespace":"kafka","Request.Name":"kafka"}
{"level":"info","ts":1566990138.0411665,"logger":"controller","msg":"Reconciling KafkaCluster","Request.Namespace":"kafka","Request.Name":"kafka"}
{"level":"info","ts":1566990139.1509326,"logger":"controller","msg":"Reconciling KafkaCluster","Request.Namespace":"kafka","Request.Name":"kafka"}
{"level":"info","ts":1566990140.1538386,"logger":"controller","msg":"Reconciling KafkaCluster","Request.Namespace":"kafka","Request.Name":"kafka"}
@xiongmaodada I managed to reproduce your error. The first broker gets OOMKilled.
The example configuration used (kubectl create -n kafka -f config/samples/banzaicloud_v1alpha1_kafkacluster.yaml) configures the first broker's container to use only 300M of memory, which is simply not enough.
I will create a PR that comments the referenced lines out. (They are placed there to show all the available configuration options.)
Please remove the referenced block from the CR.
I would also suggest increasing your Minikube size to at least 4 CPUs and 6GB of RAM.
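A minimal sketch of resizing Minikube accordingly (the flag values below are just the suggested minimums; an existing VM keeps its original sizing, so it has to be recreated):
# Recreate the Minikube VM with at least 4 CPUs and 6GB of RAM
minikube delete
minikube start --cpus 4 --memory 6144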
@baluchicken thank you, I will try it.
Is there a document on testing the Kafka cluster above, such as a send-and-receive-messages part? I don't know how to connect to the Kafka cluster from outside the Minikube k8s cluster.
We have something called Spotguides; you can read more about the concept here. We have a Kafka Spotguide which uses this operator, and it contains documentation tailored to your configuration.
I just copied the relevant part for you:
kubectl create -n kafka -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: kafka-test
spec:
  containers:
  - name: kafka-test
    image: solsson/kafkacat
    # Just spin & wait forever
    command: [ "/bin/bash", "-c", "--" ]
    args: [ "while true; do sleep 3000; done;" ]
    volumeMounts:
    - name: sslcerts
      mountPath: "/ssl/certs"
  volumes:
  - name: sslcerts
    secret:
      secretName: test-kafka-operator
EOF
Then exec into the container and produce and consume some messages:
kubectl exec -it -n kafka kafka-test bash
kafkacat -P -b kafka-headless:29092 -t kafka-test \
-X security.protocol=SSL \
-X ssl.key.location=/ssl/certs/clientKey \
-X ssl.certificate.location=/ssl/certs/clientCert \
-X ssl.ca.location=/ssl/certs/caCert
kafkacat -C -b kafka-headless:29092 -t kafka-test \
-X security.protocol=SSL \
-X ssl.key.location=/ssl/certs/clientKey \
-X ssl.certificate.location=/ssl/certs/clientCert \
-X ssl.ca.location=/ssl/certs/caCert
@baluchicken I got it.
How do I create a topic from outside the Minikube k8s cluster? For example, bin/kafka-topics.sh exists on another machine outside the Minikube k8s cluster; how do I create a topic with that command?
/bin/kafka-topics.sh --create --zookeeper ip:port --replication-factor 1 --partitions 1 --topic my-kafka-topic
What should the ZooKeeper ip:port be?
Producer:
/bin/kafka-console-producer.sh --broker-list nodeip:port --topic my-kafka-topic
What should the broker nodeip:port be?
Unfortunately, ZK is not accessible from outside. It is provisioned by a third-party operator which, as far as I know, does not support this feature yet.
You can use the following command to create topics from inside the cluster.
kubectl -n kafka run kafka-topics -it --image=wurstmeister/kafka:2.12-2.1.0 --rm=true --restart=Never -- /opt/kafka/bin/kafka-topics.sh --zookeeper example-zookeepercluster-client.zookeeper:2181 --topic my-topic --create --partitions 1 --replication-factor 1
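To verify the topic was created, a similar one-off pod can list the existing topics (same image and ZooKeeper address as above; adjust them if yours differ):
kubectl -n kafka run kafka-topics-list -it --image=wurstmeister/kafka:2.12-2.1.0 --rm=true --restart=Never -- /opt/kafka/bin/kafka-topics.sh --zookeeper example-zookeepercluster-client.zookeeper:2181 --list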
@baluchicken Thanks a lot, it works!
I have another question: how do I use /bin/kafka-console-producer.sh or /bin/kafka-console-consumer.sh to send and receive messages?
@xiongmaodada I just created some simple docs about how to produce/consume messages on a freshly deployed Kafka cluster. Regarding the Java producer/consumer: because your cluster uses SSL, I recommend following the official documentation on keystore/truststore creation.
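As a rough sketch (not taken from the linked docs): once the keystore and truststore exist, the console tools read SSL settings from a client properties file. The paths, passwords, and broker address below are placeholders:
# Hypothetical client SSL config; substitute your own paths and passwords
cat > client-ssl.properties <<EOF
security.protocol=SSL
ssl.truststore.location=/path/to/kafka.client.truststore.jks
ssl.truststore.password=<truststore-password>
ssl.keystore.location=/path/to/kafka.client.keystore.jks
ssl.keystore.password=<keystore-password>
ssl.key.password=<key-password>
EOF
# Produce (Kafka 2.1-era flag names)
bin/kafka-console-producer.sh --broker-list kafka-headless:29092 --topic my-topic --producer.config client-ssl.properties
# Consume
bin/kafka-console-consumer.sh --bootstrap-server kafka-headless:29092 --topic my-topic --from-beginning --consumer.config client-ssl.properties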
@baluchicken That's just what I need, thank you very much!
Hi @xiongmaodada
Did you ever find a solution to this issue?
When enabling Prometheus annotations for the Kafka nodes (using the operator), one of my nodes becomes unstable and is terminated after ~30-40 seconds. I don't think this is a resource issue, as I use the default resource requirements/limits, which look okay.
It's only one node that becomes unstable, and it looks like the operator is gracefully terminating the node.
My issue: https://github.com/banzaicloud/koperator/issues/659
Describe the bug
Install steps
step 1: start the k8s cluster with Minikube:
step 2: install ZooKeeper:
step 3: set up the Minikube LoadBalancer:
step 4: install Kafka:
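A hedged reconstruction of these steps, based on the operator's README of that era (repo URL, chart names, and the sample CR below are assumptions, not the reporter's exact commands):
# step 1: start Minikube
minikube start --cpus 4 --memory 6144
# step 2: install the zookeeper-operator and create a ZookeeperCluster
helm repo add banzaicloud-stable https://kubernetes-charts.banzaicloud.com/
helm install --name zookeeper-operator --namespace zookeeper banzaicloud-stable/zookeeper-operator
kubectl create -f - <<EOF
apiVersion: zookeeper.pravega.io/v1beta1
kind: ZookeeperCluster
metadata:
  name: example-zookeepercluster
  namespace: zookeeper
spec:
  replicas: 3
EOF
# step 3: provide LoadBalancer support in Minikube (runs in the foreground)
minikube tunnel
# step 4: install the kafka-operator and the sample KafkaCluster
helm install --name kafka-operator --namespace kafka banzaicloud-stable/kafka-operator
kubectl create -n kafka -f config/samples/banzaicloud_v1alpha1_kafkacluster.yaml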
The bug
One Kafka pod's status is initially Init:0/3 and becomes Running after a while; then the other Kafka pod goes to Init:0/3 after ~35s, and so on, repeatedly.
The first pod kafka8w4h9 is Init:0/3; the first pod kafka8w4h9 is Running after a while; then the first pod kafka8w4h9 disappears and the pod kafkadbct9 is Init:0/3. This process repeats constantly.
The first pod kafka8w4h9 log: