lamw opened this issue 4 years ago
@lamw I didn't test this with Helm 3. Feel free to test it and open a PR explaining installation with Helm 3. Thanks!
I'm afraid I didn't get too far ... the pods never actually end up running. I don't see any troubleshooting steps on what to look for (I've tried both stateful and stateless).
k get po
NAME READY STATUS RESTARTS AGE
my-confluent-oss-canary 0/1 Error 0 98m
my-confluent-oss-cp-control-center-576f48d6f6-khzhl 0/1 CrashLoopBackOff 21 89m
my-confluent-oss-cp-kafka-0 0/2 Pending 0 89m
my-confluent-oss-cp-kafka-connect-6cf6f995d7-z4vrk 1/2 CrashLoopBackOff 21 89m
my-confluent-oss-cp-kafka-rest-785dcc9f66-x994f 1/2 CrashLoopBackOff 19 89m
my-confluent-oss-cp-ksql-server-65c87c8767-zthfh 1/2 CrashLoopBackOff 22 89m
my-confluent-oss-cp-schema-registry-7c778f6b76-sspl4 1/2 Error 22 89m
my-confluent-oss-cp-zookeeper-0 0/2 Pending 0 89m
Well, some logs from the Kafka and ZooKeeper pods would be useful. It looks like you may not have enough resources; the Pending pods suggest the scheduler can't place them.
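For reference, roughly how you could collect that information, using the pod names from the `kubectl get po` output above (the `--all-containers` flag avoids having to know the chart's container names):

```shell
# Pending pods: the Events section at the bottom usually says why
# the scheduler can't place them (e.g. Insufficient cpu/memory).
kubectl describe pod my-confluent-oss-cp-kafka-0
kubectl describe pod my-confluent-oss-cp-zookeeper-0

# Logs from the crashing pods; -p shows the previous (crashed) container run.
kubectl logs my-confluent-oss-cp-control-center-576f48d6f6-khzhl --all-containers
kubectl logs -p my-confluent-oss-cp-kafka-connect-6cf6f995d7-z4vrk --all-containers

# Check how much CPU/memory the nodes still have free.
kubectl describe nodes | grep -A 5 "Allocated resources"
```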
Hi. I am able to deploy using Helm v3, but one pod is in the Error state. When I describe the pod, I don't get any information about the error. What is the use of this pod?
kubectl get pod
NAME                                                  READY   STATUS    RESTARTS   AGE
ksql-demo                                             3/3     Running   3          16m
my-confluent-oss-canary                               0/1     Error     0          31m
my-confluent-oss-cp-control-center-54b9dbb596-hd4ln   1/1     Running   4          38m
my-confluent-oss-cp-kafka-0                           2/2     Running   0          38m
my-confluent-oss-cp-kafka-1                           2/2     Running   0          36m
my-confluent-oss-cp-kafka-2                           2/2     Running   0          35m
my-confluent-oss-cp-kafka-connect-5d7dcdf579-hc9tc    2/2     Running   4          38m
my-confluent-oss-cp-kafka-rest-5d48c7b5d-ff9fx        2/2     Running   1          38m
my-confluent-oss-cp-ksql-server-7495dfbb95-dw7br      2/2     Running   3          38m
my-confluent-oss-cp-schema-registry-6fb9977bb-9cj8h   2/2     Running   3          38m
my-confluent-oss-cp-zookeeper-0                       2/2     Running   0          38m
my-confluent-oss-cp-zookeeper-1                       2/2     Running   0          37m
my-confluent-oss-cp-zookeeper-2                       2/2     Running   0          36m
helm version
version.BuildInfo{Version:"v3.0.2", GitCommit:"19e47ee3283ae98139d98460de796c1be1e3975f", GitTreeState:"clean", GoVersion:"go1.13.5"}
kubectl describe pod my-confluent-oss-canary
Name:           my-confluent-oss-canary
Namespace:      default
Priority:       0
Node:           ip-192-168-40-100.ap-south-1.compute.internal/192.168.40.100
Start Time:     Tue, 14 Apr 2020 15:38:38 +0530
Labels:         <none>
Annotations:    helm.sh/hook: test-success
                helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
                kubernetes.io/psp: eks.privileged
Status:         Failed
IP:             192.168.48.133
IPs:            <none>
Containers:
  my-confluent-oss-canary:
    Container ID:  docker://af19e60239fd83464fd17895214e794f72db7791acb1af22ae90ca980791a125
    Image:         confluentinc/cp-enterprise-kafka:5.4.1
    Image ID:      docker-pullable://confluentinc/cp-enterprise-kafka@sha256:5a57bdc93f6a7e0c4e92fb50e254ce47572042a4b9707e149b63085235088498
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -c
      # Delete the topic if it exists
      kafka-topics --zookeeper my-confluent-oss-cp-zookeeper-headless:2181 --topic my-confluent-oss-cp-kafka-canary-topic --delete --if-exists
      # Create the topic
      kafka-topics --zookeeper my-confluent-oss-cp-zookeeper-headless:2181 --topic my-confluent-oss-cp-kafka-canary-topic --create --partitions 1 --replication-factor 1 --if-not-exists && \
      # Create a message
      MESSAGE="`date -u`" && \
      # Produce a test message to the topic
      echo "$MESSAGE" | kafka-console-producer --broker-list my-confluent-oss-cp-kafka:9092 --topic my-confluent-oss-cp-kafka-canary-topic && \
      # Consume a test message from the topic
      kafka-console-consumer --bootstrap-server my-confluent-oss-cp-kafka-headless:9092 --topic my-confluent-oss-cp-kafka-canary-topic --from-beginning --timeout-ms 2000 | grep "$MESSAGE"
State: Terminated
Reason: Error
Exit Code: 1
Started: Tue, 14 Apr 2020 15:38:39 +0530
Finished: Tue, 14 Apr 2020 15:38:48 +0530
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-4lf88 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-4lf88:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-4lf88
Optional: false
QoS Class: BestEffort
Node-Selectors:
Events:
  Normal  Scheduled  32m  default-scheduler                                       Successfully assigned default/my-confluent-oss-canary to ip-192-168-40-100.ap-south-1.compute.internal
  Normal  Pulled     32m  kubelet, ip-192-168-40-100.ap-south-1.compute.internal  Container image "confluentinc/cp-enterprise-kafka:5.4.1" already present on machine
  Normal  Created    32m  kubelet, ip-192-168-40-100.ap-south-1.compute.internal  Created container my-confluent-oss-canary
  Normal  Started    32m  kubelet, ip-192-168-40-100.ap-south-1.compute.internal  Started container my-confluent-oss-canary
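To answer the "what is the use of this pod" question: the `helm.sh/hook: test-success` annotation above marks the canary as a Helm test hook, not part of the running platform. Its command (shown under Containers above) produces one message to a test topic and consumes it back, so a failure means the brokers weren't reachable or ready. A sketch of how you might re-run it and capture its output, assuming the release is named `my-confluent-oss` (note `--logs` needs Helm 3.1+):

```shell
# Re-run the chart's test hooks and print their logs.
helm test my-confluent-oss --logs

# Or just read the failed canary pod's output directly.
kubectl logs my-confluent-oss-canary
```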
Using `--name` is no longer valid in the latest Helm 3: the syntax is now the release name followed by the chart to install:
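For example, assuming the chart is installed from the `confluentinc` Helm repo, the old and new invocations would look roughly like this:

```shell
# Helm 2 (fails on Helm 3 with "unknown flag: --name"):
helm install --name my-confluent-oss confluentinc/cp-helm-charts

# Helm 3: the release name is a positional argument.
helm install my-confluent-oss confluentinc/cp-helm-charts

# Or let Helm pick a name for you.
helm install confluentinc/cp-helm-charts --generate-name
```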