After forking and without making any changes, I followed the "Getting started" instructions ("We suggest you apply -f manifests in the following order: namespace ./rbac-namespace-default ./zookeeper ./kafka"), in the recommended order, on my Rancher-provisioned K8s cluster (created via rke up on DigitalOcean droplets):
kubectl apply -f 00-namespace.yml
namespace/kafka created
kubectl apply -f rbac-namespace-default
clusterrole.rbac.authorization.k8s.io/node-reader created
clusterrolebinding.rbac.authorization.k8s.io/kafka-node-reader created
role.rbac.authorization.k8s.io/pod-labler created
rolebinding.rbac.authorization.k8s.io/kafka-pod-labler created
kubectl apply -f zookeeper
configmap/zookeeper-config created
service/pzoo created
service/zoo created
service/zookeeper created
statefulset.apps/pzoo created
statefulset.apps/zoo created
kubectl apply -f kafka
configmap/broker-config created
service/kafka created
service/bootstrap created
statefulset.apps/kafka created
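For completeness, after the apply steps above, the StatefulSet rollouts can be watched like this (a sketch; the resource names assume the unmodified manifests from the repo, and these commands need a live cluster):

```shell
# Everything from these manifests lives in the "kafka" namespace
# created by 00-namespace.yml, so -n kafka is needed on every query.
kubectl -n kafka rollout status statefulset/pzoo
kubectl -n kafka rollout status statefulset/zoo
kubectl -n kafka rollout status statefulset/kafka

# Watch the pods come up
kubectl -n kafka get pods -w
```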
But nothing appears to have been created:
kubectl get all -o wide
NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE   SELECTOR
service/kubernetes   ClusterIP   10.43.0.1    <none>        443/TCP   4d    <none>
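A note on the empty listing: kubectl get all without a namespace flag only queries the default namespace, while all of the resources above were created in the kafka namespace. The namespaced query would look like this (a sketch; requires a live cluster):

```shell
# List the Kafka and ZooKeeper resources in their own namespace
kubectl get all -n kafka -o wide

# Or list across every namespace at once
kubectl get all --all-namespaces -o wide
```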
kubectl cluster-info
Kubernetes control plane is running at https://46.101.XX.XX:6443
CoreDNS is running at https://46.101.XX.XX:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
What did I forget to do to get Kafka running on top of the K8s cluster? Thanks!