grepplabs / kafka-proxy

Proxy connections to Kafka cluster. Connect through SOCKS Proxy, HTTP Proxy or to cluster running in Kubernetes.
Apache License 2.0

Kafka proxy inside eks cluster exposed to localhost not working #124

Open I-Abdelhamid opened 1 year ago

I-Abdelhamid commented 1 year ago

Hello, I have an EKS cluster that has access to MSK. I deployed the following StatefulSet according to the documentation:

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka-proxy
spec:
  selector:
    matchLabels:
      app: kafka-proxy
  replicas: 1
  serviceName: kafka-proxy
  template:
    metadata:
      labels:
        app: kafka-proxy
    spec:
      containers:
        - name: kafka-proxy
          image: grepplabs/kafka-proxy:latest
          args:
            - 'server'
            - '--log-format=json'
            - '--bootstrap-server-mapping=************,127.0.0.1:32400'
            - '--bootstrap-server-mapping=************,127.0.0.1:32401'
            - '--bootstrap-server-mapping=************,127.0.0.1:32402'
            - '--proxy-request-buffer-size=32768'
            - '--proxy-response-buffer-size=32768'
            - '--proxy-listener-read-buffer-size=32768'
            - '--proxy-listener-write-buffer-size=131072'
            - '--kafka-connection-read-buffer-size=131072'
            - '--kafka-connection-write-buffer-size=32768'
          ports:
          - name: metrics
            containerPort: 9080
          - name: kafka-0
            containerPort: 32400
          - name: kafka-1
            containerPort: 32401
          - name: kafka-2
            containerPort: 32402
          livenessProbe:
            httpGet:
              path: /health
              port: 9080
            initialDelaySeconds: 5
            periodSeconds: 3
          readinessProbe:
            httpGet:
              path: /health
              port: 9080
            initialDelaySeconds: 5
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 2
            failureThreshold: 5
          resources:
            requests:
              memory: 128Mi
              cpu: 1000m
      restartPolicy: Always

Now, when I port-forward to my local machine according to the documentation with kubectl port-forward kafka-proxy-0 32400:32400 32401:32401 32402:32402 and then run kcat -b localhost:32400,localhost:32401,localhost:32402 -t topic1, it times out with the following:

%3|1675126714.187|FAIL|rdkafka#consumer-1| [thrd:0.0.0.0:44357/1]: 0.0.0.0:44357/1: Connect to ipv4#0.0.0.0:44357 failed: Connection refused (after 0ms in state CONNECT)
%3|1675126714.187|FAIL|rdkafka#consumer-1| [thrd:0.0.0.0:35303/2]: 0.0.0.0:35303/2: Connect to ipv4#0.0.0.0:35303 failed: Connection refused (after 0ms in state CONNECT)
% ERROR: Local: Broker transport failure: 0.0.0.0:44357/1: Connect to ipv4#0.0.0.0:44357 failed: Connection refused (after 0ms in state CONNECT)
% ERROR: Local: Broker transport failure: 0.0.0.0:35303/2: Connect to ipv4#0.0.0.0:35303 failed: Connection refused (after 0ms in state CONNECT)
%3|1675126714.341|FAIL|rdkafka#consumer-1| [thrd:0.0.0.0:35303/2]: 0.0.0.0:35303/2: Connect to ipv4#0.0.0.0:35303 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)
% ERROR: Local: Broker transport failure: 0.0.0.0:35303/2: Connect to ipv4#0.0.0.0:35303 failed: Connection refused (after 0ms in state CONNECT, 1 identical error(s) suppressed)
%3|1675126714.403|FAIL|rdkafka#consumer-1| [thrd:0.0.0.0:44357/1]: 0.0.0.0:44357/1: Connect to ipv4#0.0.0.0:44357 failed: Connection refused (after 1ms in state CONNECT, 1 identical error(s) suppressed)
% ERROR: Local: Broker transport failure: 0.0.0.0:44357/1: Connect to ipv4#0.0.0.0:44357 failed: Connection refused (after 1ms in state CONNECT, 1 identical error(s) suppressed)

Notes: no auth is needed from the EKS pod side. I can't expose the proxy through an ELB; I can only use port forwarding for local use.
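
For completeness, these are the exact commands I run from my local machine:

    kubectl port-forward kafka-proxy-0 32400:32400 32401:32401 32402:32402
    kcat -b localhost:32400,localhost:32401,localhost:32402 -t topic1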

chalimbu commented 5 months ago

Hi, I had the same problem, and it's likely because of the 127.0.0.1 listener address. Even though it works locally, in Kubernetes it is not going to bind to all the IPs. Try changing it to 0.0.0.0.
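
For example, keeping your redacted MSK broker addresses exactly as they are, the mapping args would look something like this (untested sketch; only the local listener side of each mapping changes from 127.0.0.1 to 0.0.0.0):

            - '--bootstrap-server-mapping=************,0.0.0.0:32400'
            - '--bootstrap-server-mapping=************,0.0.0.0:32401'
            - '--bootstrap-server-mapping=************,0.0.0.0:32402'

The ports stay the same, so the kubectl port-forward command does not need to change.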