factorhouse / kpow-helm-charts

Run Kpow for Apache Kafka in Kubernetes
Apache License 2.0

Serve kpow under subpath #8

Closed miguel-cardoso-mindera closed 1 year ago

miguel-cardoso-mindera commented 1 year ago

Hello, I'm deploying kpow on my kubernetes cluster like this:

image:
  repository: factorhouse/kpow-ce
  tag: 91.4.1
env:
  LICENSE_ID: "XXXXXXXXXXXXXXXXXXXXXXX"
  LICENSE_CODE: "KPOW_COMMUNITY"
  LICENSEE: "XXXX"
  LICENSE_EXPIRY: "2024-05-31"
  LICENSE_SIGNATURE: "XXXXXXXXXXXXXXXXXXXXXXX"
  BOOTSTRAP: "http://cluster-kafka-bootstrap.kafka:9092"
  SECURITY_PROTOCOL: "PLAINTEXT"
  SASL_MECHANISM: "PLAIN"
  LICENSE_CREDITS: "1"

  SCHEMA_REGISTRY_URL: "http://schema-registry.kafka:8081"
  SCHEMA_REGISTRY_NAME: "schema-registry"

And if I port-forward the kpow pod, I can access the UI just fine.

However, I want to expose this behind an API gateway, so that people can access Kpow at https://my-api-gateway.com/kpow, but I do not see how to make this possible.

Is this a feature that currently exists?

wavejumper commented 1 year ago

Hi @miguel-cardoso-mindera - we have documentation about how to configure a k8s ingress for your exact scenario.

https://docs.kpow.io/installation/deployment-notes/#k8s-ingress-configuration

This might be useful, but I'll have to investigate if our helm charts will need any code changes to support the above configuration.

miguel-cardoso-mindera commented 1 year ago

Hey @wavejumper, I'm on GKE, so an Ingress would create a LoadBalancer, which is not desirable. Instead I tried an nginx config like this:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-conf
  namespace: kafka
data:
  nginx.conf: |
    user nginx;
    worker_processes  3;
    error_log  /var/log/nginx/error.log;
    events {
      worker_connections  10240;
    }
    http {
      server {
        listen       80;
        server_name  localhost;

        location /kpow/  {
            proxy_pass http://kpow.kafka:3000/;
            rewrite /kpow/(.*) /$1 break;
            proxy_redirect http://kpow.kafka:3000/ /kpow/;

            proxy_http_version 1.1;

            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_cache_bypass $http_upgrade;
            proxy_headers_hash_max_size 512;
            proxy_headers_hash_bucket_size 128;
          }
      }
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: kafka
spec:
  replicas: 1
  selector:
    matchLabels:
       app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
        volumeMounts:
        - mountPath: /etc/nginx # mount the nginx-conf volume at /etc/nginx
          readOnly: true
          name: nginx-conf
        - mountPath: /var/log/nginx
          name: log
      volumes:
      - name: nginx-conf
        configMap:
          name: nginx-conf # place ConfigMap `nginx-conf` on /etc/nginx
          items:
            - key: nginx.conf
              path: nginx.conf
      - name: log
        emptyDir: {}
---
kind: Service
apiVersion: v1
metadata:
  name: kpow-outside
  namespace: kafka
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 30027
    nodePort: 30027
    targetPort: 80

However, when connecting, I see the page title gets loaded correctly but the page content is empty:

[screenshot: Kpow UI with empty page content]
d-t-w commented 1 year ago

Hi @miguel-cardoso-mindera, it may be that your websocket upgrade is not succeeding. To debug, can you check the websocket request in your browser's developer tools and compare the handshake headers and messages against these?

[screenshots: websocket handshake headers and messages]

These interactions are taken from https://demo.kpow.io and show a successful websocket handshake by Kpow.

When running behind a reverse proxy, the cause of UI issues like this is often the websocket upgrade failing to pass through the proxy. You additionally have a subpath rewrite applied, which complicates matters a bit more.

As a starter, I'd suggest running Kpow behind the reverse proxy without a subpath rewrite to see if that succeeds; that also narrows down the problem somewhat.

d-t-w commented 1 year ago

Also, just to add: these configurations fall outside the scope of our Helm charts. The charts themselves just run Kpow in a pod with the UI on port 3000 - wiring up ingress differs per user, and we don't offer an out-of-the-box chart config for that.

miguel-cardoso-mindera commented 1 year ago

I believe you are on to something, here are the requested screenshots:

Headers:

[screenshot: websocket request/response headers]

Messages:

[screenshot: websocket messages]

I understand and appreciate the help, I'm happy to move the discussion elsewhere if that helps

d-t-w commented 1 year ago

Thanks @miguel-cardoso-mindera.

Are there any WARN/ERROR logs in the Kpow application logs? It's possible we're logging a message send failure in Kpow itself, it would be useful to see them if so.

One thing that I notice, in your response headers you are missing this header:

Sec-Websocket-Extensions: permessage-deflate

That header is in the example I provided from our demo environment, it's pretty important because it tells the browser to decompress each websocket message that it receives.

I think it's possible nginx (or something else between you and Kpow) is stripping that from the response headers. That could cause the lack of messages, as the browser won't know to decompress them.

Similar to this: https://stackoverflow.com/questions/29106033/nginx-not-passing-headers-to-websocket-endpoint

I would expect them to remain in the websocket response; it could be an nginx version thing, I guess. You might try adding this to your configuration:

        proxy_set_header Sec-WebSocket-Protocol $http_sec_websocket_protocol;
        proxy_set_header Sec-WebSocket-Extensions $http_sec_websocket_extensions;
        proxy_set_header Sec-WebSocket-Key $http_sec_websocket_key;
        proxy_set_header Sec-WebSocket-Version $http_sec_websocket_version;

I suspect you might find some related logs either in Kpow or in nginx itself.
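
As a quick way to sanity-check the handshake captured in the browser, a small helper along these lines (illustrative only, not part of Kpow) can diff the response headers against the ones a healthy upgrade usually carries:

```python
# Hypothetical helper: given the response headers of a websocket upgrade,
# report which commonly-expected ones are missing. The header set reflects
# the demo example above, not an official Kpow requirement.
EXPECTED = {
    "upgrade",                   # should be "websocket"
    "connection",                # should include "Upgrade"
    "sec-websocket-accept",      # proves the server completed the handshake
    "sec-websocket-extensions",  # e.g. permessage-deflate
}

def missing_ws_headers(headers):
    """Return the expected handshake headers absent from `headers`."""
    present = {name.lower() for name in headers}
    return sorted(EXPECTED - present)

# Example: a response stripped of Sec-WebSocket-Extensions by a proxy
response_headers = {
    "Upgrade": "websocket",
    "Connection": "Upgrade",
    "Sec-WebSocket-Accept": "abc123=",
}
print(missing_ws_headers(response_headers))  # ['sec-websocket-extensions']
```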

miguel-cardoso-mindera commented 1 year ago

hey @d-t-w

I don't think Kpow has any relevant logs, but I'll paste an excerpt here just in case:

│ 16:00:30.108 INFO  [OperatrScheduler_Worker-12] operatr.compute.v3.materialization – 594a0086-0801-45f8-b165-5fe31269ea0f materializing [376] [:schema "confluent1-1bbd90dc73ac201b559b"]                                  │
│ 16:00:30.110 INFO  [OperatrScheduler_Worker-12] operatr.compute.v3.materialization – 594a0086-0801-45f8-b165-5fe31269ea0f materializing [375] [:schema "confluent1-1bbd90dc73ac201b559b"] :kafka/subject __kpow_global     │
│ 16:00:30.110 INFO  [OperatrScheduler_Worker-12] operatr.compute.v3.materialization – 594a0086-0801-45f8-b165-5fe31269ea0f materializing [1] [:schema "confluent1-1bbd90dc73ac201b559b"] :kafka/schema __kpow_global        │
│ 16:00:30.110 INFO  [OperatrScheduler_Worker-12] operatr.observe.schema – confluent1-1bbd90dc73ac201b559b: [376] :observe/schema telemetry snapshots captured in 105 / 5 ms                                                 │
│ 16:00:32.373 INFO  [pool-3-thread-5] operatr.server.query-handlers – New WS connection for client-id 9327dd85-fff2-4c71-b9ff-62a03cae8be3                                                                                  │
│ 16:00:38.086 INFO  [pool-3-thread-7] operatr.server.query-handlers – New WS connection for client-id 9327dd85-fff2-4c71-b9ff-62a03cae8be3                                                                                  │
│ 16:00:43.729 INFO  [pool-3-thread-1] operatr.server.query-handlers – New WS connection for client-id 9327dd85-fff2-4c71-b9ff-62a03cae8be3                                                                                  │
│ 16:00:49.449 INFO  [pool-3-thread-3] operatr.server.query-handlers – New WS connection for client-id 9327dd85-fff2-4c71-b9ff-62a03cae8be3                                                                                  │
│ 16:00:55.164 INFO  [pool-3-thread-5] operatr.server.query-handlers – New WS connection for client-id 9327dd85-fff2-4c71-b9ff-62a03cae8be3                                                                                  │
│ 16:01:00.852 INFO  [pool-3-thread-7] operatr.server.query-handlers – New WS connection for client-id 9327dd85-fff2-4c71-b9ff-62a03cae8be3                                                                                  │
│ 16:01:05.442 INFO  [OperatrScheduler_Worker-14] operatr.compute.v3.materialization – 67c96ffa-23b8-4a12-95ff-f00733349b8c materialize [2867] {:id "JHK80z1ERWOpMx6yk0PVrA", :domain :cluster, :observation :observe/kafka- │
│ 16:01:05.608 INFO  [OperatrScheduler_Worker-14] operatr.compute.v3.materialization – 67c96ffa-23b8-4a12-95ff-f00733349b8c produce [634] :cluster JHK80z1ERWOpMx6yk0PVrA simple metrics                                     │
│ 16:01:05.654 INFO  [OperatrScheduler_Worker-14] operatr.compute.v3.materialization – 67c96ffa-23b8-4a12-95ff-f00733349b8c produce [1355] :cluster JHK80z1ERWOpMx6yk0PVrA simple metrics                                    │
│ 16:01:05.654 INFO  [OperatrScheduler_Worker-14] operatr.compute.v3.materialization – 67c96ffa-23b8-4a12-95ff-f00733349b8c materializing [2867] [:cluster "JHK80z1ERWOpMx6yk0PVrA"]                                         │
│ 16:01:05.710 INFO  [OperatrScheduler_Worker-14] operatr.compute.v3.materialization – 67c96ffa-23b8-4a12-95ff-f00733349b8c materializing [2500] [:cluster "JHK80z1ERWOpMx6yk0PVrA"] :kafka/topic-summary __kpow_global      │
│ 16:01:06.021 INFO  [OperatrScheduler_Worker-14] operatr.compute.v3.materialization – 67c96ffa-23b8-4a12-95ff-f00733349b8c materializing [676] [:cluster "JHK80z1ERWOpMx6yk0PVrA"] :kafka/group-summary __kpow_global       │
│ 16:01:06.135 INFO  [OperatrScheduler_Worker-14] operatr.compute.v3.materialization – 67c96ffa-23b8-4a12-95ff-f00733349b8c materializing [676] [:cluster "JHK80z1ERWOpMx6yk0PVrA"] :kafka/simple-consumer-summary __kpow_gl │
│ 16:01:06.135 INFO  [OperatrScheduler_Worker-14] operatr.compute.v3.materialization – 67c96ffa-23b8-4a12-95ff-f00733349b8c materializing [1] [:cluster "JHK80z1ERWOpMx6yk0PVrA"] :kafka/cluster __kpow_global               │
│ 16:01:06.157 INFO  [OperatrScheduler_Worker-14] operatr.compute.v3.materialization – 67c96ffa-23b8-4a12-95ff-f00733349b8c produce [646] [:cluster "JHK80z1ERWOpMx6yk0PVrA" :kafka/topic-summary] ["__kpow_global"] materia │
│ 16:01:06.158 INFO  [OperatrScheduler_Worker-14] operatr.compute.v3.materialization – 67c96ffa-23b8-4a12-95ff-f00733349b8c produce [2] [:cluster "JHK80z1ERWOpMx6yk0PVrA" :kafka/simple-consumer-summary] ["__kpow_global"] │
│ 16:01:06.177 INFO  [OperatrScheduler_Worker-14] operatr.compute.v3.materialization – 67c96ffa-23b8-4a12-95ff-f00733349b8c produce [672] [:cluster "JHK80z1ERWOpMx6yk0PVrA" :kafka/group-summary] ["__kpow_global"] materia │
│ 16:01:06.178 INFO  [OperatrScheduler_Worker-14] operatr.observe.kafka – cluster JHK80z1ERWOpMx6yk0PVrA: [2867] :observe/kafka telemetry snapshots captured in 400 / 778 ms                                                 │
│ 16:01:06.726 INFO  [pool-3-thread-1] operatr.server.query-handlers – New WS connection for client-id 9327dd85-fff2-4c71-b9ff-62a03cae8be3                                                                                  │
│ 16:01:12.706 INFO  [pool-3-thread-3] operatr.server.query-handlers – New WS connection for client-id 9327dd85-fff2-4c71-b9ff-62a03cae8be3                                                                                  │
│ 16:01:18.720 INFO  [pool-3-thread-5] operatr.server.query-handlers – New WS connection for client-id 9327dd85-fff2-4c71-b9ff-62a03cae8be3

As for nginx:

│ 10.220.96.12 - - [28/Jun/2023:16:00:43 +0000] "GET /kpow/chsk?client-id=9327dd85-fff2-4c71-b9ff-62a03cae8be3&csrf-token=BPRi5ejnm17XYlvd9VNRgJpzn4M0JgJ1L4HWONOb1rQw%2FpUpoWeA9LySDmR8dr4oj8VIdNTLTQrPkRa0 HTTP/1.1" 101 7 │
│ 2023/06/28 16:00:49 [info] 21#21: *47 recv() failed (104: Connection reset by peer) while proxying upgraded connection, client: 10.220.96.69, server: XXXXXXX.com, request: "GET /kpow/chsk?client-id=93 │
│ 10.220.96.69 - - [28/Jun/2023:16:00:49 +0000] "GET /kpow/chsk?client-id=9327dd85-fff2-4c71-b9ff-62a03cae8be3&csrf-token=BPRi5ejnm17XYlvd9VNRgJpzn4M0JgJ1L4HWONOb1rQw%2FpUpoWeA9LySDmR8dr4oj8VIdNTLTQrPkRa0 HTTP/1.1" 101 7 │
│ 2023/06/28 16:00:55 [info] 21#21: *49 recv() failed (104: Connection reset by peer) while proxying upgraded connection, client: 10.220.96.32, server: XXXXXXX.com, request: "GET /kpow/chsk?client-id=93 │
│ 10.220.96.32 - - [28/Jun/2023:16:00:55 +0000] "GET /kpow/chsk?client-id=9327dd85-fff2-4c71-b9ff-62a03cae8be3&csrf-token=BPRi5ejnm17XYlvd9VNRgJpzn4M0JgJ1L4HWONOb1rQw%2FpUpoWeA9LySDmR8dr4oj8VIdNTLTQrPkRa0 HTTP/1.1" 101 7 │
│ 2023/06/28 16:01:00 [info] 21#21: *51 recv() failed (104: Connection reset by peer) while proxying upgraded connection, client: 10.220.96.49, server: XXXXXXX.com, request: "GET /kpow/chsk?client-id=93 │
│ 10.220.96.49 - - [28/Jun/2023:16:01:00 +0000] "GET /kpow/chsk?client-id=9327dd85-fff2-4c71-b9ff-62a03cae8be3&csrf-token=BPRi5ejnm17XYlvd9VNRgJpzn4M0JgJ1L4HWONOb1rQw%2FpUpoWeA9LySDmR8dr4oj8VIdNTLTQrPkRa0 HTTP/1.1" 101 7 │
│ 2023/06/28 16:01:06 [info] 21#21: *53 recv() failed (104: Connection reset by peer) while proxying upgraded connection, client: 10.220.96.43, server: XXXXXXX.com, request: "GET /kpow/chsk?client-id=93 │
│ 10.220.96.43 - - [28/Jun/2023:16:01:06 +0000] "GET /kpow/chsk?client-id=9327dd85-fff2-4c71-b9ff-62a03cae8be3&csrf-token=BPRi5ejnm17XYlvd9VNRgJpzn4M0JgJ1L4HWONOb1rQw%2FpUpoWeA9LySDmR8dr4oj8VIdNTLTQrPkRa0 HTTP/1.1" 101 7 │
│ 2023/06/28 16:01:12 [info] 21#21: *55 recv() failed (104: Connection reset by peer) while proxying upgraded connection, client: 10.220.96.43, server: XXXXXXX.com, request: "GET /kpow/chsk?client-id=93 │
│ 10.220.96.43 - - [28/Jun/2023:16:01:12 +0000] "GET /kpow/chsk?client-id=9327dd85-fff2-4c71-b9ff-62a03cae8be3&csrf-token=BPRi5ejnm17XYlvd9VNRgJpzn4M0JgJ1L4HWONOb1rQw%2FpUpoWeA9LySDmR8dr4oj8VIdNTLTQrPkRa0 HTTP/1.1" 101 7 │
│ 2023/06/28 16:01:18 [info] 21#21: *57 recv() failed (104: Connection reset by peer) while proxying upgraded connection, client: 10.220.96.49, server: XXXXXXX.com, request: "GET /kpow/chsk?client-id=93 │
│ 10.220.96.49 - - [28/Jun/2023:16:01:18 +0000] "GET /kpow/chsk?client-id=9327dd85-fff2-4c71-b9ff-62a03cae8be3&csrf-token=BPRi5ejnm17XYlvd9VNRgJpzn4M0JgJ1L4HWONOb1rQw%2FpUpoWeA9LySDmR8dr4oj8VIdNTLTQrPkRa0 HTTP/1.1" 101 7 │
│ 2023/06/28 16:01:24 [info] 21#21: *59 recv() failed (104: Connection reset by peer) while proxying upgraded connection, client: 10.220.96.8, server: XXXXXXX.com, request: "GET /kpow/chsk?client-id=932 │
│ 10.220.96.8 - - [28/Jun/2023:16:01:24 +0000] "GET /kpow/chsk?client-id=9327dd85-fff2-4c71-b9ff-62a03cae8be3&csrf-token=BPRi5ejnm17XYlvd9VNRgJpzn4M0JgJ1L4HWONOb1rQw%2FpUpoWeA9LySDmR8dr4oj8VIdNTLTQrPkRa0 HTTP/1.1" 101 71 │
│ 2023/06/28 16:01:30 [info] 21#21: *61 recv() failed (104: Connection reset by peer) while proxying upgraded connection, client: 10.220.96.71, server: XXXXXXX.com, request: "GET /kpow/chsk?client-id=93 │
│ 10.220.96.71 - - [28/Jun/2023:16:01:30 +0000] "GET /kpow/chsk?client-id=9327dd85-fff2-4c71-b9ff-62a03cae8be3&csrf-token=BPRi5ejnm17XYlvd9VNRgJpzn4M0JgJ1L4HWONOb1rQw%2FpUpoWeA9LySDmR8dr4oj8VIdNTLTQrPkRa0 HTTP/1.1" 101 7 │
│ 2023/06/28 16:01:36 [info] 21#21: *63 recv() failed (104: Connection reset by peer) while proxying upgraded connection, client: 10.220.96.46, server: XXXXXXX.com, request: "GET /kpow/chsk?client-id=93 │
│ 10.220.96.46 - - [28/Jun/2023:16:01:36 +0000] "GET /kpow/chsk?client-id=9327dd85-fff2-4c71-b9ff-62a03cae8be3&csrf-token=BPRi5ejnm17XYlvd9VNRgJpzn4M0JgJ1L4HWONOb1rQw%2FpUpoWeA9LySDmR8dr4oj8VIdNTLTQrPkRa0 HTTP/1.1" 101 7 │
│ 2023/06/28 16:01:42 [info] 21#21: *65 recv() failed (104: Connection reset by peer) while proxying upgraded connection, client: 10.220.96.52, server: XXXXXXX.com, request: "GET /kpow/chsk?client-id=93 │
│ 10.220.96.52 - - [28/Jun/2023:16:01:42 +0000] "GET /kpow/chsk?client-id=9327dd85-fff2-4c71-b9ff-62a03cae8be3&csrf-token=BPRi5ejnm17XYlvd9VNRgJpzn4M0JgJ1L4HWONOb1rQw%2FpUpoWeA9LySDmR8dr4oj8VIdNTLTQrPkRa0 HTTP/1.1" 101 7 │
│ 2023/06/28 16:01:48 [info] 21#21: *67 recv() failed (104: Connection reset by peer) while proxying upgraded connection, client: 10.220.96.29, server: XXXXXXX.com, request: "GET /kpow/chsk?client-id=93 │
│ 10.220.96.29 - - [28/Jun/2023:16:01:48 +0000] "GET /kpow/chsk?client-id=9327dd85-fff2-4c71-b9ff-62a03cae8be3&csrf-token=BPRi5ejnm17XYlvd9VNRgJpzn4M0JgJ1L4HWONOb1rQw%2FpUpoWeA9LySDmR8dr4oj8VIdNTLTQrPkRa0 HTTP/1.1" 101 7 │
│ 2023/06/28 16:01:54 [info] 21#21: *69 recv() failed (104: Connection reset by peer) while proxying upgraded connection, client: 10.220.96.29, server: XXXXXXX.com, request: "GET /kpow/chsk?client-id=93 │
│ 10.220.96.29 - - [28/Jun/2023:16:01:54 +0000] "GET /kpow/chsk?client-id=9327dd85-fff2-4c71-b9ff-62a03cae8be3&csrf-token=BPRi5ejnm17XYlvd9VNRgJpzn4M0JgJ1L4HWONOb1rQw%2FpUpoWeA9LySDmR8dr4oj8VIdNTLTQrPkRa0 HTTP/1.1" 101 7 │
│ 2023/06/28 16:02:00 [info] 22#22: *71 recv() failed (104: Connection reset by peer) while proxying upgraded connection, client: 10.220.96.8, server: XXXXXXX.com, request: "GET /kpow/chsk?client-id=932 │
│ 10.220.96.8 - - [28/Jun/2023:16:02:00 +0000] "GET /kpow/chsk?client-id=9327dd85-fff2-4c71-b9ff-62a03cae8be3&csrf-token=BPRi5ejnm17XYlvd9VNRgJpzn4M0JgJ1L4HWONOb1rQw%2FpUpoWeA9LySDmR8dr4oj8VIdNTLTQrPkRa0 HTTP/1.1" 101 71 │
│ 2023/06/28 16:02:06 [info] 22#22: *73 recv() failed (104: Connection reset by peer) while proxying upgraded connection, client: 10.220.96.46, server: XXXXXXX.com, request: "GET /kpow/chsk?client-id=93 │
│ 10.220.96.46 - - [28/Jun/2023:16:02:06 +0000] "GET /kpow/chsk?client-id=9327dd85-fff2-4c71-b9ff-62a03cae8be3&csrf-token=BPRi5ejnm17XYlvd9VNRgJpzn4M0JgJ1L4HWONOb1rQw%2FpUpoWeA9LySDmR8dr4oj8VIdNTLTQrPkRa0 HTTP/1.1" 101 7 │
│ 2023/06/28 16:02:12 [info] 22#22: *75 recv() failed (104: Connection reset by peer) while proxying upgraded connection, client: 10.220.96.12, server: XXXXXXX.com, request: "GET /kpow/chsk?client-id=93 │
│ 10.220.96.12 - - [28/Jun/2023:16:02:12 +0000] "GET /kpow/chsk?client-id=9327dd85-fff2-4c71-b9ff-62a03cae8be3&csrf-token=BPRi5ejnm17XYlvd9VNRgJpzn4M0JgJ1L4HWONOb1rQw%2FpUpoWeA9LySDmR8dr4oj8VIdNTLTQrPkRa0 HTTP/1.1" 101 7 │
│ 2023/06/28 16:02:18 [info] 22#22: *77 recv() failed (104: Connection reset by peer) while proxying upgraded connection, client: 10.220.96.52, server: XXXXXXX.com, request: "GET /kpow/chsk?client-id=93 │
│ 10.220.96.52 - - [28/Jun/2023:16:02:18 +0000] "GET /kpow/chsk?client-id=9327dd85-fff2-4c71-b9ff-62a03cae8be3&csrf-token=BPRi5ejnm17XYlvd9VNRgJpzn4M0JgJ1L4HWONOb1rQw%2FpUpoWeA9LySDmR8dr4oj8VIdNTLTQrPkRa0 HTTP/1.1" 101 7 │
│ 2023/06/28 16:02:24 [info] 22#22: *79 recv() failed (104: Connection reset by peer) while proxying upgraded connection, client: 10.220.96.69, server: XXXXXXX.com, request: "GET /kpow/chsk?client-id=93 │
│ 10.220.96.69 - - [28/Jun/2023:16:02:24 +0000] "GET /kpow/chsk?client-id=9327dd85-fff2-4c71-b9ff

Regarding the websocket headers, I did find some in the request headers:

[screenshot: websocket request headers]

d-t-w commented 1 year ago

Hey @miguel-cardoso-mindera,

We can see in the Kpow logs that a new WS connection is attempted:

16:01:06.726 INFO  [pool-3-thread-1] operatr.server.query-handlers – New WS connection for client-id 9327dd85-fff2-4c71-b9ff-62a03cae8be3                                      

Then from the nginx logs we can see that the connection is dropped by the client (your browser, in this case, I think):

2023/06/28 16:00:49 [info] 21#21: *47 recv() failed (104: Connection reset by peer) while proxying upgraded connection, client: 10.220.96.69, server: XXXXXXX.com, request: "GET /kpow/chsk?client-id=93 │

You are missing headers in the response. Did you try setting the proxy_set_header fields in your nginx config? It's possible some other part of your network infrastructure is stripping/adding headers or disallowing websocket traffic.

d-t-w commented 1 year ago

I'll close this ticket now because:

  1. Kpow is starting and running correctly, with the UI served on the pod on port 3000
  2. When you port-forward to the pod you can access the Kpow UI fine
  3. There is some network / ingress issue causing the connection reset by peer seen in the nginx logs
  4. This becomes a general "How do I allow access to a websocket UI in GCP/GKE" question

At a guess, configuring external access to a Kubernetes pod in GKE that allows websocket connections will require some further networking work beyond simply configuring nginx. I see you are trying to avoid the load balancer for GKE ingress, but note that websocket support is explicitly listed as pretty much the first item in their docs:

https://cloud.google.com/kubernetes-engine/docs/concepts/ingress-xlb

Editing this comment to add some further nginx/GKE websocket examples and docs:

StackOverflow: Enabled Secure Websocket Ingress on GKE suggests adding an annotation to your metadata.

Working example of Nginx Ingress + NodeJS WS has the same annotation under common issues.
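
For the GKE load-balancer route, one commonly needed piece is a `BackendConfig` raising the backend timeout, since the GCLB default of 30 seconds will otherwise cut off long-lived websocket connections. A sketch, with placeholder names and an assumed `app: kpow` selector:

```yaml
# Sketch only: raise the GCLB backend timeout for long-lived websockets.
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: kpow-backendconfig
spec:
  timeoutSec: 3600
---
apiVersion: v1
kind: Service
metadata:
  name: kpow
  annotations:
    # Attach the BackendConfig above to this Service's backends.
    cloud.google.com/backend-config: '{"default": "kpow-backendconfig"}'
spec:
  type: NodePort
  selector:
    app: kpow
  ports:
    - port: 3000
      targetPort: 3000
```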