redpanda-data / redpanda

Redpanda is a streaming data platform for developers. Kafka API compatible. 10x faster. No ZooKeeper. No JVM!
https://redpanda.com

Deployment in Kubernetes with subdomain fails #1864

Closed: jitsejan closed this issue 1 year ago

jitsejan commented 3 years ago

Currently I am playing with Redpanda deployed on Kubernetes through Ansible. When setting up the cluster, the ports for the two nodes are assigned automatically and passed through an Ansible variable so my Jupyter notebook can connect. This all works fine according to https://vectorized.io/docs/kubernetes-external-connect/, but when I add a subdomain for the kafkaApi there is a validation error saying that the pandaproxyApi also needs a subdomain definition and that it should be the same. Once I add the subdomain to both (the commented-out lines in the YAML below), the pods do not get created. The moment I comment them out again, the two pods come up and Redpanda works as expected.

I am using Cloudflare for the DNS and am pointing kafka.jitsejan.com to the master node of the two nodes in the Kubernetes cluster.

How can I make the subdomains work? Is this something wrong with the DNS configuration?

---
apiVersion: redpanda.vectorized.io/v1alpha1
kind: Cluster
metadata:
  name: external-connectivity
  namespace: redpanda
spec:
  image: vectorized/redpanda
  version: latest
  replicas: 2
  resources:
    requests:
      cpu: 1
      memory: 2Gi
    limits:
      cpu: 1
      memory: 2Gi
  configuration:
    rpcServer:
      port: 33145
    kafkaApi:
      - port: 9092
      - external:
          enabled: true
          # subdomain: kafka.jitsejan.com
    pandaproxyApi:
      - port: 8082
      - external:
          enabled: true
          # subdomain: kafka.jitsejan.com
    adminApi:
      - port: 9644
    developerMode: true
---

RafalKorepta commented 3 years ago

Could you:

jitsejan commented 3 years ago

Does this help?

❯ kc get all -n redpanda -o wide
NAME                                              READY   STATUS    RESTARTS   AGE   IP            NODE                  NOMINATED NODE   READINESS GATES
pod/redpanda-redpanda-operator-68d584646c-7t2v7   2/2     Running   0          21s   10.42.0.109   node01.jitsejan.com   <none>           <none>

NAME                                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                         AGE   SELECTOR
service/redpanda-operator-metrics-service   ClusterIP   10.43.65.128    <none>        8443/TCP                        21s   app.kubernetes.io/instance=redpanda,app.kubernetes.io/name=redpanda-operator
service/redpanda-operator-webhook-service   ClusterIP   10.43.23.101    <none>        443/TCP                         21s   app.kubernetes.io/instance=redpanda,app.kubernetes.io/name=redpanda-operator
service/external-connectivity               ClusterIP   None            <none>        9644/TCP,9092/TCP,8082/TCP      9s    app.kubernetes.io/component=redpanda,app.kubernetes.io/instance=external-connectivity,app.kubernetes.io/name=redpanda
service/external-connectivity-cluster       ClusterIP   10.43.20.203    <none>        8083/TCP                        9s    app.kubernetes.io/component=redpanda,app.kubernetes.io/instance=external-connectivity,app.kubernetes.io/name=redpanda
service/external-connectivity-external      NodePort    10.43.200.229   <none>        9093:32153/TCP,8083:30112/TCP   9s    <none>

NAME                                         READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS                IMAGES                                                                           SELECTOR
deployment.apps/redpanda-redpanda-operator   1/1     1            1           21s   kube-rbac-proxy,manager   gcr.io/kubebuilder/kube-rbac-proxy:v0.8.0,vectorized/redpanda-operator:v21.7.3   app.kubernetes.io/instance=redpanda,app.kubernetes.io/name=redpanda-operator

NAME                                                    DESIRED   CURRENT   READY   AGE   CONTAINERS                IMAGES                                                                           SELECTOR
replicaset.apps/redpanda-redpanda-operator-68d584646c   1         1         1       21s   kube-rbac-proxy,manager   gcr.io/kubebuilder/kube-rbac-proxy:v0.8.0,vectorized/redpanda-operator:v21.7.3   app.kubernetes.io/instance=redpanda,app.kubernetes.io/name=redpanda-operator,pod-template-hash=68d584646c
❯ kc logs redpanda-redpanda-operator-68d584646c-7t2v7 -n redpanda
error: a container name must be specified for pod redpanda-redpanda-operator-68d584646c-7t2v7, choose one of: [kube-rbac-proxy manager]
❯ kc describe sts -n redpanda
No resources found in redpanda namespace.

RafalKorepta commented 3 years ago

Yes, it helps. I see that the services are created but the StatefulSet is not. If your environment is still up, it would be good to run:

kc logs redpanda-redpanda-operator-68d584646c-7t2v7 -n redpanda -c manager

kubectl needs to know which container's stdout you are interested in.
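
As a side note, if you'd rather not pick a container, kubectl's --all-containers flag should dump both streams at once (not needed here, just an alternative):

kc logs redpanda-redpanda-operator-68d584646c-7t2v7 -n redpanda --all-containers=true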

jitsejan commented 3 years ago

That returned a long list with what looks like one repeated error.

2021-07-21T20:15:42.860Z    ERROR   controller-runtime.manager.controller.cluster   Reconciler error    {"reconciler group": "redpanda.vectorized.io", "reconciler kind": "Cluster", "name": "external-connectivity", "namespace": "redpanda", "error": "unable to create Ingress resource: ingresses.networking.k8s.io is forbidden: User \"system:serviceaccount:redpanda:redpanda-redpanda-operator\" cannot create resource \"ingresses\" in API group \"networking.k8s.io\" in the namespace \"redpanda\""}

I will now try to give system:serviceaccount:redpanda:redpanda-redpanda-operator permission to create an Ingress.

jitsejan commented 3 years ago

After applying the following manifest to the cluster, the pods come up.

---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: redpanda
  name: ingress-writer
rules:
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses
  verbs:
  - create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: create-ingress
  namespace: redpanda
subjects:
- kind: ServiceAccount
  name: redpanda-redpanda-operator
  namespace: redpanda
roleRef:
  kind: Role
  name: ingress-writer
  apiGroup: rbac.authorization.k8s.io
---
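
For reference, one quick way to check that the binding actually grants the permission (assuming the manifest above is saved as ingress-rbac.yaml; the filename is just an example):

kubectl apply -f ingress-rbac.yaml
kubectl auth can-i create ingresses.networking.k8s.io \
  --as=system:serviceaccount:redpanda:redpanda-redpanda-operator -n redpanda
# should print "yes" once the Role and RoleBinding are in place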

The pod now shows this redpanda.yaml:

❯ kc exec external-connectivity-0 -n redpanda -- cat /etc/redpanda/redpanda.yaml
Defaulted container "redpanda" out of: redpanda, redpanda-configurator (init)
config_file: /etc/redpanda/redpanda.yaml
pandaproxy:
  advertised_pandaproxy_api:
  - address: external-connectivity-0.external-connectivity.redpanda.svc.cluster.local.
    name: proxy
    port: 8082
  - address: 0.kafka.jitsejan.com
    name: proxy-external
    port: 30810
  pandaproxy_api:
  - address: 0.0.0.0
    name: proxy
    port: 8082
  - address: 0.0.0.0
    name: proxy-external
    port: 8083
pandaproxy_client:
  brokers:
  - address: external-connectivity-0.external-connectivity.redpanda.svc.cluster.local.
    port: 9092
  - address: external-connectivity-1.external-connectivity.redpanda.svc.cluster.local.
    port: 9092
redpanda:
  admin:
  - address: 0.0.0.0
    name: admin
    port: 9644
  advertised_kafka_api:
  - address: external-connectivity-0.external-connectivity.redpanda.svc.cluster.local.
    name: kafka
    port: 9092
  - address: 0.kafka.jitsejan.com
    name: kafka-external
    port: 30177
  advertised_rpc_api:
    address: external-connectivity-0.external-connectivity.redpanda.svc.cluster.local.
    port: 33145
  auto_create_topics_enabled: false
  data_directory: /var/lib/redpanda/data
  developer_mode: true
  enable_idempotence: true
  enable_transactions: true
  kafka_api:
  - address: 0.0.0.0
    name: kafka
    port: 9092
  - address: 0.0.0.0
    name: kafka-external
    port: 9093
  log_segment_size: 536870912
  node_id: 0
  rpc_server:
    address: 0.0.0.0
    port: 33145
  seed_servers: []
rpk:
  coredump_dir: /var/lib/redpanda/coredump
  enable_memory_locking: false
  enable_usage_stats: false
  overprovisioned: true
  tune_aio_events: false
  tune_clocksource: false
  tune_coredump: false
  tune_cpu: false
  tune_disk_irq: false
  tune_disk_nomerges: false
  tune_disk_scheduler: false
  tune_disk_write_cache: false
  tune_fstrim: false
  tune_network: false
  tune_swappiness: false
  tune_transparent_hugepages: false
schema_registry: {}

And this:

❯ kubectl get clusters external-connectivity -n redpanda -o json
{
    "apiVersion": "redpanda.vectorized.io/v1alpha1",
    "kind": "Cluster",
    "metadata": {
        "creationTimestamp": "2021-07-22T00:41:53Z",
        "generation": 1,
        "name": "external-connectivity",
        "namespace": "redpanda",
        "resourceVersion": "3900865",
        "selfLink": "/apis/redpanda.vectorized.io/v1alpha1/namespaces/redpanda/clusters/external-connectivity",
        "uid": "06fd1e7d-48be-407b-a8ee-2e78db13e051"
    },
    "spec": {
        "cloudStorage": {
            "enabled": false,
            "secretKeyRef": {}
        },
        "configuration": {
            "adminApi": [
                {
                    "external": {},
                    "port": 9644,
                    "tls": {}
                }
            ],
            "developerMode": true,
            "kafkaApi": [
                {
                    "external": {},
                    "port": 9092,
                    "tls": {}
                },
                {
                    "external": {
                        "enabled": true,
                        "subdomain": "kafka.jitsejan.com"
                    },
                    "tls": {}
                }
            ],
            "pandaproxyApi": [
                {
                    "external": {},
                    "port": 8082,
                    "tls": {}
                },
                {
                    "external": {
                        "enabled": true,
                        "subdomain": "kafka.jitsejan.com"
                    },
                    "tls": {}
                }
            ],
            "rpcServer": {
                "port": 33145
            }
        },
        "image": "vectorized/redpanda",
        "replicas": 2,
        "resources": {
            "limits": {
                "cpu": "1",
                "memory": "2Gi"
            },
            "requests": {
                "cpu": "1",
                "memory": "2Gi"
            }
        },
        "storage": {
            "capacity": "0"
        },
        "version": "latest"
    },
    "status": {
        "nodes": {
            "pandaproxyIngress": "kafka.jitsejan.com"
        },
        "replicas": 0,
        "upgrading": false
    }
}

Does that look right to you? Can I use kafka.jitsejan.com as the broker address instead of <ip>:<port>?

jitsejan commented 3 years ago

Not sure if this is relevant:

Deployment

❯ kc get all -n redpanda -o wide
NAME                                              READY   STATUS    RESTARTS   AGE   IP            NODE                  NOMINATED NODE   READINESS GATES
pod/redpanda-redpanda-operator-68d584646c-x9s8j   2/2     Running   0          68m   10.42.0.110   node01.jitsejan.com   <none>           <none>
pod/external-connectivity-1                       1/1     Running   0          67m   10.42.0.112   node01.jitsejan.com   <none>           <none>
pod/external-connectivity-0                       1/1     Running   0          67m   10.42.1.102   node02.jitsejan.com   <none>           <none>

NAME                                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                         AGE   SELECTOR
service/redpanda-operator-metrics-service   ClusterIP   10.43.83.203    <none>        8443/TCP                        68m   app.kubernetes.io/instance=redpanda,app.kubernetes.io/name=redpanda-operator
service/redpanda-operator-webhook-service   ClusterIP   10.43.180.216   <none>        443/TCP                         68m   app.kubernetes.io/instance=redpanda,app.kubernetes.io/name=redpanda-operator
service/external-connectivity               ClusterIP   None            <none>        9644/TCP,9092/TCP,8082/TCP      67m   app.kubernetes.io/component=redpanda,app.kubernetes.io/instance=external-connectivity,app.kubernetes.io/name=redpanda
service/external-connectivity-cluster       ClusterIP   10.43.86.221    <none>        8083/TCP                        67m   app.kubernetes.io/component=redpanda,app.kubernetes.io/instance=external-connectivity,app.kubernetes.io/name=redpanda
service/external-connectivity-external      NodePort    10.43.241.225   <none>        9093:30177/TCP,8083:30810/TCP   67m   <none>

NAME                                         READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS                IMAGES                                                                           SELECTOR
deployment.apps/redpanda-redpanda-operator   1/1     1            1           68m   kube-rbac-proxy,manager   gcr.io/kubebuilder/kube-rbac-proxy:v0.8.0,vectorized/redpanda-operator:v21.7.3   app.kubernetes.io/instance=redpanda,app.kubernetes.io/name=redpanda-operator

NAME                                                    DESIRED   CURRENT   READY   AGE   CONTAINERS                IMAGES                                                                           SELECTOR
replicaset.apps/redpanda-redpanda-operator-68d584646c   1         1         1       68m   kube-rbac-proxy,manager   gcr.io/kubebuilder/kube-rbac-proxy:v0.8.0,vectorized/redpanda-operator:v21.7.3   app.kubernetes.io/instance=redpanda,app.kubernetes.io/name=redpanda-operator,pod-template-hash=68d584646c

NAME                                     READY   AGE   CONTAINERS   IMAGES
statefulset.apps/external-connectivity   2/2     67m   redpanda     vectorized/redpanda:latest

Ingress

❯ kc get ingress -n redpanda
Warning: extensions/v1beta1 Ingress is deprecated in v1.14+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
NAME                    CLASS   HOSTS                ADDRESS          PORTS   AGE
external-connectivity   nginx   kafka.jitsejan.com   89.233.107.140   80      30m

Cluster

❯ kubectl describe cluster external-connectivity -n redpanda
Name:         external-connectivity
Namespace:    redpanda
Labels:       <none>
Annotations:  <none>
API Version:  redpanda.vectorized.io/v1alpha1
Kind:         Cluster
Metadata:
  Creation Timestamp:  2021-07-22T00:41:53Z
  Generation:          1
  Managed Fields:
    API Version:  redpanda.vectorized.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:spec:
        .:
        f:configuration:
          .:
          f:adminApi:
          f:developerMode:
          f:kafkaApi:
          f:pandaproxyApi:
          f:rpcServer:
            .:
            f:port:
        f:image:
        f:replicas:
        f:resources:
          .:
          f:limits:
            .:
            f:cpu:
            f:memory:
          f:requests:
            .:
            f:cpu:
            f:memory:
        f:version:
    Manager:      OpenAPI-Generator
    Operation:    Update
    Time:         2021-07-22T00:41:53Z
    API Version:  redpanda.vectorized.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:status:
        .:
        f:nodes:
          .:
          f:pandaproxyIngress:
        f:replicas:
        f:upgrading:
    Manager:         manager
    Operation:       Update
    Time:            2021-07-22T00:41:53Z
  Resource Version:  3900865
  Self Link:         /apis/redpanda.vectorized.io/v1alpha1/namespaces/redpanda/clusters/external-connectivity
  UID:               06fd1e7d-48be-407b-a8ee-2e78db13e051
Spec:
  Cloud Storage:
    Enabled:  false
    Secret Key Ref:
  Configuration:
    Admin API:
      External:
      Port:  9644
      Tls:
    Developer Mode:  true
    Kafka API:
      External:
      Port:  9092
      Tls:
      External:
        Enabled:    true
        Subdomain:  kafka.jitsejan.com
      Tls:
    Pandaproxy API:
      External:
      Port:  8082
      Tls:
      External:
        Enabled:    true
        Subdomain:  kafka.jitsejan.com
      Tls:
    Rpc Server:
      Port:  33145
  Image:     vectorized/redpanda
  Replicas:  2
  Resources:
    Limits:
      Cpu:     1
      Memory:  2Gi
    Requests:
      Cpu:     1
      Memory:  2Gi
  Storage:
    Capacity:  0
  Version:     latest
Status:
  Nodes:
    Pandaproxy Ingress:  kafka.jitsejan.com
  Replicas:              0
  Upgrading:             false
Events:                  <none>

RafalKorepta commented 3 years ago

The ingress you mentioned, kafka.jitsejan.com, is pointing to

service/external-connectivity-cluster       ClusterIP   10.43.86.221    <none>        8083/TCP

This is the Pandaproxy port:

pandaproxy:
  advertised_pandaproxy_api:
  - address: external-connectivity-0.external-connectivity.redpanda.svc.cluster.local.
    name: proxy
    port: 8082
  - address: 0.kafka.jitsejan.com
    name: proxy-external
    port: 30810
  pandaproxy_api:
  - address: 0.0.0.0
    name: proxy
    port: 8082
  - address: 0.0.0.0
    name: proxy-external
    port: 8083 <---------------------- external port
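
So, if anything, the ingress host should answer Pandaproxy REST calls rather than Kafka protocol traffic. A rough check, assuming the nginx ingress serves plain HTTP on port 80:

curl http://kafka.jitsejan.com/topics
# should return a JSON array of topic names from the Pandaproxy, not a Kafka broker response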

We didn't add the Kafka ports to this service. If you would like to connect to your brokers, the following commands should be enough:

$ export BROKERS=`kubectl get clusters external-connectivity -o=jsonpath='{.status.nodes.external}'  | jq -r 'join(",")'`
$ rpk --brokers $BROKERS cluster info

I'm not sure why your cluster custom resource doesn't report any nodes:

Status:
  Nodes:
    Pandaproxy Ingress:  kafka.jitsejan.com
  Replicas:              0
  Upgrading:             false

Please remember that the DNS record 0.kafka.jitsejan.com must point to the public IP (IPv4 or IPv6) of the node on which that Redpanda pod is scheduled.
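
A quick sanity check from outside the cluster (assuming the broker ordinals 0 and 1, matching the generated config above):

dig +short 0.kafka.jitsejan.com   # should resolve to the public IP of the node running external-connectivity-0
dig +short 1.kafka.jitsejan.com   # should resolve to the public IP of the node running external-connectivity-1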

jitsejan commented 3 years ago

My DNS has records pointing to node01 and node02 for the two Kafka nodes.

(screenshot of the Cloudflare DNS records)

Are you implying that I can only make this work through the Pandaproxy, but not with the Kafka brokers?

RafalKorepta commented 3 years ago

Ingress adds latency, because requests need to go through the cloud load balancer, then the ingress controller (e.g. nginx), and finally kube-proxy (iptables) if the request hits the wrong k8s node.

That said, we try to compromise between usability and extensibility in our Redpanda operator. The preferable path is to have a different means of registering the public IPs of the nodes where the Redpanda pods are scheduled. We started with the ingress for the Pandaproxy only.

I'm not sure if you can share the output of kubectl describe node node01. That would give an overview of the node's public IP and whether a Redpanda pod is scheduled on it.

Regardless of the DNS names, you still need to provide the port in the bootstrap configuration.

Did you try to connect to your Redpanda cluster using rpk?
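
With the subdomain records in place, that would look roughly like the following (ports taken from the redpanda.yaml you pasted; the advertised NodePort can change across redeploys, so treat it as a placeholder):

rpk --brokers 0.kafka.jitsejan.com:30177,1.kafka.jitsejan.com:30177 cluster info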

jitsejan commented 3 years ago

Apologies for the delay in response. This is the description for node01.

❯ kc describe node node01
Name:               node01.jitsejan.com
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/instance-type=k3s
                    beta.kubernetes.io/os=linux
                    k3s.io/external-ip=63.250.53.137
                    k3s.io/hostname=node01.jitsejan.com
                    k3s.io/internal-ip=63.250.53.137
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=node01.jitsejan.com
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/master=true
                    node.kubernetes.io/instance-type=k3s
Annotations:        flannel.alpha.coreos.com/backend-data: {"VtepMAC":"5e:83:10:50:78:41"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 63.250.53.137
                    k3s.io/node-args: ["server","--tls-san","63.250.53.137","--node-external-ip","63.250.53.137"]
                    k3s.io/node-config-hash: RVAL2KUSRMUEHAU773AXFEYXCCTLCU4FOVLLRJJVGZOX2TG73NXA====
                    k3s.io/node-env: {"K3S_DATA_DIR":"/var/lib/rancher/k3s/data/3eb51c677aeaaba13aa66d747ecdfc87d61227a389aa67c261395eb4178d4085"}
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Thu, 08 Jul 2021 22:37:09 +0100
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  node01.jitsejan.com
  AcquireTime:     <unset>
  RenewTime:       Mon, 02 Aug 2021 10:26:20 +0100
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Mon, 19 Jul 2021 00:51:42 +0100   Mon, 19 Jul 2021 00:51:42 +0100   FlannelIsUp                  Flannel is running on this node
  MemoryPressure       False   Mon, 02 Aug 2021 10:22:42 +0100   Thu, 08 Jul 2021 22:37:09 +0100   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Mon, 02 Aug 2021 10:22:42 +0100   Thu, 08 Jul 2021 22:37:09 +0100   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Mon, 02 Aug 2021 10:22:42 +0100   Thu, 08 Jul 2021 22:37:09 +0100   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Mon, 02 Aug 2021 10:22:42 +0100   Mon, 19 Jul 2021 00:51:47 +0100   KubeletReady                 kubelet is posting ready status. AppArmor enabled
Addresses:
  InternalIP:  63.250.53.137
  ExternalIP:  63.250.53.137
  Hostname:    node01.jitsejan.com
Capacity:
  cpu:                8
  ephemeral-storage:  495348248Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             32887568Ki
  pods:               110
Allocatable:
  cpu:                8
  ephemeral-storage:  481874775277
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             32887568Ki
  pods:               110
System Info:
  Machine ID:                 76bb53ac573c0ee863b78268dad3ac82
  System UUID:                3bcfdb64-706b-4ef7-89b5-e35939fa1d0b
  Boot ID:                    b10a5aba-3e62-4c17-8a5e-a92ce85ad7ca
  Kernel Version:             5.4.0-70-generic
  OS Image:                   Ubuntu 20.04.2 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  containerd://1.4.4-k3s1
  Kubelet Version:            v1.19.12+k3s1
  Kube-Proxy Version:         v1.19.12+k3s1
PodCIDR:                      10.42.0.0/24
PodCIDRs:                     10.42.0.0/24
ProviderID:                   k3s://node01.jitsejan.com
Non-terminated Pods:          (21 in total)
  Namespace                   Name                                                               CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                                                               ------------  ----------  ---------------  -------------  ---
  kube-system                 metrics-server-7b4f8b595-qbqdh                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         24d
  kube-system                 local-path-provisioner-7ff9579c6-gsqkv                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         24d
  kube-system                 svclb-traefik-sptmh                                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         24d
  default                     echo-2-deployment-86db7ddf88-qhpm4                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         24d
  default                     echo-1-deployment-65875f658-49vz4                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         24d
  consul                      consul-connect-injector-webhook-deployment-5468d7f568-bbkdz        50m (0%)      50m (0%)    50Mi (0%)        50Mi (0%)      19d
  dagster                     dagster-dagster-user-deployments-dagster-user-code-98f6c9dfsgv6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18d
  dagster                     dagster-run-d25c9c50-203a-4b65-8cfa-5a6bf58e7440-wnckn             0 (0%)        0 (0%)      0 (0%)           0 (0%)         18d
  dagster                     dagster-celery-workers-dagster-98d9dc54-6f2gk                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         22d
  kube-system                 traefik-5dd496474-n5b2r                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24d
  kube-system                 coredns-66c464876b-bjcdl                                           100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24d
  minio                       minio-deployment-65c85d5cc4-qft45                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         23d
  dagster                     dagster-rabbitmq-0                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         18d
  jupyter                     jupyter-notebook-57cc6bbf5c-g4lqd                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11d
  consul                      consul-controller-dff49c9f4-5q86q                                  100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      19d
  redpanda                    redpanda-redpanda-operator-68d584646c-x9s8j                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11d
  redpanda                    external-connectivity-1                                            1 (12%)       1 (12%)     2Gi (6%)         2Gi (6%)       11d
  external-dns                nginx-7848d4b86f-bj96z                                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11d
  vault                       vault-2                                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         19d
  vault                       vault-0                                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         19d
  consul                      consul-x94qb                                                       100m (1%)     100m (1%)   100Mi (0%)       100Mi (0%)     19d
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests     Limits
  --------           --------     ------
  cpu                1350m (16%)  1250m (15%)
  memory             2318Mi (7%)  2418Mi (7%)
  ephemeral-storage  0 (0%)       0 (0%)
  hugepages-1Gi      0 (0%)       0 (0%)
  hugepages-2Mi      0 (0%)       0 (0%)
Events:              <none>

Does that help?

So what you are implying is that it is not smart to put an ingress in front of the service, in order to avoid additional latency in the communication.

❯ kubectl get clusters external-connectivity -n redpanda -o=jsonpath='{.status.nodes.external}'  | jq -r 'join(",")'
89.233.107.140:31170,63.250.53.137:31170
❯ rpk --brokers 89.233.107.140:31170,63.050.53.137:31170 topic list
Couldn't initialize API admin
Error: couldn't connect to redpanda at 89.233.107.140:31170, 63.050.53.137:31170. Try using --brokers to specify other brokers to connect to.

jitsejan commented 3 years ago

Running it again now seems to work:

❯ rpk --brokers 89.233.107.140:31170,63.050.53.137:31170 topic list
  Name        Partitions  Replicas
  test_topic  4           1

I will drop the subdomain requirement for now, because it indeed adds no real value and will slow things down.

parkerjm commented 2 years ago

The original error is fixed by #3101.

Not sure if we should be avoiding ingresses here, but by default one is created on k8s.