envoyproxy / gateway

Manages Envoy Proxy as a Standalone or Kubernetes-based Application Gateway
https://gateway.envoyproxy.io
Apache License 2.0

eg cannot expose the application outside the Kubernetes cluster. #3643

Closed: JaeGerW2016 closed this issue 1 month ago

JaeGerW2016 commented 3 months ago

Description: When using Cilium's L2 Announcement (ARP) feature after migrating from MetalLB to Cilium's CNI LoadBalancer mode, the application cannot be exposed outside the Kubernetes cluster. It can only be reached from the cluster nodes themselves, not from external devices.
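Since L2 announcements work by answering ARP for the service IP, a quick first check from the external device is whether the LoadBalancer IP resolves to a MAC at all. A minimal diagnostic sketch (IP and interface names taken from this report; `arping` availability is an assumption):

```shell
# Diagnostic sketch from the external device (192.168.2.209 in this report).
LB_IP=192.168.2.132   # Envoy service LoadBalancer IP from this issue
IFACE=ens33           # external device's network interface

if command -v arping >/dev/null 2>&1; then
  # A healthy L2 announcement should answer with a cluster node's MAC.
  arping -c 3 -I "$IFACE" "$LB_IP" || echo "no ARP reply for $LB_IP"
else
  echo "arping not installed; checking the neighbour table instead:"
  ip neigh 2>/dev/null | grep "$LB_IP" || echo "no neighbour entry for $LB_IP"
fi
```

If the IP never resolves to a MAC, the failure is below L7 and Envoy never sees the traffic.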


Environment Information

Steps to Reproduce

  1. Kubernetes cluster.
root@node1:~# kubectl version 
Client Version: v1.29.5
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.5

root@node1:~# kubectl get node -owide
NAME    STATUS   ROLES           AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                         KERNEL-VERSION   CONTAINER-RUNTIME
node1   Ready    control-plane   8d    v1.29.5   192.168.2.220   <none>        Debian GNU/Linux 12 (bookworm)   6.1.0-13-amd64   containerd://1.7.16
node2   Ready    <none>          8d    v1.29.5   192.168.2.243   <none>        Debian GNU/Linux 12 (bookworm)   6.1.0-13-amd64   containerd://1.7.16
node3   Ready    <none>          8d    v1.29.5   192.168.2.222   <none>        Debian GNU/Linux 12 (bookworm)   6.1.0-13-amd64   containerd://1.7.16
  2. Install Cilium and enable the L2 Announcement with ARP feature.
root@node1:~# cilium status 
    /¯¯\
 /¯¯\__/¯¯\    Cilium:             OK
 \__/¯¯\__/    Operator:           OK
 /¯¯\__/¯¯\    Envoy DaemonSet:    disabled (using embedded mode)
 \__/¯¯\__/    Hubble Relay:       disabled
    \__/       ClusterMesh:        disabled

Deployment             cilium-operator    Desired: 2, Ready: 2/2, Available: 2/2
DaemonSet              cilium             Desired: 3, Ready: 3/3, Available: 3/3
Containers:            cilium             Running: 3
                       cilium-operator    Running: 2
Cluster Pods:          9/9 managed by Cilium
Helm chart version:    
Image versions         cilium             quay.io/cilium/cilium:v1.15.6: 3
                       cilium-operator    quay.io/cilium/operator:v1.15.6: 2

## verify that L2 announcements are enabled in the Cilium config

root@node1:~# cilium config view | grep "enable-l2-announcements"
enable-l2-announcements         True

CiliumL2AnnouncementPolicy.yaml

apiVersion: "cilium.io/v2alpha1"
kind: CiliumL2AnnouncementPolicy
metadata:
  name: default-l2policy
  namespace: kube-system
spec:
  nodeSelector:
    matchExpressions:
      - key: node-role.kubernetes.io/control-plane
        operator: DoesNotExist
  interfaces:
  - ens33
  - enp2s1
  externalIPs: true
  loadBalancerIPs: true

lb-IPPool.yaml

root@node1:/opt/cilium-l2-Aware-LB# cat lb-IPPool.yaml 
apiVersion: "cilium.io/v2alpha1"
kind: CiliumLoadBalancerIPPool
metadata:
  name: "lb-pool"
  namespace: kube-system
spec:
  cidrs:
  - cidr: "192.168.2.128/28"
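Once the policy and pool above are applied, Cilium elects one node per service to hold a lease and answer ARP for its IP. A hedged verification sketch (the `cilium-l2announce-<namespace>-<service>` lease naming is an assumption based on Cilium's convention; requires cluster access):

```shell
# Verify the IP pool was accepted and see which node announces each service IP.
if command -v kubectl >/dev/null 2>&1; then
  kubectl get ciliumloadbalancerippools.cilium.io -o wide || true
  # Each announced service gets a kube-system lease naming the announcing node.
  kubectl -n kube-system get leases | grep cilium-l2announce || true
else
  echo "kubectl not available in this environment"
fi
CHECKED_POOL="lb-pool"   # pool name from lb-IPPool.yaml above
```

If no lease exists for a given service, no node will ever answer ARP for its LoadBalancer IP.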
  3. Node operating system and kernel version.
root@node1:~# cat /etc/debian_version 
12.2
root@node1:~# uname -rsa
Linux node1 6.1.0-13-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.55-1 (2023-09-29) x86_64 GNU/Linux
  4. Configure Envoy Gateway to expose the application.

the-moon-all-in-one-with-envoy-gateway.yaml

---
apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: envoy-gateway
spec:
  controllerName: gateway.envoyproxy.io/gatewayclass-controller
---
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: envoy-gateway
spec:
  gatewayClassName: envoy-gateway
  listeners:
    - name: http
      protocol: HTTP
      port: 80
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: moon-svc
spec:
  parentRefs:
    - name: envoy-gateway
  rules:
    - matches:
      - path:
          type: PathPrefix
          value: /
      backendRefs:
        - name: moon-svc
          port: 80

---
apiVersion: v1
kind: Service
metadata:
  name: moon-lb-svc
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
      name: http
  selector:
    app: moon
---
apiVersion: v1
kind: Service
metadata:
  name: moon-svc
  labels:
    app: moon
spec:
  # clusterIP: None
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
      name: http
  selector:
    app: moon
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: moon
spec:
  replicas: 1
  selector:
    matchLabels:
      app: moon
  template:
    metadata:
      labels:
        app: moon
    spec:
      containers:
        - name: moon
          image: armsultan/solar-system:moon-nonroot
          imagePullPolicy: Always
          # resources:
          #   limits:
          #     cpu: "1"
          #     memory: "200Mi"
          #   requests:
          #     cpu: "0.5"
          #     memory: "100Mi"
          ports:
            - containerPort: 8080
root@node1:/opt/cilium-l2-Aware-LB# kubectl get pod,svc -n default
NAME                        READY   STATUS    RESTARTS   AGE
pod/moon-7bcd85b4c4-t5xvf   1/1     Running   0          21h

NAME                  TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
service/kubernetes    ClusterIP      10.233.0.1      <none>          443/TCP        8d
service/moon-lb-svc   LoadBalancer   10.233.16.153   192.168.2.130   80:31336/TCP   21h
service/moon-svc      ClusterIP      10.233.56.217   <none>          80/TCP         21h

root@node1:/opt/cilium-l2-Aware-LB# kubectl get pod,svc -n envoy-gateway-system
NAME                                                      READY   STATUS    RESTARTS   AGE
pod/envoy-default-envoy-gateway-12b6bb46-cf6dfb77-ksbsq   2/2     Running   0          21h
pod/envoy-gateway-67844c8844-btwn7                        1/1     Running   0          3d18h

NAME                                           TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)               AGE
service/envoy-default-envoy-gateway-12b6bb46   LoadBalancer   10.233.4.1      192.168.2.132   80:31895/TCP          21h
service/envoy-gateway                          ClusterIP      10.233.48.24    <none>          18000/TCP,18001/TCP   3d18h
service/envoy-gateway-metrics-service          ClusterIP      10.233.46.160   <none>          19001/TCP             3d18h
export ENVOY_SERVICE=$(kubectl get svc -n envoy-gateway-system --selector=gateway.envoyproxy.io/owning-gateway-name=envoy-gateway,gateway.envoyproxy.io/owning-gateway-namespace=default -o jsonpath='{.items[0].metadata.name}')

export GATEWAY_HOST=$(kubectl get svc/${ENVOY_SERVICE} -n envoy-gateway-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  5. Attempt to access the application from outside the Kubernetes cluster.
# curl from a device outside the Kubernetes cluster (device IP 192.168.2.209)

root@debian:~# ip address | grep ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    inet 192.168.2.209/24 brd 192.168.2.255 scope global dynamic ens33
root@debian:~# curl http://${GATEWAY_HOST}
curl: (7) Failed to connect to 192.168.2.132 port 80 after 3 ms: Couldn't connect to server
# curl from node1, inside the Kubernetes cluster

root@node1:~# curl --verbose -sIL -w "%{http_code}\n" -o /dev/null http://${GATEWAY_HOST}
*   Trying 192.168.2.132:80...
* Connected to 192.168.2.132 (192.168.2.132) port 80 (#0)
> HEAD / HTTP/1.1
> Host: 192.168.2.132
> User-Agent: curl/7.88.1
> Accept: */*
> 
< HTTP/1.1 200 OK
< server: nginx/1.21.6
< date: Fri, 21 Jun 2024 03:06:34 GMT
< content-type: text/html
< expires: Fri, 21 Jun 2024 03:06:33 GMT
< cache-control: no-cache
< transfer-encoding: chunked
< 
* Connection #0 to host 192.168.2.132 left intact
200

# curl from a device outside the Kubernetes cluster (device IP 192.168.2.209)

export MOON_LB_HOST=$(kubectl get svc/moon-lb-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

root@debian:~# curl --verbose -sIL -w "%{http_code}\n" http://${MOON_LB_HOST}
*   Trying 192.168.2.130:80...
* Connected to 192.168.2.130 (192.168.2.130) port 80 (#0)
> HEAD / HTTP/1.1
> Host: 192.168.2.130
> User-Agent: curl/7.88.1
> Accept: */*
> 
< HTTP/1.1 200 OK
HTTP/1.1 200 OK
< Server: nginx/1.21.6
Server: nginx/1.21.6
< Date: Fri, 21 Jun 2024 03:13:06 GMT
Date: Fri, 21 Jun 2024 03:13:06 GMT
< Content-Type: text/html
Content-Type: text/html
< Connection: keep-alive
Connection: keep-alive
< Expires: Fri, 21 Jun 2024 03:13:05 GMT
Expires: Fri, 21 Jun 2024 03:13:05 GMT
< Cache-Control: no-cache
Cache-Control: no-cache

< 
* Connection #0 to host 192.168.2.130 left intact
200
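The split result above (both LoadBalancer IPs reachable from a node, but only moon-lb-svc reachable externally) is the classic signature of a missing ARP announcement for one of the two IPs. One way to confirm, sketched under the assumption that the capture runs as root on the node expected to hold the lease for 192.168.2.132:

```shell
IFACE=ens33            # node interface listed in the L2 policy above
LB_IP=192.168.2.132    # the Envoy gateway's LoadBalancer IP

# If "who-has 192.168.2.132" requests arrive but no "is-at" reply leaves,
# the announcement (not Envoy itself) is where traffic dies.
if command -v tcpdump >/dev/null 2>&1; then
  timeout 5 tcpdump -eni "$IFACE" arp host "$LB_IP" || true
else
  echo "tcpdump not available in this environment"
fi
```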

Expected Behavior

The application should be accessible from devices outside the Kubernetes cluster.

Actual Behavior

The application can only be accessed within Kubernetes cluster nodes, and external devices cannot access it.

Configuration Files and Logs

cilium config

agent-health-port               9879
auto-direct-node-routes         False
bpf-ct-global-any-max           262144
bpf-ct-global-tcp-max           524288
bpf-lb-mode                     snat
bpf-map-dynamic-size-ratio      0.0025
cgroup-root                     /run/cilium/cgroupv2
clean-cilium-bpf-state          false
clean-cilium-state              false
cluster-name                    default
cluster-pool-ipv4-cidr          10.233.64.0/18
cluster-pool-ipv4-mask-size     24
cni-exclusive                   True
cni-log-file                    /var/run/cilium/cilium-cni.log
custom-cni-conf                 false
debug                           False
disable-cnp-status-updates      True
enable-bpf-clock-probe          True
enable-bpf-masquerade           False
enable-host-legacy-routing      True
enable-ip-masq-agent            False
enable-ipv4                     True
enable-ipv4-masquerade          True
enable-ipv6                     False
enable-ipv6-masquerade          True
enable-l2-announcements         True
enable-remote-node-identity     True
enable-well-known-identities    False
etcd-config                     ---
endpoints:
  - https://192.168.2.220:2379
ca-file: "/etc/cilium/certs/ca_cert.crt"
key-file: "/etc/cilium/certs/key.pem"
cert-file: "/etc/cilium/certs/cert.crt"
identity-allocation-mode     kvstore
ipam                         cluster-pool
k8s-client-burst             100
k8s-client-qps               50
kube-proxy-replacement       strict
kvstore                      etcd
kvstore-opt                  {"etcd.config": "/var/lib/etcd-config/etcd.config"}
monitor-aggregation          medium
monitor-aggregation-flags    all
operator-api-serve-addr      127.0.0.1:9234
preallocate-bpf-maps         False
routing-mode                 tunnel
sidecar-istio-proxy-image    cilium/istio_proxy
tunnel-protocol              vxlan
write-cni-conf-when-ready    /host/etc/cni/net.d/05-cilium.conflist

envoy configuration

root@node1:~# kubectl get cm envoy-gateway-config -n envoy-gateway-system -oyaml 
apiVersion: v1
data:
  envoy-gateway.yaml: |
    apiVersion: gateway.envoyproxy.io/v1alpha1
    kind: EnvoyGateway
    gateway:
      controllerName: gateway.envoyproxy.io/gatewayclass-controller
    logging:
      level:
        default: info
    provider:
      type: Kubernetes
kind: ConfigMap
metadata:
  creationTimestamp: "2024-06-17T08:05:09Z"
  labels:
    app.kubernetes.io/instance: eg
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: gateway-helm
    app.kubernetes.io/version: latest
    helm.sh/chart: gateway-helm-v0.0.0-latest
  name: envoy-gateway-config
  namespace: envoy-gateway-system
  resourceVersion: "1416276"
  uid: 10b6988e-73bf-426c-b165-694c56a0d93a

root@node1:~# kubectl get cm envoy-default-envoy-gateway-12b6bb46 -n envoy-gateway-system -oyaml 
apiVersion: v1
data:
  xds-certificate.json: '{"resources":[{"@type":"type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.Secret","name":"xds_certificate","tls_certificate":{"certificate_chain":{"filename":"/certs/tls.crt"},"private_key":{"filename":"/certs/tls.key"}}}]}'
  xds-trusted-ca.json: '{"resources":[{"@type":"type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.Secret","name":"xds_trusted_ca","validation_context":{"trusted_ca":{"filename":"/certs/ca.crt"},"match_typed_subject_alt_names":[{"san_type":"DNS","matcher":{"exact":"envoy-gateway"}}]}}]}'
kind: ConfigMap
metadata:
  creationTimestamp: "2024-06-20T05:14:11Z"
  labels:
    app.kubernetes.io/component: proxy
    app.kubernetes.io/managed-by: envoy-gateway
    app.kubernetes.io/name: envoy
    gateway.envoyproxy.io/owning-gateway-name: envoy-gateway
    gateway.envoyproxy.io/owning-gateway-namespace: default
  name: envoy-default-envoy-gateway-12b6bb46
  namespace: envoy-gateway-system
  resourceVersion: "2413043"
  uid: a8dbddb5-7494-46e6-9010-a09fea160fe2

envoy-gateway logs

root@node1:~# kubectl logs envoy-default-envoy-gateway-12b6bb46-cf6dfb77-ksbsq -n envoy-gateway-system 
Defaulted container "envoy" out of: envoy, shutdown-manager
[2024-06-20 05:14:13.338][1][warning][main] [source/server/server.cc:910] There is no configured limit to the number of allowed active downstream connections. Configure a limit in `envoy.resource_monitors.downstream_connections` resource monitor.
{"start_time":"2024-06-21T01:32:10.407Z","method":"GET","x-envoy-origin-path":"/","protocol":"HTTP/1.1","response_code":"200","response_flags":"-","response_code_details":"via_upstream","connection_termination_details":"-","upstream_transport_failure_reason":"-","bytes_received":"0","bytes_sent":"194711","duration":"71","x-envoy-upstream-service-time":"-","x-forwarded-for":"10.233.64.190","user-agent":"curl/7.88.1","x-request-id":"5c934668-8c15-48bc-8527-2d2534438ebe",":authority":"192.168.2.132","upstream_host":"10.233.65.168:8080","upstream_cluster":"httproute/default/moon-svc/rule/0","upstream_local_address":"10.233.64.79:59712","downstream_local_address":"10.233.64.79:10080","downstream_remote_address":"10.233.64.190:52808","requested_server_name":"-","route_name":"httproute/default/moon-svc/rule/0/match/0/*"}
{"start_time":"2024-06-21T02:50:12.061Z","method":"HEAD","x-envoy-origin-path":"/","protocol":"HTTP/1.1","response_code":"200","response_flags":"-","response_code_details":"via_upstream","connection_termination_details":"-","upstream_transport_failure_reason":"-","bytes_received":"0","bytes_sent":"0","duration":"9","x-envoy-upstream-service-time":"-","x-forwarded-for":"10.233.64.190","user-agent":"curl/7.88.1","x-request-id":"4efb4bd3-ed81-4c3c-b8eb-85b42a83e048",":authority":"192.168.2.132","upstream_host":"10.233.65.168:8080","upstream_cluster":"httproute/default/moon-svc/rule/0","upstream_local_address":"10.233.64.79:40316","downstream_local_address":"10.233.64.79:10080","downstream_remote_address":"10.233.64.190:42128","requested_server_name":"-","route_name":"httproute/default/moon-svc/rule/0/match/0/*"}
{"start_time":"2024-06-21T02:50:53.437Z","method":"GET","x-envoy-origin-path":"/","protocol":"HTTP/1.1","response_code":"200","response_flags":"-","response_code_details":"via_upstream","connection_termination_details":"-","upstream_transport_failure_reason":"-","bytes_received":"0","bytes_sent":"194711","duration":"6","x-envoy-upstream-service-time":"-","x-forwarded-for":"10.233.64.190","user-agent":"curl/7.88.1","x-request-id":"d5c90983-a665-4e3a-aa39-0a757685802e",":authority":"192.168.2.132","upstream_host":"10.233.65.168:8080","upstream_cluster":"httproute/default/moon-svc/rule/0","upstream_local_address":"10.233.64.79:53656","downstream_local_address":"10.233.64.79:10080","downstream_remote_address":"10.233.64.190:43888","requested_server_name":"-","route_name":"httproute/default/moon-svc/rule/0/match/0/*"}
{"start_time":"2024-06-21T03:01:28.178Z","method":"GET","x-envoy-origin-path":"/","protocol":"HTTP/1.1","response_code":"200","response_flags":"-","response_code_details":"via_upstream","connection_termination_details":"-","upstream_transport_failure_reason":"-","bytes_received":"0","bytes_sent":"194711","duration":"12","x-envoy-upstream-service-time":"-","x-forwarded-for":"10.233.64.190","user-agent":"curl/7.88.1","x-request-id":"63fbab30-f887-4b24-9245-a9b3ea44edcb",":authority":"192.168.2.132","upstream_host":"10.233.65.168:8080","upstream_cluster":"httproute/default/moon-svc/rule/0","upstream_local_address":"10.233.64.79:54224","downstream_local_address":"10.233.64.79:10080","downstream_remote_address":"10.233.64.190:41780","requested_server_name":"-","route_name":"httproute/default/moon-svc/rule/0/match/0/*"}
{"start_time":"2024-06-21T03:05:57.141Z","method":"GET","x-envoy-origin-path":"/","protocol":"HTTP/1.1","response_code":"200","response_flags":"-","response_code_details":"via_upstream","connection_termination_details":"-","upstream_transport_failure_reason":"-","bytes_received":"0","bytes_sent":"194711","duration":"4","x-envoy-upstream-service-time":"-","x-forwarded-for":"10.233.64.190","user-agent":"curl/7.88.1","x-request-id":"cf2c358d-9025-4b03-bf3b-26590df6e303",":authority":"192.168.2.132","upstream_host":"10.233.65.168:8080","upstream_cluster":"httproute/default/moon-svc/rule/0","upstream_local_address":"10.233.64.79:39316","downstream_local_address":"10.233.64.79:10080","downstream_remote_address":"10.233.64.190:46288","requested_server_name":"-","route_name":"httproute/default/moon-svc/rule/0/match/0/*"}
{"start_time":"2024-06-21T03:06:23.420Z","method":"HEAD","x-envoy-origin-path":"/","protocol":"HTTP/1.1","response_code":"200","response_flags":"-","response_code_details":"via_upstream","connection_termination_details":"-","upstream_transport_failure_reason":"-","bytes_received":"0","bytes_sent":"0","duration":"1","x-envoy-upstream-service-time":"-","x-forwarded-for":"10.233.64.190","user-agent":"curl/7.88.1","x-request-id":"377a3be0-d0ec-429b-9437-f871813c6033",":authority":"192.168.2.132","upstream_host":"10.233.65.168:8080","upstream_cluster":"httproute/default/moon-svc/rule/0","upstream_local_address":"10.233.64.79:60636","downstream_local_address":"10.233.64.79:10080","downstream_remote_address":"10.233.64.190:35658","requested_server_name":"-","route_name":"httproute/default/moon-svc/rule/0/match/0/*"}
{"start_time":"2024-06-21T03:06:34.506Z","method":"HEAD","x-envoy-origin-path":"/","protocol":"HTTP/1.1","response_code":"200","response_flags":"-","response_code_details":"via_upstream","connection_termination_details":"-","upstream_transport_failure_reason":"-","bytes_received":"0","bytes_sent":"0","duration":"1","x-envoy-upstream-service-time":"-","x-forwarded-for":"10.233.64.190","user-agent":"curl/7.88.1","x-request-id":"ad54a9ea-4b29-4774-bfa6-c4067080fe99",":authority":"192.168.2.132","upstream_host":"10.233.65.168:8080","upstream_cluster":"httproute/default/moon-svc/rule/0","upstream_local_address":"10.233.64.79:60636","downstream_local_address":"10.233.64.79:10080","downstream_remote_address":"10.233.64.190:58366","requested_server_name":"-","route_name":"httproute/default/moon-svc/rule/0/match/0/*"}
JaeGerW2016 commented 3 months ago

cilium bpf lb list

#envoy-default-envoy-gateway-12b6bb46 loadbalancer ip 192.168.2.132

root@node1:~# kubectl exec -it -n kube-system cilium-fdcw7 -- cilium bpf lb list | grep "192.168.2.132"
Defaulted container "cilium-agent" out of: cilium-agent, mount-cgroup (init), apply-sysctl-overwrites (init), clean-cilium-state (init), install-cni-binaries (init)
192.168.2.132:80 (0)        0.0.0.0:0 (119) (0) [LoadBalancer, Local, two-scopes]              
192.168.2.132:80 (1)        10.233.64.79:10080 (119) (1)                                       
192.168.2.132:80/i (1)      10.233.64.79:10080 (120) (1)                                       
192.168.2.132:80/i (0)      0.0.0.0:0 (120) (0) [LoadBalancer, Local, two-scopes]        

#moon-lb-svc loadbalancer ip 192.168.2.130
root@node1:~# kubectl exec -it -n kube-system cilium-fdcw7 -- cilium bpf lb list | grep "192.168.2.130"
Defaulted container "cilium-agent" out of: cilium-agent, mount-cgroup (init), apply-sysctl-overwrites (init), clean-cilium-state (init), install-cni-binaries (init)
192.168.2.130:80 (1)        10.233.65.168:8080 (116) (1)                                       
192.168.2.130:80 (0)        0.0.0.0:0 (116) (0) [LoadBalancer]      

#pod/envoy-default-envoy-gateway-12b6bb46-cf6dfb77-ksbsq ip 10.233.64.79
root@node1:~# kubectl exec -it -n kube-system cilium-fdcw7 -- cilium bpf lb list | grep "10.233.64.79"
Defaulted container "cilium-agent" out of: cilium-agent, mount-cgroup (init), apply-sysctl-overwrites (init), clean-cilium-state (init), install-cni-binaries (init)
0.0.0.0:31895 (1)           10.233.64.79:10080 (123) (1)                                       
10.233.4.1:80 (1)           10.233.64.79:10080 (118) (1)                                       
192.168.2.132:80 (1)        10.233.64.79:10080 (119) (1)                                       
0.0.0.0:31895/i (1)         10.233.64.79:10080 (124) (1)                                       
192.168.2.220:31895/i (1)   10.233.64.79:10080 (122) (1)                                       
192.168.2.132:80/i (1)      10.233.64.79:10080 (120) (1)                                       
192.168.2.220:31895 (1)     10.233.64.79:10080 (121) (1)            
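One detail worth pulling out of the dumps above: in Cilium's flag column the Envoy frontend is marked `[LoadBalancer, Local, two-scopes]` while moon-lb-svc is a plain `[LoadBalancer]`, which suggests the two services are programmed with different traffic policies (worth confirming with `kubectl get svc -o jsonpath='{.spec.externalTrafficPolicy}'`). A minimal, locally runnable sketch over two lines copied from the output above:

```shell
# Sample lines copied verbatim from the `cilium bpf lb list` output in this issue.
LB_SAMPLE='192.168.2.132:80 (0)        0.0.0.0:0 (119) (0) [LoadBalancer, Local, two-scopes]
192.168.2.130:80 (0)        0.0.0.0:0 (116) (0) [LoadBalancer]'

# Extract just the flag column for each frontend IP.
ENVOY_FLAGS=$(printf '%s\n' "$LB_SAMPLE" | awk '/^192.168.2.132/ {sub(/^[^[]*\[/, "["); print}')
MOON_FLAGS=$(printf '%s\n' "$LB_SAMPLE" | awk '/^192.168.2.130/ {sub(/^[^[]*\[/, "["); print}')

echo "envoy svc flags: $ENVOY_FLAGS"
echo "moon  svc flags: $MOON_FLAGS"
```

Under an L2 announcement setup, a `Local` traffic policy matters: only nodes with a local backend can serve external traffic for that frontend.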

Envoy GatewayClass, Gateway, and HTTPRoute info


root@node1:~# kubectl get gc
NAME            CONTROLLER                                      ACCEPTED   AGE
envoy-gateway   gateway.envoyproxy.io/gatewayclass-controller   True       8d
root@node1:~# kubectl get gateways
NAME            CLASS           ADDRESS         PROGRAMMED   AGE
envoy-gateway   envoy-gateway   192.168.2.132   True         8d
root@node1:~# kubectl get httproutes
NAME       HOSTNAMES   AGE
moon-svc               8d
root@node1:~# kubectl get httproutes moon-svc -oyaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"gateway.networking.k8s.io/v1","kind":"HTTPRoute","metadata":{"annotations":{},"name":"moon-svc","namespace":"default"},"spec":{"parentRefs":[{"name":"envoy-gateway"}],"rules":[{"backendRefs":[{"name":"moon-svc","port":80}],"matches":[{"path":{"type":"PathPrefix","value":"/"}}]}]}}
  creationTimestamp: "2024-06-20T05:14:11Z"
  generation: 1
  name: moon-svc
  namespace: default
  resourceVersion: "2413037"
  uid: cd4338af-b40f-4e34-97d8-f3cbfc837544
spec:
  parentRefs:
  - group: gateway.networking.k8s.io
    kind: Gateway
    name: envoy-gateway
  rules:
  - backendRefs:
    - group: ""
      kind: Service
      name: moon-svc
      port: 80
      weight: 1
    matches:
    - path:
        type: PathPrefix
        value: /
status:
  parents:
  - conditions:
    - lastTransitionTime: "2024-06-20T05:14:11Z"
      message: Route is accepted
      observedGeneration: 1
      reason: Accepted
      status: "True"
      type: Accepted
    - lastTransitionTime: "2024-06-20T05:14:11Z"
      message: Resolved all the Object references for the Route
      observedGeneration: 1
      reason: ResolvedRefs
      status: "True"
      type: ResolvedRefs
    controllerName: gateway.envoyproxy.io/gatewayclass-controller
    parentRef:
      group: gateway.networking.k8s.io
      kind: Gateway
      name: envoy-gateway
github-actions[bot] commented 2 months ago

This issue has been automatically marked as stale because it has not had activity in the last 30 days.

arkodg commented 1 month ago

@JaeGerW2016 can you help articulate why this is an Envoy Gateway issue and not a Cilium CNI issue?

JaeGerW2016 commented 1 month ago

@JaeGerW2016 can you help articulate why this is an Envoy Gateway issue and not a Cilium CNI issue?

The same Moon service is exposed both through the Envoy Proxy gateway and through a built-in Kubernetes LoadBalancer Service, both backed by the Cilium CNI load balancer. The built-in Service is reachable from outside the cluster as expected, but the gateway is not. Inspecting the CNI LB list suggests that traffic reaches the gateway's port 10080 but is never forwarded on to the backend.

JaeGerW2016 commented 1 month ago

To rule out the Cilium CNI as the cause, I deployed the same Moon service using the Istio Gateway CRD, and it worked. Then, after redeploying with Envoy Gateway v1.1.0 of the Gateway API, I was able to expose the application outside the Kubernetes cluster. The issue therefore appears to have been caused by a bug in Envoy Gateway v1.0.2.
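For anyone hitting the same symptom, the fix here was moving off v1.0.2. A hedged upgrade sketch, assuming the chart was installed as a Helm release named `eg` from the official OCI registry (adjust to your actual install method):

```shell
# Upgrade the Envoy Gateway Helm release to v1.1.0 (release name "eg" is an
# assumption based on the labels in the ConfigMap shown earlier in this issue).
if command -v helm >/dev/null 2>&1; then
  helm upgrade eg oci://docker.io/envoyproxy/gateway-helm \
    --version v1.1.0 -n envoy-gateway-system || true
else
  echo "helm not available in this environment"
fi
EG_VERSION="v1.1.0"   # version reported as working in this thread
```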