istio / istio

Connect, secure, control, and observe services.
https://istio.io
Apache License 2.0

ServiceEntry removes route and endpoint for MESH_INTERNAL service #53127

Closed · abaguas closed this issue 1 month ago

abaguas commented 1 month ago


Bug Description

I have a pod in my service mesh that sends traffic to 100.113.113.113, an IP address I cannot change, and I want to route that traffic to a Kubernetes service that is part of the mesh. On Istio v1.21.5 I could achieve this with the following ServiceEntry:

apiVersion: networking.istio.io/v1
kind: ServiceEntry
metadata:
  name: proxy-pac-12428
spec:
  addresses:
  - 100.113.113.113
  exportTo:
  - '.'
  hosts:
  - proxy-pac.proxy-3971-12428.svc.cluster.local
  location: MESH_INTERNAL
  ports:
  - name: http
    number: 8081
    protocol: tcp
  resolution: STATIC

However, after upgrading to Istio v1.22.x and then v1.23.1 this no longer works. I inspected the sidecar config on the source pod and noticed the listener no longer exists:

╰─➤  istioctl pc l idac-0 | grep "proxy-pac.proxy-3971-12428"
<empty>

The route, endpoint and cluster are still there:

╰─➤  istioctl pc routes idac-0 | grep "proxy-pac.proxy-3971-12428"
8081                                                                                      proxy-pac.proxy-3971-12428.svc.cluster.local:8081                                                 proxy-pac.proxy-3971-12428, 172.17.198.228                                                /*

╰─➤  istioctl pc endpoints idac-0 | grep "proxy-pac.proxy-3971-12428"
172.16.41.14:80                                         HEALTHY     OK                outbound|8081||proxy-pac.proxy-3971-12428.svc.cluster.local

╰─➤  istioctl pc clusters idac-0 | grep "proxy-pac.proxy-3971-12428"
proxy-pac.proxy-3971-12428.svc.cluster.local                                                 8081      -                                                                  outbound      EDS

I couldn't find an existing report of this issue, but #53000 describes similar unexpected ServiceEntry behavior, so I added a Sidecar resource as follows:

apiVersion: networking.istio.io/v1
kind: Sidecar
metadata:
  name: sidecar
spec:
  egress:
  - hosts:
    - "*/*"

This brought the listener back, but wiped the endpoint and route:

╰─➤  istioctl pc listeners idac-0 | grep "proxy-pac.proxy-3971-12428"
100.113.113.113 8081  ALL                                                     Cluster: outbound|8081||proxy-pac.proxy-3971-12428.svc.cluster.local

╰─➤  istioctl pc endpoints idac-0 | grep "proxy-pac.proxy-3971-12428"

╰─➤  istioctl pc routes idac-0 | grep "proxy-pac.proxy-3971-12428"

╰─➤  istioctl pc clusters idac-0 | grep "proxy-pac.proxy-3971-12428"
proxy-pac.proxy-3971-12428.svc.cluster.local                                                 8081      -                                                                  outbound      EDS

This is rather unexpected since a Sidecar with egress */* should not reduce visibility. Let me know if you need any additional information.

Version

$ istioctl version
client version: 1.23.0
control plane version: 1.23.1
data plane version: 1.23.1 (228 proxies)
$ kubectl version
Client Version: v1.31.0
Kustomize Version: v5.4.2
Server Version: v1.30.3

Additional Information

No response

howardjohn commented 1 month ago

Your configuration is essentially broken; the fact that it worked on older versions is really the bug.

apiVersion: networking.istio.io/v1
kind: ServiceEntry
metadata:
  name: proxy-pac-12428
spec:
  addresses:
  - 100.113.113.113
  exportTo:
  - '.'
  hosts:
  - proxy-pac.proxy-3971-12428.svc.cluster.local
  location: MESH_INTERNAL
  ports:
  - name: http
    number: 8081
    protocol: tcp
  resolution: STATIC

This means "match the VIP 100.113.113.113 or the hostname proxy-pac.proxy-3971-12428.svc.cluster.local, and send to the set of endpoints I listed... which is empty".

So the Sidecar config makes it correctly send to an empty set of endpoints, just like you configured.
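For illustration only: a STATIC ServiceEntry with a non-empty endpoints list would look something like this, with the pod IP and target port copied from the endpoint output above (pinning pod IPs like this is fragile, since they change whenever the pod is rescheduled):

apiVersion: networking.istio.io/v1
kind: ServiceEntry
metadata:
  name: proxy-pac-12428
spec:
  addresses:
  - 100.113.113.113
  exportTo:
  - '.'
  hosts:
  - proxy-pac.proxy-3971-12428.svc.cluster.local
  location: MESH_INTERNAL
  ports:
  - name: http
    number: 8081
    protocol: tcp
  resolution: STATIC
  endpoints:
  - address: 172.16.41.14 # pod IP from the endpoint output above; illustrative only
    ports:
      http: 80 # target port, per the same output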

The issue before was some weird amalgamation where half of each config was being applied and somehow worked. Note that the issue of requiring a Sidecar is fixed in https://github.com/istio/istio/pull/51776, which should merge soon.

What you want is this:

apiVersion: networking.istio.io/v1
kind: ServiceEntry
metadata:
  name: proxy-pac-12428
spec:
  addresses:
  - 100.113.113.113
  exportTo:
  - '.'
  hosts:
  - dummy-proxy-pac-12428.internal # Not used, just a lookup key
  location: MESH_INTERNAL
  ports:
  - name: http
    number: 8081
    protocol: tcp
  resolution: STATIC
---
apiVersion: networking.istio.io/v1
kind: VirtualService
metadata:
  name: proxy-pac-12428
spec:
  hosts:
  - dummy-proxy-pac-12428.internal # Not used, just a lookup key
  tcp:
  - route:
    - destination:
        host: proxy-pac.proxy-3971-12428.svc.cluster.local
        port:
          number: 8081
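
With the ServiceEntry and VirtualService applied, the listener on 100.113.113.113:8081 should forward to the Kubernetes Service's cluster (outbound|8081||proxy-pac.proxy-3971-12428.svc.cluster.local). A quick sanity check, assuming the same source pod as above:

╰─➤  istioctl pc listeners idac-0 | grep "100.113.113.113"
╰─➤  istioctl pc clusters idac-0 | grep "proxy-pac.proxy-3971-12428"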

You could also do the following, but it's strictly worse: it will send to the Service IP, bypassing most Istio functionality:

apiVersion: networking.istio.io/v1
kind: ServiceEntry
metadata:
  name: proxy-pac-12428
spec:
  addresses:
  - 100.113.113.113
  exportTo:
  - '.'
  hosts:
  - dummy-proxy-pac-12428.internal # Not used, just a lookup key
  location: MESH_INTERNAL
  ports:
  - name: http
    number: 8081
    protocol: tcp
  resolution: DNS
  endpoints:
  - address: proxy-pac.proxy-3971-12428.svc.cluster.local
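
In short: the VirtualService variant keeps traffic on the Service's EDS cluster, so Istio still does endpoint-level load balancing and policy, while this DNS variant just resolves the Service name and forwards to whatever address comes back.
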
abaguas commented 1 month ago

Thank you for the in-depth explanation and for the Sidecar fix @howardjohn. Given the previous behavior, I thought the hosts key would create a route to the cluster derived from the Kubernetes Service; now it is clear that it is just a lookup key and a VirtualService is required.