projectcontour / contour

Contour is a Kubernetes ingress controller using Envoy proxy.
https://projectcontour.io
Apache License 2.0

ExtensionService does not work when a Knative service is specified in ExtensionService #5399

Open kahirokunn opened 1 year ago

kahirokunn commented 1 year ago

What steps did you take and what happened:

ExtensionService does not work when a Knative Service is specified in the ExtensionService. I verified that the ExtensionService was reported as Valid, but I did not receive any gRPC traffic from Envoy (see the attached screenshot).
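Roughly, the ExtensionService looked like the following. Since only a screenshot was attached, this is an illustrative reconstruction rather than the exact manifest; the name, namespace, and port are taken from the Service manifests below.

apiVersion: projectcontour.io/v1alpha1
kind: ExtensionService
metadata:
  name: htpasswd
  namespace: projectcontour-auth
spec:
  # ExtensionService targets are resolved to Kubernetes Services by name.
  services:
  - name: htpasswd
    port: 80
  # gRPC needs HTTP/2; h2c here means cleartext HTTP/2 to the upstream (illustrative).
  protocol: h2c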

It works when the Service referenced by the ExtensionService has a spec.selector, as in the following example.

apiVersion: v1
kind: Service
metadata:
  annotations:
    autoscaling.knative.dev/class: kpa.autoscaling.knative.dev
    autoscaling.knative.dev/min-scale: "1"
    service.kubernetes.io/topology-aware-hints: auto
    serving.knative.dev/creator: system:serviceaccount:argocd:argocd-application-controller
  creationTimestamp: "2023-05-23T13:49:46Z"
  labels:
    app: htpasswd-00002
    networking.internal.knative.dev/serverlessservice: htpasswd-00002
    networking.internal.knative.dev/serviceType: Private
    serving.knative.dev/configuration: htpasswd
    serving.knative.dev/configurationGeneration: "2"
    serving.knative.dev/configurationUID: a167ea05-e4f3-4760-a655-6f90d4d3d901
    serving.knative.dev/revision: htpasswd-00002
    serving.knative.dev/revisionUID: 64b65810-1d84-49e1-b05b-6719c221ef94
    serving.knative.dev/service: htpasswd
    serving.knative.dev/serviceUID: 296104df-176f-4c4e-87bd-12e98f6426b1
  name: htpasswd-00002-private
  namespace: projectcontour-auth
  ownerReferences:
  - apiVersion: networking.internal.knative.dev/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: ServerlessService
    name: htpasswd-00002
    uid: c115027d-5d04-48f1-8e1a-f4336edd2e80
  resourceVersion: "43611844"
  uid: abc0823b-019c-4841-959f-60a31476d69c
spec:
  clusterIP: 10.67.215.173
  clusterIPs:
  - 10.67.215.173
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - name: http2
    port: 80
    protocol: TCP
    targetPort: 8013
  - name: https
    port: 443
    protocol: TCP
    targetPort: 8112
  - name: http-autometric
    port: 9090
    protocol: TCP
    targetPort: http-autometric
  - name: http-usermetric
    port: 9091
    protocol: TCP
    targetPort: http-usermetric
  - name: http-queueadm
    port: 8022
    protocol: TCP
    targetPort: 8022
  - name: http2-istio
    port: 8013
    protocol: TCP
    targetPort: 8013
  selector:
    serving.knative.dev/revisionUID: 64b65810-1d84-49e1-b05b-6719c221ef94
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

However, many Services today do not have a spec.selector at all, for example a Service created by Cilium, or one created by Knative.

apiVersion: v1
kind: Service
metadata:
  annotations:
    argocd.argoproj.io/tracking-id: contour-htpasswd:serving.knative.dev/Service:projectcontour-auth/htpasswd
    service.kubernetes.io/topology-aware-hints: auto
    serving.knative.dev/creator: system:serviceaccount:argocd:argocd-application-controller
    serving.knative.dev/lastModifier: system:serviceaccount:argocd:argocd-application-controller
  creationTimestamp: "2023-05-23T13:45:23Z"
  labels:
    argocd.argoproj.io/instance: contour-htpasswd
    serving.knative.dev/route: htpasswd
    serving.knative.dev/service: htpasswd
  name: htpasswd
  namespace: projectcontour-auth
  ownerReferences:
  - apiVersion: serving.knative.dev/v1
    blockOwnerDeletion: true
    controller: true
    kind: Route
    name: htpasswd
    uid: 33c1d1cd-875a-46d2-b458-61aeb3979f98
  resourceVersion: "43607899"
  uid: db73e3d0-41e5-4df5-a640-efae9dfc9eea
spec:
  clusterIP: None
  clusterIPs:
  - None
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  - IPv6
  ipFamilyPolicy: RequireDualStack
  ports:
  - name: http2
    port: 80
    protocol: TCP
    targetPort: 80
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

The common denominator is that the operator creates the Endpoints object itself, rather than leaving it to the endpoints controller.

apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    argocd.argoproj.io/tracking-id: contour-htpasswd:serving.knative.dev/Service:projectcontour-auth/htpasswd
    serving.knative.dev/creator: system:serviceaccount:argocd:argocd-application-controller
    serving.knative.dev/lastModifier: system:serviceaccount:argocd:argocd-application-controller
  creationTimestamp: "2023-05-23T13:45:23Z"
  labels:
    argocd.argoproj.io/instance: contour-htpasswd
    serving.knative.dev/route: htpasswd
    serving.knative.dev/service: htpasswd
  name: htpasswd
  namespace: projectcontour-auth
  ownerReferences:
  - apiVersion: serving.knative.dev/v1
    blockOwnerDeletion: true
    controller: true
    kind: Route
    name: htpasswd
    uid: 33c1d1cd-875a-46d2-b458-61aeb3979f98
  resourceVersion: "43607900"
  uid: 617da6ee-1c53-4440-9126-8120c96d6131
subsets:
- addresses:
  - ip: 10.67.56.0
  ports:
  - name: http2
    port: 80
    protocol: TCP

In the case of Cilium, it is important to send traffic to the Service's IP, not to the IPs of the individual endpoints.
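To sketch what that would mean on the Envoy side, the extension cluster would need to point at the Service's in-cluster DNS name (or ClusterIP) instead of at EDS-discovered pod IPs. Something along these lines; this is a hand-written illustration, not actual Contour output, and the cluster name only mimics Contour's extension/<namespace>/<name> convention:

# Route the extension cluster to the Service itself instead of individual endpoints.
name: extension/projectcontour-auth/htpasswd
type: STRICT_DNS
connect_timeout: 2s
load_assignment:
  cluster_name: extension/projectcontour-auth/htpasswd
  endpoints:
  - lb_endpoints:
    - endpoint:
        address:
          socket_address:
            # The Service's in-cluster DNS name; kube-proxy/Cilium then picks the backend.
            address: htpasswd.projectcontour-auth.svc.cluster.local
            port_value: 80
# (HTTP/2 would additionally need to be enabled on this cluster for gRPC.)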

With the current Contour, I can't combine ExtensionService with Services managed by systems that do more advanced things, such as Cilium or Knative. Can you do something about this?

What did you expect to happen:

Envoy should send gRPC requests to the external auth server referenced by the ExtensionService, even though the Knative-managed Service has no spec.selector.

sidharthramesh commented 1 year ago

Very much related to #5396

github-actions[bot] commented 1 year ago

The Contour project currently lacks enough contributors to adequately respond to all Issues.

Please send feedback to the #contour channel in the Kubernetes Slack.