knative-extensions / net-istio

A Knative ingress controller for Istio.
Apache License 2.0

net-istio-controller probing fails with 403 error, Revisions stuck on "Waiting for load balancer to be ready" #700

Closed karlschriek closed 3 years ago

karlschriek commented 3 years ago

In the argoflow-aws project we are putting together a Kubeflow distribution that (among other things of course) implements Istio 1.10.2, KNative 0.24 and KFServing 0.6.0.

We are now fairly close to having a working solution, but are currently not able to reliably serve any KFServing models. KFServing essentially interacts with KNative as just another layer of domain-specific abstraction, so what we are dealing with here is basically just KNative and Istio.

Rolling out a KFServing InferenceService results in KNative Revisions being successfully created. Pods are healthy. However, Routes stay in an Uninitialized state with "Waiting for load balancer to be ready" reported in the status.

We have STRICT mTLS enabled and sidecars injected in all KNative Pods.
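For context, STRICT mTLS is enabled with a PeerAuthentication along these lines (an illustrative sketch, not our exact manifest):

```yaml
# Illustrative sketch of the STRICT mTLS setup, not the exact manifest in use.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # mesh-wide when placed in the mesh root namespace
spec:
  mtls:
    mode: STRICT   # sidecars reject any plain-text traffic
```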

If we restart the istio-ingressgateway Pod after the Revision has come online, there seems to be a small window of opportunity during which the probes succeed and the Route comes online. We are struggling to get to the bottom of why these probes are not going through. Any ideas on how we could figure out what is going on?

Below is a selection of relevant information.

@DavidSpek FYI, please add anything else you think is relevant.

Route status:

  conditions:
    - lastTransitionTime: '2021-07-09T15:57:09Z'
      status: 'True'
      type: AllTrafficAssigned
    - lastTransitionTime: '2021-07-09T15:57:09Z'
      message: autoTLS is not enabled
      reason: TLSNotEnabled
      status: 'True'
      type: CertificateProvisioned
    - lastTransitionTime: '2021-07-09T15:57:09Z'
      message: Waiting for load balancer to be ready
      reason: Uninitialized
      status: Unknown
      type: IngressReady
    - lastTransitionTime: '2021-07-09T15:57:09Z'
      message: Waiting for load balancer to be ready
      reason: Uninitialized
      status: Unknown
      type: Ready

The net-istio-controller reports that it is not able to probe the desired endpoint (10.10.200.78 is the IP address of the istio-ingressgateway Pod):

severity: ERROR
timestamp: '2021-07-09T15:57:35.292708708Z'
logger: net-istio-controller.istio-ingress-controller
caller: status/status.go:403
message: 'Probing of https://def-predictor-default-karl-schriek.serving.dev-kfserving-test-5.build-mlops-2.com:443 failed, IP: 10.10.200.78:8443, ready: false, error: error roundtripping https://def-predictor-default-karl-schriek.serving.dev-kfserving-test-5.build-mlops-2.com:443/healthz: EOF (depth: 0)'
commit: 2b5e707
knative.dev/controller: knative.dev.net-istio.pkg.reconciler.ingress.Reconciler
knative.dev/kind: networking.internal.knative.dev.Ingress
knative.dev/traceid: b8906902-48ff-4a7a-b793-dd35f5559a51
knative.dev/key: karl-schriek/def-predictor-default
stacktrace: "knative.dev/networking/pkg/status.(*Prober).processWorkItem\n\tknative.dev/networking@v0.0.0-20210622182128-53f45d6d2cfa/pkg/status/status.go:403\nknative.dev/networking/pkg/status.(*Prober).Start.func1\n\tknative.dev/networking@v0.0.0-20210622182128-53f45d6d2cfa/pkg/status/status.go:288"

The activator also reports a failing probe (172.20.60.67 is the IP of the SVC in front of the "def-predictor-default-00001" Pod):

severity: ERROR
timestamp: '2021-07-09T15:57:09.536270508Z'
logger: activator
caller: net/revision_backends.go:357
message: Failed to probe clusterIP 172.20.60.67:80
commit: c75484e
knative.dev/controller: activator
knative.dev/pod: activator-dfc4f7578-qqm8p
knative.dev/key: karl-schriek/def-predictor-default-00001
error: 'unexpected body: want "queue", got "no healthy upstream"'
stacktrace: "knative.dev/serving/pkg/activator/net.(*revisionWatcher).checkDests\n\tknative.dev/serving/pkg/activator/net/revision_backends.go:357\nknative.dev/serving/pkg/activator/net.(*revisionWatcher).run\n\tknative.dev/serving/pkg/activator/net/revision_backends.go:402"
---
severity: WARNING
timestamp: '2021-07-09T15:57:09.73640038Z'
logger: activator
caller: net/revision_backends.go:310
message: Failed probing pods
commit: c75484e
knative.dev/controller: activator
knative.dev/pod: activator-dfc4f7578-qqm8p
knative.dev/key: karl-schriek/def-predictor-default-00001
curDests:
  ready: 10.10.200.232:8012
  notReady: ''
error: 'unexpected body: want "queue", got "upstream connect error or disconnect/reset
  before headers. reset reason: connection termination"'

We have several AuthorizationPolicies in effect. Ones that are likely to be relevant include:

apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: ns-owner-access-istio
  namespace: karl-schriek   # This is the namespace where the KNative Service is rolled out
spec:
  rules:
    - when:
        - key: 'request.headers[kubeflow-userid]'
          values:
            - karl.schriek@myemaildomain.com
    - when:
        - key: source.namespace
          values:
            - karl-schriek
    - to:
        - operation:
            paths:
              - /healthz
              - /metrics
              - /ready
              - /wait-for-drain
              - /v1/models/*
      when:
        - key: source.namespace
          values:
            - knative-serving
---
# Policy to force calls to these hosts to first gain authorization via oauth2-proxy
# NOTE that we have tried removing the two "serving" hosts, which is where the KNative Route ends up, but it makes no difference
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: istio-ingressgateway
  namespace: istio-system
spec:
  action: CUSTOM
  selector:
    matchLabels:
      app: istio-ingressgateway
      istio: ingressgateway
  provider:
    name: oauth2-proxy
  rules:
  - to:
    - operation:
        hosts:
        - kubeflow.dev-kfserving-test-5.build-mlops-2.com
        - serving.dev-kfserving-test-5.build-mlops-2.com
        - '*.serving.dev-kfserving-test-5.build-mlops-2.com'
        notPaths:
          - /healthz
karlschriek commented 3 years ago

A bit more info, after discovering https://github.com/knative-sandbox/net-istio/pull/355/files we have now set enable-virtualservice-status: 'true' in the config-istio ConfigMap.
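For reference, the setting lives in the config-istio ConfigMap in the knative-serving namespace, roughly like this (sketch):

```yaml
# Sketch of the config-istio change; only the relevant key is shown.
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-istio
  namespace: knative-serving
data:
  # Have net-istio read the VirtualService status instead of probing the gateway
  enable-virtualservice-status: 'true'
```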

The result is that net-istio-controller no longer reports any errors. If I have understood the PR above correctly, it now just asks the VS for the status, rather than attempting to send a probe to the istio-ingressgateway.

The activator however continues to report the same error, and the Route remains in status "waiting for load balancer to be ready". The error message "no healthy upstream" seems to suggest that the requested endpoint cannot be reached, but it is rather unspecific, so hard to pin down. What endpoint is the probe attempting to query here? The AuthorizationPolicy as posted above is intended to let the activator probe specific endpoints in the karl-schriek Namespace:

    - to:
        - operation:
            paths:
              - /healthz
              - /metrics
              - /ready
              - /wait-for-drain
              - /v1/models/*
      when:
        - key: source.namespace
          values:
            - knative-serving

Presumably what we are seeing is related to https://github.com/knative-sandbox/net-istio/issues/554 as well. Since we have mTLS set to STRICT, the activator is probing the SVC. But it seems the probes are still not getting through.
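If the mTLS interaction is indeed the culprit, one thing we could experiment with is relaxing mTLS on the probed port. This is purely a sketch, under the assumption that the activator's plain-HTTP probes are being rejected by the sidecar; the port is the queue-proxy port seen in the probe logs and the label selector is an assumption:

```yaml
# Hypothetical, untested workaround sketch: allow plain text on the
# queue-proxy user port so the activator's probes are not rejected by
# STRICT mTLS. Selector label and port are assumptions.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: knative-probe-permissive
  namespace: karl-schriek
spec:
  selector:
    matchLabels:
      serving.knative.dev/service: def-predictor-default
  mtls:
    mode: STRICT        # keep everything else strict
  portLevelMtls:
    8012:               # queue-proxy HTTP port from the "Failed probing pods" logs
      mode: PERMISSIVE
```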

karlschriek commented 3 years ago

After setting log-level to DEBUG on the activator, here are the last few logs before the probing stops. Initially the clusterIP probing fails, but in the end it seems to succeed. So does the issue still lie with the net-istio-controller? That is, after all, what is supposed to handle load balancer readiness, if I understood correctly.

---
- severity: DEBUG
  timestamp: '2021-07-12T08:34:09.811656275Z'
  logger: activator
  caller: net/revision_backends.go:381
  message: Revision state
  commit: c75484e
  knative.dev/controller: activator
  knative.dev/pod: activator-dfc4f7578-6qxl7
  knative.dev/key: karl-schriek/three-predictor-default-00001
  dests:
    ready: 10.10.200.214:8012
    notReady: ''
  healthy:
    keys: ''
  clusterIPHealthy: false

- severity: DEBUG
  timestamp: '2021-07-12T08:34:09.81166815Z'
  logger: activator
  caller: net/revision_backends.go:386
  message: Probing on timer
  commit: c75484e
  knative.dev/controller: activator
  knative.dev/pod: activator-dfc4f7578-6qxl7
  knative.dev/key: karl-schriek/three-predictor-default-00001

- severity: WARNING
  timestamp: '2021-07-12T08:34:09.842323507Z'
  logger: activator
  caller: net/revision_backends.go:310
  message: Failed probing pods
  commit: c75484e
  knative.dev/controller: activator
  knative.dev/pod: activator-dfc4f7578-6qxl7
  knative.dev/key: karl-schriek/three-predictor-default-00001
  curDests:
    ready: 10.10.200.214:8012
    notReady: ''
  error: 'unexpected body: want "queue", got "upstream connect error or disconnect/reset
    before headers. reset reason: connection termination"'

- severity: DEBUG
  timestamp: '2021-07-12T08:34:09.842367449Z'
  logger: activator
  caller: net/revision_backends.go:316
  message: Done probing, got 0 healthy pods
  commit: c75484e
  knative.dev/controller: activator
  knative.dev/pod: activator-dfc4f7578-6qxl7
  knative.dev/key: karl-schriek/three-predictor-default-00001

- severity: ERROR
  timestamp: '2021-07-12T08:34:09.843657049Z'
  logger: activator
  caller: net/revision_backends.go:357
  message: Failed to probe clusterIP 172.20.62.187:80
  commit: c75484e
  knative.dev/controller: activator
  knative.dev/pod: activator-dfc4f7578-6qxl7
  knative.dev/key: karl-schriek/three-predictor-default-00001
  error: 'unexpected body: want "queue", got "no healthy upstream"'
  stacktrace: "knative.dev/serving/pkg/activator/net.(*revisionWatcher).checkDests\n\tknative.dev/serving/pkg/activator/net/revision_backends.go:357\nknative.dev/serving/pkg/activator/net.(*revisionWatcher).run\n\tknative.dev/serving/pkg/activator/net/revision_backends.go:402"

- severity: DEBUG
  timestamp: '2021-07-12T08:34:09.843703612Z'
  logger: activator
  caller: net/revision_backends.go:381
  message: Revision state
  commit: c75484e
  knative.dev/controller: activator
  knative.dev/pod: activator-dfc4f7578-6qxl7
  knative.dev/key: karl-schriek/three-predictor-default-00001
  dests:
    ready: 10.10.200.214:8012
    notReady: ''
  healthy:
    keys: ''
  clusterIPHealthy: false

- severity: DEBUG
  timestamp: '2021-07-12T08:34:09.843714974Z'
  logger: activator
  caller: net/revision_backends.go:386
  message: Probing on timer
  commit: c75484e
  knative.dev/controller: activator
  knative.dev/pod: activator-dfc4f7578-6qxl7
  knative.dev/key: karl-schriek/three-predictor-default-00001

- severity: DEBUG
  timestamp: '2021-07-12T08:34:09.972209553Z'
  logger: activator
  caller: net/throttler.go:561
  message: Revision update{knative.dev/key 15 0 karl-schriek/three-predictor-default-00001
    <nil>}
  commit: c75484e
  knative.dev/controller: activator
  knative.dev/pod: activator-dfc4f7578-6qxl7

- severity: WARNING
  timestamp: '2021-07-12T08:34:10.041874532Z'
  logger: activator
  caller: net/revision_backends.go:310
  message: Failed probing pods
  commit: c75484e
  knative.dev/controller: activator
  knative.dev/pod: activator-dfc4f7578-6qxl7
  knative.dev/key: karl-schriek/three-predictor-default-00001
  curDests:
    ready: 10.10.200.214:8012
    notReady: ''
  error: 'unexpected body: want "queue", got "upstream connect error or disconnect/reset
    before headers. reset reason: connection termination"'

- severity: DEBUG
  timestamp: '2021-07-12T08:34:10.041926169Z'
  logger: activator
  caller: net/revision_backends.go:316
  message: Done probing, got 0 healthy pods
  commit: c75484e
  knative.dev/controller: activator
  knative.dev/pod: activator-dfc4f7578-6qxl7
  knative.dev/key: karl-schriek/three-predictor-default-00001

- severity: DEBUG
  timestamp: '2021-07-12T08:34:10.109112865Z'
  logger: activator
  caller: net/revision_backends.go:362
  message: 'ClusterIP is successfully probed: 172.20.62.187:80 (ready backends: 1)'
  commit: c75484e
  knative.dev/controller: activator
  knative.dev/pod: activator-dfc4f7578-6qxl7
  knative.dev/key: karl-schriek/three-predictor-default-00001

- severity: DEBUG
  timestamp: '2021-07-12T08:34:10.109153461Z'
  logger: activator
  caller: net/revision_backends.go:381
  message: Revision state
  commit: c75484e
  knative.dev/controller: activator
  knative.dev/pod: activator-dfc4f7578-6qxl7
  knative.dev/key: karl-schriek/three-predictor-default-00001
  dests:
    ready: 10.10.200.214:8012
    notReady: ''
  healthy:
    keys: ''
  clusterIPHealthy: true

- severity: DEBUG
  timestamp: '2021-07-12T08:34:10.109185935Z'
  logger: activator
  caller: net/revision_backends.go:389
  message: Not Probing on timer
  commit: c75484e
  knative.dev/controller: activator
  knative.dev/pod: activator-dfc4f7578-6qxl7
  knative.dev/key: karl-schriek/three-predictor-default-00001

- severity: DEBUG
  timestamp: '2021-07-12T08:34:10.109200922Z'
  logger: activator
  caller: net/throttler.go:408
  message: Handling update
  commit: c75484e
  knative.dev/controller: activator
  knative.dev/pod: activator-dfc4f7578-6qxl7
  knative.dev/key: karl-schriek/three-predictor-default-00001
  ClusterIP: 172.20.62.187:80
  dests:
    keys: 10.10.200.214:8012

- severity: DEBUG
  timestamp: '2021-07-12T08:34:10.109200922Z'
  logger: activator
  caller: net/throttler.go:408
  message: Handling update
  commit: c75484e
  knative.dev/controller: activator
  knative.dev/pod: activator-dfc4f7578-6qxl7
  knative.dev/key: karl-schriek/three-predictor-default-00001
  ClusterIP: 172.20.62.187:80
  dests:
    keys: 10.10.200.214:8012

- severity: INFO
  timestamp: '2021-07-12T08:34:10.109220707Z'
  logger: activator
  caller: net/throttler.go:324
  message: 'Updating Revision Throttler with: clusterIP = 172.20.62.187:80, trackers
    = 0, backends = 1'
  commit: c75484e
  knative.dev/controller: activator
  knative.dev/pod: activator-dfc4f7578-6qxl7
  knative.dev/key: karl-schriek/three-predictor-default-00001

- severity: INFO
  timestamp: '2021-07-12T08:34:10.109234654Z'
  logger: activator
  caller: net/throttler.go:316
  message: 'Set capacity to 2147483647 (backends: 1, index: 0/1)'
  commit: c75484e
  knative.dev/controller: activator
  knative.dev/pod: activator-dfc4f7578-6qxl7
  knative.dev/key: karl-schriek/three-predictor-default-00001

- severity: DEBUG
  timestamp: '2021-07-12T08:34:12.746751184Z'
  logger: activator
  caller: net/throttler.go:561
  message: Revision update{knative.dev/key 15 0 karl-schriek/three-predictor-default-00001
    <nil>}
  commit: c75484e
  knative.dev/controller: activator
  knative.dev/pod: activator-dfc4f7578-6qxl7

- severity: DEBUG
  timestamp: '2021-07-12T08:34:16.228571114Z'
  logger: activator
  caller: activator/main.go:280
  message: No signal yet.
  commit: c75484e
  knative.dev/controller: activator
  knative.dev/pod: activator-dfc4f7578-6qxl7
karlschriek commented 3 years ago

Now looking at the net-istio-controller's full debug logs (pasted below, apologies, it is quite long) I can see the Route status being set to "Waiting for load balancer to be ready". I also eventually see "Ingress successfully synced" and "Reconcile succeeded". To be more specific, I see these messages four times, so even though the sync is apparently successful, the controller keeps re-attempting it, and even after reporting "Reconcile succeeded" it still updates the status with "Waiting for load balancer to be ready".

EDIT: also, even though it is reporting "Reconcile succeeded", there is a line saying that the VirtualService is not ready:

Virtual Service "four-predictor-default-mesh" hasStatus=true, ready=false;
    skipping checks for others

So I suppose "Reconcile succeeded" simply means "the most recent status has been found and updated", but the actual status is still ready=false.
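For comparison, this is roughly what I would expect a VirtualService status to look like once Istio has acknowledged it (field names per the Istio config status API as I understand it; a sketch, not output from our cluster):

```yaml
# Sketch of an acknowledged VirtualService status; not taken from our cluster.
status:
  observedGeneration: 1
  conditions:
    - type: Reconciled   # istiod has pushed the configuration
      status: 'True'
```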

Logs:

---
- severity: DEBUG
  timestamp: '2021-07-12T09:03:08.091243159Z'
  logger: net-istio-controller
  caller: controller/controller.go:409
  message: 'Adding to queue karl-schriek/four-predictor-default-00001 (depth: 1)'
  commit: 2b5e707
  knative.dev/controller: knative.dev.net-istio.pkg.reconciler.serverlessservice.reconciler
  knative.dev/kind: networking.internal.knative.dev.ServerlessService
  knative.dev/key: karl-schriek/four-predictor-default-00001

- severity: DEBUG
  timestamp: '2021-07-12T09:03:08.091293709Z'
  logger: net-istio-controller
  caller: controller/controller.go:502
  message: 'Processing from queue karl-schriek/four-predictor-default-00001 (depth:
    0)'
  commit: 2b5e707
  knative.dev/controller: knative.dev.net-istio.pkg.reconciler.serverlessservice.reconciler
  knative.dev/kind: networking.internal.knative.dev.ServerlessService

- severity: INFO
  timestamp: '2021-07-12T09:03:08.091331231Z'
  logger: net-istio-controller
  caller: controller/controller.go:539
  message: Reconcile succeeded
  commit: 2b5e707
  knative.dev/controller: knative.dev.net-istio.pkg.reconciler.serverlessservice.reconciler
  knative.dev/kind: networking.internal.knative.dev.ServerlessService
  knative.dev/traceid: 6d590051-ce9b-4d2f-9023-815c7364fead
  knative.dev/key: karl-schriek/four-predictor-default-00001
  duration: 29.976µs

- severity: DEBUG
  timestamp: '2021-07-12T09:03:08.191360047Z'
  logger: net-istio-controller.istio-ingress-controller
  caller: controller/controller.go:409
  message: 'Adding to queue karl-schriek/four-predictor-default (depth: 1)'
  commit: 2b5e707
  knative.dev/controller: knative.dev.net-istio.pkg.reconciler.ingress.Reconciler
  knative.dev/kind: networking.internal.knative.dev.Ingress
  knative.dev/key: karl-schriek/four-predictor-default

- severity: DEBUG
  timestamp: '2021-07-12T09:03:08.191423457Z'
  logger: net-istio-controller.istio-ingress-controller
  caller: controller/controller.go:502
  message: 'Processing from queue karl-schriek/four-predictor-default (depth: 0)'
  commit: 2b5e707
  knative.dev/controller: knative.dev.net-istio.pkg.reconciler.ingress.Reconciler
  knative.dev/kind: networking.internal.knative.dev.Ingress

- severity: INFO
  timestamp: '2021-07-12T09:03:08.212263317Z'
  logger: net-istio-controller.istio-ingress-controller
  caller: ingress/ingress.go:113
  message: 'Reconciling ingress: &v1alpha1.Ingress{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""},
    ObjectMeta:v1.ObjectMeta{Name:"four-predictor-default", GenerateName:"", Namespace:"karl-schriek",
    SelfLink:"", UID:"f19868fa-d358-4062-a700-549c3ca61552", ResourceVersion:"6843031",
    Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63761677388,
    loc:(*time.Location)(0x2fad520)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil),
    Labels:map[string]string{"component":"predictor", "serving.knative.dev/route":"four-predictor-default",
    "serving.knative.dev/routeNamespace":"karl-schriek", "serving.knative.dev/service":"four-predictor-default",
    "serving.kubeflow.org/inferenceservice":"four"}, Annotations:map[string]string{"networking.internal.knative.dev/rollout":"{\"configurations\":[{\"configurationName\":\"four-predictor-default\",\"percent\":100,\"revisions\":[{\"revisionName\":\"four-predictor-default-00001\",\"percent\":100}],\"stepParams\":{}}]}",
    "networking.knative.dev/ingress.class":"istio.ingress.networking.knative.dev",
    "serving.knative.dev/creator":"system:serviceaccount:kubeflow:default", "serving.knative.dev/lastModifier":"system:serviceaccount:kubeflow:default"},
    OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"serving.knative.dev/v1",
    Kind:"Route", Name:"four-predictor-default", UID:"6cb2e1af-8a94-42ba-b4f9-30dcf4f18087",
    Controller:(*bool)(0xc0011fe81d), BlockOwnerDeletion:(*bool)(0xc0011fe81c)}},
    Finalizers:[]string{"ingresses.networking.internal.knative.dev"}, ClusterName:"",
    ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"controller",
    Operation:"Update", APIVersion:"networking.internal.knative.dev/v1alpha1", Time:(*v1.Time)(0xc001201920),
    FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001201900)}}}, Spec:v1alpha1.IngressSpec{TLS:[]v1alpha1.IngressTLS(nil),
    Rules:[]v1alpha1.IngressRule{v1alpha1.IngressRule{Hosts:[]string{"four-predictor-default.karl-schriek",
    "four-predictor-default.karl-schriek.svc", "four-predictor-default.karl-schriek.svc.cluster.local"},
    Visibility:"ClusterLocal", HTTP:(*v1alpha1.HTTPIngressRuleValue)(0xc001201960)},
    v1alpha1.IngressRule{Hosts:[]string{"four-predictor-default-karl-schriek.serving.dev-kfserving-test-5.build-mlops-2.com"},
    Visibility:"ExternalIP", HTTP:(*v1alpha1.HTTPIngressRuleValue)(0xc001201980)}},
    HTTPOption:"Redirected", DeprecatedVisibility:""}, Status:v1alpha1.IngressStatus{Status:v1.Status{ObservedGeneration:0,
    Conditions:v1.Conditions{apis.Condition{Type:"LoadBalancerReady", Status:"Unknown",
    Severity:"", LastTransitionTime:apis.VolatileTime{Inner:v1.Time{Time:time.Time{wall:0xc0331f330ca52fe8,
    ext:57784147857, loc:(*time.Location)(0x2fad520)}}}, Reason:"", Message:""}, apis.Condition{Type:"NetworkConfigured",
    Status:"Unknown", Severity:"", LastTransitionTime:apis.VolatileTime{Inner:v1.Time{Time:time.Time{wall:0xc0331f330ca524fc,
    ext:57784145061, loc:(*time.Location)(0x2fad520)}}}, Reason:"", Message:""}, apis.Condition{Type:"Ready",
    Status:"Unknown", Severity:"", LastTransitionTime:apis.VolatileTime{Inner:v1.Time{Time:time.Time{wall:0xc0331f330ca54e9b,
    ext:57784155725, loc:(*time.Location)(0x2fad520)}}}, Reason:"NewObservedGenFailure",
    Message:"unsuccessfully observed a new generation"}}, Annotations:map[string]string(nil)},
    DeprecatedLoadBalancer:(*v1alpha1.LoadBalancerStatus)(nil), PublicLoadBalancer:(*v1alpha1.LoadBalancerStatus)(nil),
    PrivateLoadBalancer:(*v1alpha1.LoadBalancerStatus)(nil)}}'
  commit: 2b5e707
  knative.dev/controller: knative.dev.net-istio.pkg.reconciler.ingress.Reconciler
  knative.dev/kind: networking.internal.knative.dev.Ingress
  knative.dev/traceid: 7f092349-4516-4212-a86d-50cdea965703
  knative.dev/key: karl-schriek/four-predictor-default

- severity: INFO
  timestamp: '2021-07-12T09:03:08.212603927Z'
  logger: net-istio-controller.istio-ingress-controller
  caller: ingress/ingress.go:185
  message: Creating/Updating VirtualServices
  commit: 2b5e707
  knative.dev/controller: knative.dev.net-istio.pkg.reconciler.ingress.Reconciler
  knative.dev/kind: networking.internal.knative.dev.Ingress
  knative.dev/traceid: 7f092349-4516-4212-a86d-50cdea965703
  knative.dev/key: karl-schriek/four-predictor-default

- severity: INFO
  timestamp: '2021-07-12T09:03:08.213560498Z'
  logger: net-istio-controller.istio-ingress-controller.event-broadcaster
  caller: record/event.go:282
  message: 'Event(v1.ObjectReference{Kind:"Ingress", Namespace:"karl-schriek", Name:"four-predictor-default",
    UID:"f19868fa-d358-4062-a700-549c3ca61552", APIVersion:"networking.internal.knative.dev/v1alpha1",
    ResourceVersion:"6843031", FieldPath:""}): type: ''Normal'' reason: ''FinalizerUpdate''
    Updated "four-predictor-default" finalizers'
  commit: 2b5e707
  knative.dev/controller: istio-ingress-controller

- severity: DEBUG
  timestamp: '2021-07-12T09:03:08.21386114Z'
  logger: net-istio-controller.istio-ingress-controller
  caller: controller/controller.go:409
  message: 'Adding to queue karl-schriek/four-predictor-default (depth: 1)'
  commit: 2b5e707
  knative.dev/controller: knative.dev.net-istio.pkg.reconciler.ingress.Reconciler
  knative.dev/kind: networking.internal.knative.dev.Ingress
  knative.dev/key: karl-schriek/four-predictor-default

- severity: DEBUG
  timestamp: '2021-07-12T09:03:08.279304306Z'
  logger: net-istio-controller.istio-ingress-controller
  caller: controller/controller.go:409
  message: 'Adding to queue karl-schriek/four-predictor-default (depth: 1)'
  commit: 2b5e707
  knative.dev/controller: knative.dev.net-istio.pkg.reconciler.ingress.Reconciler
  knative.dev/kind: networking.internal.knative.dev.Ingress
  knative.dev/key: karl-schriek/four-predictor-default

- severity: INFO
  timestamp: '2021-07-12T09:03:08.279369947Z'
  logger: net-istio-controller.istio-ingress-controller.event-broadcaster
  caller: record/event.go:282
  message: 'Event(v1.ObjectReference{Kind:"Ingress", Namespace:"karl-schriek", Name:"four-predictor-default",
    UID:"f19868fa-d358-4062-a700-549c3ca61552", APIVersion:"networking.internal.knative.dev/v1alpha1",
    ResourceVersion:"6843031", FieldPath:""}): type: ''Normal'' reason: ''Created''
    Created VirtualService "four-predictor-default-mesh"'
  commit: 2b5e707
  knative.dev/controller: istio-ingress-controller

- severity: DEBUG
  timestamp: '2021-07-12T09:03:08.292436841Z'
  logger: net-istio-controller.istio-ingress-controller
  caller: ingress/ingress.go:559
  message: 'VirtualService four-predictor-default-mesh, status: v1alpha1.IstioStatus{Conditions:[]*v1alpha1.IstioCondition(nil),
    ValidationMessages:[]*v1alpha1.AnalysisMessageBase(nil), ObservedGeneration:0,
    XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}'
  commit: 2b5e707
  knative.dev/controller: knative.dev.net-istio.pkg.reconciler.ingress.Reconciler
  knative.dev/kind: networking.internal.knative.dev.Ingress
  knative.dev/traceid: 7f092349-4516-4212-a86d-50cdea965703
  knative.dev/key: karl-schriek/four-predictor-default

- severity: DEBUG
  timestamp: '2021-07-12T09:03:08.292466736Z'
  logger: net-istio-controller.istio-ingress-controller
  caller: ingress/ingress.go:572
  message: VirtualService four-predictor-default-mesh status is stale; checking again...
  commit: 2b5e707
  knative.dev/controller: knative.dev.net-istio.pkg.reconciler.ingress.Reconciler
  knative.dev/kind: networking.internal.knative.dev.Ingress
  knative.dev/traceid: 7f092349-4516-4212-a86d-50cdea965703
  knative.dev/key: karl-schriek/four-predictor-default

- severity: DEBUG
  timestamp: '2021-07-12T09:03:08.292476482Z'
  logger: net-istio-controller.istio-ingress-controller
  caller: ingress/ingress.go:532
  message: Virtual Service "four-predictor-default-mesh" hasStatus=true, ready=false;
    skipping checks for others.
  commit: 2b5e707
  knative.dev/controller: knative.dev.net-istio.pkg.reconciler.ingress.Reconciler
  knative.dev/kind: networking.internal.knative.dev.Ingress
  knative.dev/traceid: 7f092349-4516-4212-a86d-50cdea965703
  knative.dev/key: karl-schriek/four-predictor-default

- severity: INFO
  timestamp: '2021-07-12T09:03:08.292494995Z'
  logger: net-istio-controller.istio-ingress-controller
  caller: ingress/ingress.go:234
  message: Ingress successfully synced
  commit: 2b5e707
  knative.dev/controller: knative.dev.net-istio.pkg.reconciler.ingress.Reconciler
  knative.dev/kind: networking.internal.knative.dev.Ingress
  knative.dev/traceid: 7f092349-4516-4212-a86d-50cdea965703
  knative.dev/key: karl-schriek/four-predictor-default

- severity: DEBUG
  timestamp: '2021-07-12T09:03:08.292674216Z'
  logger: net-istio-controller.istio-ingress-controller
  caller: ingress/reconciler.go:347
  message: "Updating status with: v1alpha1.IngressStatus{\n \tStatus: v1.Status{\n-
    \t\tObservedGeneration: 0,\n+ \t\tObservedGeneration: 1,\n- \t\tConditions: nil,\n+
    \t\tConditions: v1.Conditions{\n+ \t\t\t{\n+ \t\t\t\tType: \"LoadBalancerReady\",\n+
    \t\t\t\tStatus: \"Unknown\",\n+ \t\t\t\tLastTransitionTime: apis.VolatileTime{Inner:
    s\"2021-07-12 09:03:08.292505398 +0\"...},\n+ \t\t\t\tReason: \"Uninitialized\",\n+
    \t\t\t\tMessage: \"Waiting for load balancer to be ready\",\n+ \t\t\t},\n+ \t\t\t{\n+
    \t\t\t\tType: \"NetworkConfigured\",\n+ \t\t\t\tStatus: \"True\",\n+ \t\t\t\tLastTransitionTime:
    apis.VolatileTime{Inner: s\"2021-07-12 09:03:08.292505398 +0\"...},\n+ \t\t\t},\n+
    \t\t\t{\n+ \t\t\t\tType: \"Ready\",\n+ \t\t\t\tStatus: \"Unknown\",\n+ \t\t\t\tLastTransitionTime:
    apis.VolatileTime{Inner: s\"2021-07-12 09:03:08.292505398 +0\"...},\n+ \t\t\t\tReason:
    \"Uninitialized\",\n+ \t\t\t\tMessage: \"Waiting for load balancer to be ready\",\n+
    \t\t\t},\n+ \t\t},\n \t\tAnnotations: nil,\n \t},\n \tDeprecatedLoadBalancer:
    nil,\n \tPublicLoadBalancer: nil,\n \tPrivateLoadBalancer: nil,\n }\n"
  commit: 2b5e707
  knative.dev/controller: knative.dev.net-istio.pkg.reconciler.ingress.Reconciler
  knative.dev/kind: networking.internal.knative.dev.Ingress
  knative.dev/traceid: 7f092349-4516-4212-a86d-50cdea965703
  knative.dev/key: karl-schriek/four-predictor-default

- severity: INFO
  timestamp: '2021-07-12T09:03:08.293145837Z'
  logger: net-istio-controller.istio-ingress-controller.event-broadcaster
  caller: record/event.go:282
  message: 'Event(v1.ObjectReference{Kind:"Ingress", Namespace:"karl-schriek", Name:"four-predictor-default",
    UID:"f19868fa-d358-4062-a700-549c3ca61552", APIVersion:"networking.internal.knative.dev/v1alpha1",
    ResourceVersion:"6843031", FieldPath:""}): type: ''Normal'' reason: ''Created''
    Created VirtualService "four-predictor-default-ingress"'
  commit: 2b5e707
  knative.dev/controller: istio-ingress-controller

- severity: DEBUG
  timestamp: '2021-07-12T09:03:08.294194408Z'
  logger: net-istio-controller.istio-ingress-controller
  caller: controller/controller.go:409
  message: 'Adding to queue karl-schriek/four-predictor-default (depth: 1)'
  commit: 2b5e707
  knative.dev/controller: knative.dev.net-istio.pkg.reconciler.ingress.Reconciler
  knative.dev/kind: networking.internal.knative.dev.Ingress
  knative.dev/key: karl-schriek/four-predictor-default

- severity: DEBUG
  timestamp: '2021-07-12T09:03:08.371895098Z'
  logger: net-istio-controller.istio-ingress-controller
  caller: ingress/reconciler.go:347
  message: "Updating status with: v1alpha1.IngressStatus{\n \tStatus: v1.Status{\n-
    \t\tObservedGeneration: 0,\n+ \t\tObservedGeneration: 1,\n- \t\tConditions: nil,\n+
    \t\tConditions: v1.Conditions{\n+ \t\t\t{\n+ \t\t\t\tType: \"LoadBalancerReady\",\n+
    \t\t\t\tStatus: \"Unknown\",\n+ \t\t\t\tLastTransitionTime: apis.VolatileTime{Inner:
    s\"2021-07-12 09:03:08.292505398 +0\"...},\n+ \t\t\t\tReason: \"Uninitialized\",\n+
    \t\t\t\tMessage: \"Waiting for load balancer to be ready\",\n+ \t\t\t},\n+ \t\t\t{\n+
    \t\t\t\tType: \"NetworkConfigured\",\n+ \t\t\t\tStatus: \"True\",\n+ \t\t\t\tLastTransitionTime:
    apis.VolatileTime{Inner: s\"2021-07-12 09:03:08.292505398 +0\"...},\n+ \t\t\t},\n+
    \t\t\t{\n+ \t\t\t\tType: \"Ready\",\n+ \t\t\t\tStatus: \"Unknown\",\n+ \t\t\t\tLastTransitionTime:
    apis.VolatileTime{Inner: s\"2021-07-12 09:03:08.292505398 +0\"...},\n+ \t\t\t\tReason:
    \"Uninitialized\",\n+ \t\t\t\tMessage: \"Waiting for load balancer to be ready\",\n+
    \t\t\t},\n+ \t\t},\n \t\tAnnotations: nil,\n \t},\n \tDeprecatedLoadBalancer:
    nil,\n \tPublicLoadBalancer: nil,\n \tPrivateLoadBalancer: nil,\n }\n"
  commit: 2b5e707
  knative.dev/controller: knative.dev.net-istio.pkg.reconciler.ingress.Reconciler
  knative.dev/kind: networking.internal.knative.dev.Ingress
  knative.dev/traceid: 7f092349-4516-4212-a86d-50cdea965703
  knative.dev/key: karl-schriek/four-predictor-default

- severity: INFO
  timestamp: '2021-07-12T09:03:08.395965928Z'
  logger: net-istio-controller.istio-ingress-controller
  caller: controller/controller.go:539
  message: Reconcile succeeded
  commit: 2b5e707
  knative.dev/controller: knative.dev.net-istio.pkg.reconciler.ingress.Reconciler
  knative.dev/kind: networking.internal.knative.dev.Ingress
  knative.dev/traceid: 7f092349-4516-4212-a86d-50cdea965703
  knative.dev/key: karl-schriek/four-predictor-default
  duration: 204.531481ms

- severity: DEBUG
  timestamp: '2021-07-12T09:03:08.396023791Z'
  logger: net-istio-controller.istio-ingress-controller
  caller: controller/controller.go:502
  message: 'Processing from queue karl-schriek/four-predictor-default (depth: 0)'
  commit: 2b5e707
  knative.dev/controller: knative.dev.net-istio.pkg.reconciler.ingress.Reconciler
  knative.dev/kind: networking.internal.knative.dev.Ingress

- severity: INFO
  timestamp: '2021-07-12T09:03:08.396179683Z'
  logger: net-istio-controller.istio-ingress-controller
  caller: ingress/ingress.go:113
  message: 'Reconciling ingress: &v1alpha1.Ingress{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""},
    ObjectMeta:v1.ObjectMeta{Name:"four-predictor-default", GenerateName:"", Namespace:"karl-schriek",
    SelfLink:"", UID:"f19868fa-d358-4062-a700-549c3ca61552", ResourceVersion:"6843048",
    Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63761677388,
    loc:(*time.Location)(0x2fad520)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil),
    Labels:map[string]string{"component":"predictor", "serving.knative.dev/route":"four-predictor-default",
    "serving.knative.dev/routeNamespace":"karl-schriek", "serving.knative.dev/service":"four-predictor-default",
    "serving.kubeflow.org/inferenceservice":"four"}, Annotations:map[string]string{"networking.internal.knative.dev/rollout":"{\"configurations\":[{\"configurationName\":\"four-predictor-default\",\"percent\":100,\"revisions\":[{\"revisionName\":\"four-predictor-default-00001\",\"percent\":100}],\"stepParams\":{}}]}",
    "networking.knative.dev/ingress.class":"istio.ingress.networking.knative.dev",
    "serving.knative.dev/creator":"system:serviceaccount:kubeflow:default", "serving.knative.dev/lastModifier":"system:serviceaccount:kubeflow:default"},
    OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"serving.knative.dev/v1",
    Kind:"Route", Name:"four-predictor-default", UID:"6cb2e1af-8a94-42ba-b4f9-30dcf4f18087",
    Controller:(*bool)(0xc001333150), BlockOwnerDeletion:(*bool)(0xc001333151)}},
    Finalizers:[]string{"ingresses.networking.internal.knative.dev"}, ClusterName:"",
    ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"controller",
    Operation:"Update", APIVersion:"networking.internal.knative.dev/v1alpha1", Time:(*v1.Time)(0xc001331420),
    FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001331440)}}}, Spec:v1alpha1.IngressSpec{TLS:[]v1alpha1.IngressTLS(nil),
    Rules:[]v1alpha1.IngressRule{v1alpha1.IngressRule{Hosts:[]string{"four-predictor-default.karl-schriek",
    "four-predictor-default.karl-schriek.svc", "four-predictor-default.karl-schriek.svc.cluster.local"},
    Visibility:"ClusterLocal", HTTP:(*v1alpha1.HTTPIngressRuleValue)(0xc001331460)},
    v1alpha1.IngressRule{Hosts:[]string{"four-predictor-default-karl-schriek.serving.dev-kfserving-test-5.build-mlops-2.com"},
    Visibility:"ExternalIP", HTTP:(*v1alpha1.HTTPIngressRuleValue)(0xc001331480)}},
    HTTPOption:"Redirected", DeprecatedVisibility:""}, Status:v1alpha1.IngressStatus{Status:v1.Status{ObservedGeneration:1,
    Conditions:v1.Conditions{apis.Condition{Type:"LoadBalancerReady", Status:"Unknown",
    Severity:"", LastTransitionTime:apis.VolatileTime{Inner:v1.Time{Time:time.Time{wall:0x0,
    ext:63761677388, loc:(*time.Location)(0x2fad520)}}}, Reason:"Uninitialized", Message:"Waiting
    for load balancer to be ready"}, apis.Condition{Type:"NetworkConfigured", Status:"True",
    Severity:"", LastTransitionTime:apis.VolatileTime{Inner:v1.Time{Time:time.Time{wall:0x0,
    ext:63761677388, loc:(*time.Location)(0x2fad520)}}}, Reason:"", Message:""}, apis.Condition{Type:"Ready",
    Status:"Unknown", Severity:"", LastTransitionTime:apis.VolatileTime{Inner:v1.Time{Time:time.Time{wall:0x0,
    ext:63761677388, loc:(*time.Location)(0x2fad520)}}}, Reason:"Uninitialized", Message:"Waiting
    for load balancer to be ready"}}, Annotations:map[string]string(nil)}, DeprecatedLoadBalancer:(*v1alpha1.LoadBalancerStatus)(nil),
    PublicLoadBalancer:(*v1alpha1.LoadBalancerStatus)(nil), PrivateLoadBalancer:(*v1alpha1.LoadBalancerStatus)(nil)}}'
  commit: 2b5e707
  knative.dev/controller: knative.dev.net-istio.pkg.reconciler.ingress.Reconciler
  knative.dev/kind: networking.internal.knative.dev.Ingress
  knative.dev/traceid: 272bb2e3-97e4-4963-aa55-9f12d62d8103
  knative.dev/key: karl-schriek/four-predictor-default

- severity: INFO
  timestamp: '2021-07-12T09:03:08.396357975Z'
  logger: net-istio-controller.istio-ingress-controller
  caller: ingress/ingress.go:185
  message: Creating/Updating VirtualServices
  commit: 2b5e707
  knative.dev/controller: knative.dev.net-istio.pkg.reconciler.ingress.Reconciler
  knative.dev/kind: networking.internal.knative.dev.Ingress
  knative.dev/traceid: 272bb2e3-97e4-4963-aa55-9f12d62d8103
  knative.dev/key: karl-schriek/four-predictor-default

- severity: DEBUG
  timestamp: '2021-07-12T09:03:08.396651373Z'
  logger: net-istio-controller.istio-ingress-controller
  caller: ingress/ingress.go:559
  message: 'VirtualService four-predictor-default-mesh, status: v1alpha1.IstioStatus{Conditions:[]*v1alpha1.IstioCondition(nil),
    ValidationMessages:[]*v1alpha1.AnalysisMessageBase(nil), ObservedGeneration:0,
    XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}'
  commit: 2b5e707
  knative.dev/controller: knative.dev.net-istio.pkg.reconciler.ingress.Reconciler
  knative.dev/kind: networking.internal.knative.dev.Ingress
  knative.dev/traceid: 272bb2e3-97e4-4963-aa55-9f12d62d8103
  knative.dev/key: karl-schriek/four-predictor-default

- severity: DEBUG
  timestamp: '2021-07-12T09:03:08.396666946Z'
  logger: net-istio-controller.istio-ingress-controller
  caller: ingress/ingress.go:572
  message: VirtualService four-predictor-default-mesh status is stale; checking again...
  commit: 2b5e707
  knative.dev/controller: knative.dev.net-istio.pkg.reconciler.ingress.Reconciler
  knative.dev/kind: networking.internal.knative.dev.Ingress
  knative.dev/traceid: 272bb2e3-97e4-4963-aa55-9f12d62d8103
  knative.dev/key: karl-schriek/four-predictor-default

- severity: DEBUG
  timestamp: '2021-07-12T09:03:08.396674567Z'
  logger: net-istio-controller.istio-ingress-controller
  caller: ingress/ingress.go:532
  message: Virtual Service "four-predictor-default-mesh" hasStatus=true, ready=false;
    skipping checks for others.
  commit: 2b5e707
  knative.dev/controller: knative.dev.net-istio.pkg.reconciler.ingress.Reconciler
  knative.dev/kind: networking.internal.knative.dev.Ingress
  knative.dev/traceid: 272bb2e3-97e4-4963-aa55-9f12d62d8103
  knative.dev/key: karl-schriek/four-predictor-default

- severity: INFO
  timestamp: '2021-07-12T09:03:08.396687672Z'
  logger: net-istio-controller.istio-ingress-controller
  caller: ingress/ingress.go:234
  message: Ingress successfully synced
  commit: 2b5e707
  knative.dev/controller: knative.dev.net-istio.pkg.reconciler.ingress.Reconciler
  knative.dev/kind: networking.internal.knative.dev.Ingress
  knative.dev/traceid: 272bb2e3-97e4-4963-aa55-9f12d62d8103
  knative.dev/key: karl-schriek/four-predictor-default

- severity: INFO
  timestamp: '2021-07-12T09:03:08.396740599Z'
  logger: net-istio-controller.istio-ingress-controller
  caller: controller/controller.go:539
  message: Reconcile succeeded
  commit: 2b5e707
  knative.dev/controller: knative.dev.net-istio.pkg.reconciler.ingress.Reconciler
  knative.dev/kind: networking.internal.knative.dev.Ingress
  knative.dev/traceid: 272bb2e3-97e4-4963-aa55-9f12d62d8103
  knative.dev/key: karl-schriek/four-predictor-default
  duration: 708.445µs  

- severity: DEBUG
  timestamp: '2021-07-12T09:03:08.396789444Z'
  logger: net-istio-controller.istio-ingress-controller
  caller: controller/controller.go:409
  message: 'Adding to queue karl-schriek/four-predictor-default (depth: 1)'
  commit: 2b5e707
  knative.dev/controller: knative.dev.net-istio.pkg.reconciler.ingress.Reconciler
  knative.dev/kind: networking.internal.knative.dev.Ingress
  knative.dev/key: karl-schriek/four-predictor-default

- severity: DEBUG
  timestamp: '2021-07-12T09:03:08.396836282Z'
  logger: net-istio-controller.istio-ingress-controller
  caller: controller/controller.go:502
  message: 'Processing from queue karl-schriek/four-predictor-default (depth: 0)'
  commit: 2b5e707
  knative.dev/controller: knative.dev.net-istio.pkg.reconciler.ingress.Reconciler
  knative.dev/kind: networking.internal.knative.dev.Ingress

- severity: INFO
  timestamp: '2021-07-12T09:03:08.396965591Z'
  logger: net-istio-controller.istio-ingress-controller
  caller: ingress/ingress.go:113
  message: 'Reconciling ingress: &v1alpha1.Ingress{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""},
    ObjectMeta:v1.ObjectMeta{Name:"four-predictor-default", GenerateName:"", Namespace:"karl-schriek",
    SelfLink:"", UID:"f19868fa-d358-4062-a700-549c3ca61552", ResourceVersion:"6843048",
    Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63761677388,
    loc:(*time.Location)(0x2fad520)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil),
    Labels:map[string]string{"component":"predictor", "serving.knative.dev/route":"four-predictor-default",
    "serving.knative.dev/routeNamespace":"karl-schriek", "serving.knative.dev/service":"four-predictor-default",
    "serving.kubeflow.org/inferenceservice":"four"}, Annotations:map[string]string{"networking.internal.knative.dev/rollout":"{\"configurations\":[{\"configurationName\":\"four-predictor-default\",\"percent\":100,\"revisions\":[{\"revisionName\":\"four-predictor-default-00001\",\"percent\":100}],\"stepParams\":{}}]}",
    "networking.knative.dev/ingress.class":"istio.ingress.networking.knative.dev",
    "serving.knative.dev/creator":"system:serviceaccount:kubeflow:default", "serving.knative.dev/lastModifier":"system:serviceaccount:kubeflow:default"},
    OwnerReferences:[]v1.OwnerReference{v1.OwnerReference{APIVersion:"serving.knative.dev/v1",
    Kind:"Route", Name:"four-predictor-default", UID:"6cb2e1af-8a94-42ba-b4f9-30dcf4f18087",
    Controller:(*bool)(0xc001333950), BlockOwnerDeletion:(*bool)(0xc001333951)}},
    Finalizers:[]string{"ingresses.networking.internal.knative.dev"}, ClusterName:"",
    ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"controller",
    Operation:"Update", APIVersion:"networking.internal.knative.dev/v1alpha1", Time:(*v1.Time)(0xc0013967c0),
    FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc0013967e0)}}}, Spec:v1alpha1.IngressSpec{TLS:[]v1alpha1.IngressTLS(nil),
    Rules:[]v1alpha1.IngressRule{v1alpha1.IngressRule{Hosts:[]string{"four-predictor-default.karl-schriek",
    "four-predictor-default.karl-schriek.svc", "four-predictor-default.karl-schriek.svc.cluster.local"},
    Visibility:"ClusterLocal", HTTP:(*v1alpha1.HTTPIngressRuleValue)(0xc001396800)},
    v1alpha1.IngressRule{Hosts:[]string{"four-predictor-default-karl-schriek.serving.dev-kfserving-test-5.build-mlops-2.com"},
    Visibility:"ExternalIP", HTTP:(*v1alpha1.HTTPIngressRuleValue)(0xc001396820)}},
    HTTPOption:"Redirected", DeprecatedVisibility:""}, Status:v1alpha1.IngressStatus{Status:v1.Status{ObservedGeneration:1,
    Conditions:v1.Conditions{apis.Condition{Type:"LoadBalancerReady", Status:"Unknown",
    Severity:"", LastTransitionTime:apis.VolatileTime{Inner:v1.Time{Time:time.Time{wall:0x0,
    ext:63761677388, loc:(*time.Location)(0x2fad520)}}}, Reason:"Uninitialized", Message:"Waiting
    for load balancer to be ready"}, apis.Condition{Type:"NetworkConfigured", Status:"True",
    Severity:"", LastTransitionTime:apis.VolatileTime{Inner:v1.Time{Time:time.Time{wall:0x0,
    ext:63761677388, loc:(*time.Location)(0x2fad520)}}}, Reason:"", Message:""}, apis.Condition{Type:"Ready",
    Status:"Unknown", Severity:"", LastTransitionTime:apis.VolatileTime{Inner:v1.Time{Time:time.Time{wall:0x0,
    ext:63761677388, loc:(*time.Location)(0x2fad520)}}}, Reason:"Uninitialized", Message:"Waiting
    for load balancer to be ready"}}, Annotations:map[string]string(nil)}, DeprecatedLoadBalancer:(*v1alpha1.LoadBalancerStatus)(nil),
    PublicLoadBalancer:(*v1alpha1.LoadBalancerStatus)(nil), PrivateLoadBalancer:(*v1alpha1.LoadBalancerStatus)(nil)}}'
  commit: 2b5e707
  knative.dev/controller: knative.dev.net-istio.pkg.reconciler.ingress.Reconciler
  knative.dev/kind: networking.internal.knative.dev.Ingress
  knative.dev/traceid: c3e0fb6e-afb2-43f1-8fc0-4b877d921ac5
  knative.dev/key: karl-schriek/four-predictor-default

- severity: INFO
  timestamp: '2021-07-12T09:03:08.397091574Z'
  logger: net-istio-controller.istio-ingress-controller
  caller: ingress/ingress.go:185
  message: Creating/Updating VirtualServices
  commit: 2b5e707
  knative.dev/controller: knative.dev.net-istio.pkg.reconciler.ingress.Reconciler
  knative.dev/kind: networking.internal.knative.dev.Ingress
  knative.dev/traceid: c3e0fb6e-afb2-43f1-8fc0-4b877d921ac5
  knative.dev/key: karl-schriek/four-predictor-default

- severity: DEBUG
  timestamp: '2021-07-12T09:03:08.397315698Z'
  logger: net-istio-controller.istio-ingress-controller
  caller: ingress/ingress.go:559
  message: 'VirtualService four-predictor-default-mesh, status: v1alpha1.IstioStatus{Conditions:[]*v1alpha1.IstioCondition(nil),
    ValidationMessages:[]*v1alpha1.AnalysisMessageBase(nil), ObservedGeneration:0,
    XXX_NoUnkeyedLiteral:struct {}{}, XXX_unrecognized:[]uint8(nil), XXX_sizecache:0}'
  commit: 2b5e707
  knative.dev/controller: knative.dev.net-istio.pkg.reconciler.ingress.Reconciler
  knative.dev/kind: networking.internal.knative.dev.Ingress
  knative.dev/traceid: c3e0fb6e-afb2-43f1-8fc0-4b877d921ac5
  knative.dev/key: karl-schriek/four-predictor-default

- severity: DEBUG
  timestamp: '2021-07-12T09:03:08.397329393Z'
  logger: net-istio-controller.istio-ingress-controller
  caller: ingress/ingress.go:572
  message: VirtualService four-predictor-default-mesh status is stale; checking again...
  commit: 2b5e707
  knative.dev/controller: knative.dev.net-istio.pkg.reconciler.ingress.Reconciler
  knative.dev/kind: networking.internal.knative.dev.Ingress
  knative.dev/traceid: c3e0fb6e-afb2-43f1-8fc0-4b877d921ac5
  knative.dev/key: karl-schriek/four-predictor-default

- severity: DEBUG
  timestamp: '2021-07-12T09:03:08.397336517Z'
  logger: net-istio-controller.istio-ingress-controller
  caller: ingress/ingress.go:532
  message: Virtual Service "four-predictor-default-mesh" hasStatus=true, ready=false;
    skipping checks for others.
  commit: 2b5e707
  knative.dev/controller: knative.dev.net-istio.pkg.reconciler.ingress.Reconciler
  knative.dev/kind: networking.internal.knative.dev.Ingress
  knative.dev/traceid: c3e0fb6e-afb2-43f1-8fc0-4b877d921ac5
  knative.dev/key: karl-schriek/four-predictor-default

- severity: INFO
  timestamp: '2021-07-12T09:03:08.397348046Z'
  logger: net-istio-controller.istio-ingress-controller
  caller: ingress/ingress.go:234
  message: Ingress successfully synced
  commit: 2b5e707
  knative.dev/controller: knative.dev.net-istio.pkg.reconciler.ingress.Reconciler
  knative.dev/kind: networking.internal.knative.dev.Ingress
  knative.dev/traceid: c3e0fb6e-afb2-43f1-8fc0-4b877d921ac5
  knative.dev/key: karl-schriek/four-predictor-default

- severity: INFO
  timestamp: '2021-07-12T09:03:08.397380042Z'
  logger: net-istio-controller.istio-ingress-controller
  caller: controller/controller.go:539
  message: Reconcile succeeded
  commit: 2b5e707
  knative.dev/controller: knative.dev.net-istio.pkg.reconciler.ingress.Reconciler
  knative.dev/kind: networking.internal.knative.dev.Ingress
  knative.dev/traceid: c3e0fb6e-afb2-43f1-8fc0-4b877d921ac5
  knative.dev/key: karl-schriek/four-predictor-default
  duration: 530.712µs
karlschriek commented 3 years ago

So, I realise this reads a lot like a personal debugging log, but the point is also to leave a trail of thought for the next person who runs into these questions.

Following on from the previous comment, this log statement was a bit confusing, since it suggested that the VirtualService had a status but simply wasn't ready:

 Virtual Service "four-predictor-default-mesh" hasStatus=true, ready=false;
    skipping checks for others

In fact the VirtualService had no status at all; Istio's config status reporting has to be enabled explicitly, as described at https://istio.io/latest/docs/reference/config/config-status/. We added this to our IstioOperator spec and now have a working setup. So to summarize, we needed a config-istio ConfigMap that looks like this:

apiVersion: v1
kind: ConfigMap
metadata:
  name: config-istio
  namespace: knative-serving
data:
  enable-virtualservice-status: 'true'
  ...

And an IstioOperator spec that looks like this:

apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  namespace: istio-system
  name: istio
spec:
  values:
    global:
      istiod:
        enableAnalysis: true
        ...
    pilot:
      env:
        PILOT_ENABLE_STATUS: true
        ...

This allows the net-istio-controller to query the status of the load balancer directly from the VirtualService instead of needing to probe the Pod in the background.
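For reference, once status reporting is enabled, istiod should start writing a status block onto each VirtualService, roughly of this shape (illustrative only; the exact condition types and values depend on the Istio version):

```yaml
status:
  conditions:
    - type: Reconciled
      status: 'True'
  observedGeneration: 1
```

A VirtualService that still shows an empty `status: {}` after enabling these flags suggests the istiod status controller is not actually running.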

If someone would like to close this, feel free. I am leaving it open for discussion for now. Given that Istio's "Configuration Status" feature is regarded as Alpha at the moment, we would still probably prefer a solution that works with probes.

github-actions[bot] commented 3 years ago

This issue is stale because it has been open for 90 days with no activity. It will automatically close after 30 more days of inactivity. Reopen the issue with /reopen. Mark the issue as fresh by adding the comment /remove-lifecycle stale.

wSedlacek commented 7 months ago

If anyone is seeing an error that looks like this:

"severity":"ERROR","timestamp":"2024-04-02T19:22:43.302416456Z","logger":"net-istio-controller.istio-ingress-controller","caller":"status/status.go:405","message":"Probing of http://account-queries.rocket-default.sakyo.io:80 failed, IP: 10.0.14.106:80, ready: false, error: error roundtripping http://account-queries.rocket-default.sakyo.io:80/healthz: context deadline exceeded (depth: 0)","commit":"916ac51-dirty","knative.dev/controller":"istio-ingress-controller","knative.dev/controller":"knative.dev.net-istio.pkg.reconciler.ingress.Reconciler","knative.dev/kind":"networking.internal.knative.dev.Ingress","knative.dev/traceid":"f71ec13e-5995-4512-ac39-887b2f7dd698","knative.dev/key":"rocket-default/account-queries","stacktrace":"knative.dev/networking/pkg/status.(*Prober).processWorkItem\n\tknative.dev/networking@v0.0.0-20231017124814-2a7676e912b7/pkg/status/status.go:405\nknative.dev/networking/pkg/status.(*Prober).Start.func1\n\tknative.dev/networking@v0.0.0-20231017124814-2a7676e912b7/pkg/status/status.go:290"}

The error is created here: https://github.com/knative/networking/blob/2a7676e912b7a0bed4a99c496c5e0702d0192789/pkg/status/status.go#L405-L406

Typically the prober is quite generous: it accepts essentially any HTTP response, even 302s or 404s. It only fails when the network request itself fails or times out. In my case I needed a network security policy rule on my nodes allowing node-to-node traffic on port 80.
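The acceptance rule described above can be sketched as follows. This is a simplified illustration, not the actual logic in `status.go`; `probe_ready` is a hypothetical helper:

```python
import urllib.error
import urllib.request


def probe_ready(url: str, timeout: float = 5.0) -> bool:
    """Mimic the prober's rule: any HTTP status code counts as ready;
    only transport-level failures (refused, timeout, DNS) do not."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        # 302, 403, 404, ... still prove the data path to the pod works.
        return True
    except (urllib.error.URLError, OSError):
        # Connection refused, timeout, unreachable host, etc.
        return False
```

Under this rule the `context deadline exceeded` error above maps to the last branch: the request never completed, so the ingress stays not-ready.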