Kong / kong-operator

Kong Operator for Kubernetes and OpenShift
https://konghq.com
Apache License 2.0

transient failure of Kong CR update during reconciliation #32

mflendrich closed this issue 2 years ago

mflendrich commented 4 years ago

When using kong-operator v0.3.0 managed by OLM, after I kubectl apply an arbitrary Kong CR, the kong-controller successfully installs a Helm release but logs an error suggesting that a retry is happening under the hood:

{"level":"error","ts":1596405725.9677854,"logger":"controller-runtime.controller","msg":"Reconciler error","controller":"kong-controller","request":"default/example-kong","error":"Operation cannot be fulfilled on kongs.charts.helm.k8s.io \"example-kong\": the object has been modified; please apply your changes to the latest version and try again","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\tpkg/mod/github.com/go-logr/zapr@v0.1.1/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\tpkg/mod/sigs.k8s.io/controller-runtime@v0.4.0/pkg/internal/controller/controller.go:258\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\tpkg/mod/sigs.k8s.io/controller-runtime@v0.4.0/pkg/internal/controller/controller.go:232\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\tpkg/mod/sigs.k8s.io/controller-runtime@v0.4.0/pkg/internal/controller/controller.go:211\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\tpkg/mod/k8s.io/apimachinery@v0.0.0-20191004115801-a2eda9f80ab8/pkg/util/wait/wait.go:152\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\tpkg/mod/k8s.io/apimachinery@v0.0.0-20191004115801-a2eda9f80ab8/pkg/util/wait/wait.go:153\nk8s.io/apimachinery/pkg/util/wait.Until\n\tpkg/mod/k8s.io/apimachinery@v0.0.0-20191004115801-a2eda9f80ab8/pkg/util/wait/wait.go:88"}

Reconciliation apparently succeeds on a retry after several seconds. I suspect that this may be a bug on the operator-sdk side. The bug does not seem to affect users significantly beyond a spurious error message in the logs.
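The error itself is ordinary Kubernetes optimistic concurrency: the operator wrote the Kong CR back using a resourceVersion that had already moved on, so the API server rejected the write, and a later reconcile retried with a fresh copy. As a minimal sketch (not part of the original report, and assuming the example-kong CR from the reproduction steps below already exists), the same class of conflict can be provoked by hand with a stale replace:

    # Sketch: reproduce the same "object has been modified" conflict manually.
    # Assumes the example-kong CR from the reproduction steps below exists.
    kubectl get kong example-kong -o yaml > /tmp/kong-stale.yaml
    # Any change to the live object bumps its resourceVersion:
    kubectl annotate kong example-kong touched=true --overwrite
    # Replacing with the stale copy is rejected with the same
    # "please apply your changes to the latest version and try again" error:
    kubectl replace -f /tmp/kong-stale.yaml

A retry that re-reads the object, which controller-runtime does automatically by requeueing the request, then succeeds; that matches the "Reconciled release" entry that follows the error below.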

Reproduction steps:

  1. Install OLM 0.15.1
  2. Install kong-operator v0.3.0 via OLM
  3. Create the following Kong CR:
    kubectl create -f - <<EOF
    apiVersion: charts.helm.k8s.io/v1alpha1
    kind: Kong
    metadata:
      name: example-kong
    spec:
      proxy:
        type: NodePort
      env:
        prefix: /kong_prefix/
      resources:
        limits:
          cpu: 500m
          memory: 2G
        requests:
          cpu: 100m
          memory: 512Mi
      ingressController:
        enabled: true
        ingressClass: example-ingress-class
        installCRDs: false
    EOF
  4. Observe the "level":"error" entry in the kong-operator logs, followed by a subsequent success (see the verification sketch after this log excerpt):
    {"level":"info","ts":1596405724.8813179,"logger":"helm.controller","msg":"Installed release","namespace":"default","name":"example-kong","apiVersion":"charts.helm.k8s.io/v1alpha1","kind":"Kong","release":"example-kong"}
    ( ... rich diff displayed here ... )
    E0802 22:02:05.353609       1 memcache.go:199] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request
    E0802 22:02:05.803880       1 memcache.go:111] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request
    {"level":"info","ts":1596405725.9652016,"logger":"helm.controller","msg":"Reconciled release","namespace":"default","name":"example-kong","apiVersion":"charts.helm.k8s.io/v1alpha1","kind":"Kong","release":"example-kong"}
    {"level":"error","ts":1596405725.9677854,"logger":"controller-runtime.controller","msg":"Reconciler error","controller":"kong-controller","request":"default/example-kong","error":"Operation cannot be fulfilled on kongs.charts.helm.k8s.io \"example-kong\": the object has been modified; please apply your changes to the latest version and try again","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\tpkg/mod/github.com/go-logr/zapr@v0.1.1/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\tpkg/mod/sigs.k8s.io/controller-runtime@v0.4.0/pkg/internal/controller/controller.go:258\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\tpkg/mod/sigs.k8s.io/controller-runtime@v0.4.0/pkg/internal/controller/controller.go:232\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\tpkg/mod/sigs.k8s.io/controller-runtime@v0.4.0/pkg/internal/controller/controller.go:211\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\tpkg/mod/k8s.io/apimachinery@v0.0.0-20191004115801-a2eda9f80ab8/pkg/util/wait/wait.go:152\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\tpkg/mod/k8s.io/apimachinery@v0.0.0-20191004115801-a2eda9f80ab8/pkg/util/wait/wait.go:153\nk8s.io/apimachinery/pkg/util/wait.Until\n\tpkg/mod/k8s.io/apimachinery@v0.0.0-20191004115801-a2eda9f80ab8/pkg/util/wait/wait.go:88"}
    E0802 22:02:07.361056       1 memcache.go:199] couldn't get resource list for packages.operators.coreos.com/v1: the server is currently unable to handle the request
    {"level":"info","ts":1596405727.99552,"logger":"helm.controller","msg":"Reconciled release","namespace":"default","name":"example-kong","apiVersion":"charts.helm.k8s.io/v1alpha1","kind":"Kong","release":"example-kong"}
stale[bot] commented 3 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

shaneutt commented 2 years ago

This issue is being closed due to our recent deprecation of this repository. Having realized that Helm-based operators were too limiting for some of the things we wanted to do, we are now working on a Golang-based operator to replace it. We encourage you to star and watch that repository to track our progress going forward. If you have questions or want to get in touch with us, please feel free to drop a message in the #kong channel on Kubernetes Slack.