vmware-archive / octant

Highly extensible platform for developers to better understand the complexity of Kubernetes clusters.
https://octant.dev
Apache License 2.0

ERROR api/content_manager.go:158 generate content {"client-id": "", "err": "generate content: generate view for CRD \"blockdeviceclaims.openebs.io\" version \"v1alpha1\": unable to get Lister for openebs.io/v1alpha1, Resource=blockdeviceclaims, watcher was unable to start", "content-path": "overview/namespace/pulsar"} #2626

Open archenroot opened 3 years ago

archenroot commented 3 years ago

What steps did you take and what happened: I just installed my 3-node cluster (1 master and 2 workers) on Kubernetes v1.21 using Vagrant with libvirt on QEMU locally. I installed Octant via brew.

Additionally, I use the OpenEBS local PV provisioner, which is probably what causes this issue across all namespaces.

So when I open the Namespace overview and pick any of the available namespaces, I always get this:

2021-07-10T09:09:11.127+0200 ERROR api/content_manager.go:158 generate content {"client-id": "f7863dc2-e0dc-11eb-8690-0cc47a49e12f", "err": "generate content: generate view for CRD \"blockdeviceclaims.openebs.io\" version \"v1alpha1\": unable to get Lister for openebs.io/v1alpha1, Resource=blockdeviceclaims, watcher was unable to start", "content-path": "overview/namespace/pulsar"}
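For anyone debugging this: "unable to get Lister ... watcher was unable to start" means Octant's informer-backed cache could not start a watch for that GroupVersionResource, which usually means a plain LIST against the API server fails for it too. Below is a minimal client-go sketch that exercises the exact GVR from the error message; the kubeconfig path and the `pulsar` namespace are assumptions, not anything Octant itself runs.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig ($HOME/.kube/config); the path is an assumption.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := dynamic.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// The exact GroupVersionResource from the error message.
	gvr := schema.GroupVersionResource{
		Group:    "openebs.io",
		Version:  "v1alpha1",
		Resource: "blockdeviceclaims",
	}

	// If this plain LIST fails, an informer-backed Lister for the same GVR
	// cannot start its watch either.
	list, err := client.Resource(gvr).Namespace("pulsar").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		fmt.Println("list failed:", err)
		return
	}
	fmt.Printf("listed %d BlockDeviceClaims\n", len(list.Items))
}
```

If the LIST here fails with the CRD installed, the problem is between the API server and the CRD (for example, a version that is no longer served), not in Octant.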

So I was looking into the pulsar namespace, but OpenEBS is what causes the issue.

And when I go to the cluster overview, then to Storage Classes, and open openebs-device, I get the same error as above in the resource viewer:

2021-07-10T09:09:11.127+0200 ERROR api/content_manager.go:158 generate content {"client-id": "f7863dc2-e0dc-11eb-8690-0cc47a49e12f", "err": "generate content: generate view for CRD \"blockdeviceclaims.openebs.io\" version \"v1alpha1\": unable to get Lister for openebs.io/v1alpha1, Resource=blockdeviceclaims, watcher was unable to start", "content-path": "overview/namespace/pulsar"}

This error prevents me from seeing any resources in any namespace across the whole Octant UI. Maybe it would be good to parse the YAML config data in small slots instead of globally. Here is an example from the resource viewer in the pulsar namespace for the pulsar-dev-grafana pod:

create resource viewer: unable to visit /v1, Kind=Pod pulsar-dev-grafana-5cfdf4cf-x7ftd: error unable to visit object </v1, Kind=Pod pulsar-dev-grafana-5cfdf4cf-x7ftd>: pod </v1, Kind=Pod pulsar-dev-grafana-5cfdf4cf-x7ftd> visit service account </v1, Kind=ServiceAccount default>: find children: unable to retrieve CacheKey[Namespace='pulsar', APIVersion='openebs.io/v1alpha1', Kind='BlockDeviceClaim']: unable to get Lister for openebs.io/v1alpha1, Resource=blockdeviceclaims, watcher was unable to start
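The trace also shows why one broken CRD takes the whole view down: visiting the Pod walks to its ServiceAccount and then asks the object store for children of every known kind, aborting on the first retrieval error. A simplified, hypothetical sketch of that fan-out follows; `CacheKey` and `findChildren` are illustrative names taken from the trace, not Octant's actual types.

```go
package main

import "fmt"

// CacheKey mirrors the identifiers printed in the trace; the type and the
// function below are illustrative, not Octant's actual code.
type CacheKey struct {
	Namespace, APIVersion, Kind string
}

// findChildren sketches the fan-out the trace implies: finding a resource's
// children queries the object store for every known kind and aborts on the
// first failure, so one kind whose lister cannot start fails the whole visit.
func findChildren(keys []CacheKey, retrieve func(CacheKey) error) error {
	for _, key := range keys {
		if err := retrieve(key); err != nil {
			return fmt.Errorf("unable to retrieve %+v: %w", key, err)
		}
	}
	return nil
}

func main() {
	keys := []CacheKey{
		{Namespace: "pulsar", APIVersion: "v1", Kind: "ConfigMap"},
		{Namespace: "pulsar", APIVersion: "openebs.io/v1alpha1", Kind: "BlockDeviceClaim"},
	}
	err := findChildren(keys, func(k CacheKey) error {
		if k.Kind == "BlockDeviceClaim" {
			return fmt.Errorf("watcher was unable to start")
		}
		return nil
	})
	fmt.Println(err) // one broken CRD aborts the entire Pod visit
}
```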

When I undeploy all my current components (Apache Pulsar, MinIO, and the OpenEBS local PV provisioner), these errors disappear and I can view my namespace overview again.

Only one other error remained, on a completely clean cluster:

W0710 09:29:59.573275 22094 reflector.go:424] github.com/vmware-tanzu/octant/internal/objectstore/dynamic_cache.go:389: watch of *unstructured.Unstructured ended with: an error on the server ("unable to decode an event from the watch stream: unable to decode to metav1.Event") has prevented the request from succeeding
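That warning comes from client-go's reflector: the API server returned something on the watch stream that the client could not decode as a metav1.Event. One way to observe the same stream outside Octant is to open the watch yourself. A hedged client-go sketch (the kubeconfig path is an assumption), where server-side decode failures surface as `watch.Error` events:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Default kubeconfig path; an assumption for this sketch.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Open a watch on core/v1 Events across all namespaces, the same kind
	// of stream the reflector in the warning was consuming.
	w, err := clientset.CoreV1().Events("").Watch(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	for ev := range w.ResultChan() {
		if ev.Type == watch.Error {
			// Server-side decode failures surface here as error events.
			fmt.Printf("watch error: %v\n", ev.Object)
			continue
		}
		fmt.Printf("event: %s\n", ev.Type)
	}
}
```

If the same decode error appears here, it points at the API server (often client/server version skew) rather than at Octant's cache.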

What did you expect to happen: No errors related to decoding Kubernetes plugin components.

Environment:

- Kubernetes version (use `kubectl version`):
archenroot commented 3 years ago

A similar issue is observed with the MinIO S3 storage components.

wwitzel3 commented 3 years ago

Hi @archenroot, thank you for reporting this. We just landed a fix for this in #2587, which should allow the rest of the resources to load properly even when encountering a CRD in error. This will be part of our upcoming 0.22 release and should already be available for testing in our nightly build.
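For context, the shape of such a fix is to collect per-CRD errors and keep rendering instead of aborting on the first failed lister. A minimal sketch of that pattern (illustrative names only, not the actual change in #2587):

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime/schema"
)

// renderAll sketches the error-tolerant pattern: render every resource kind
// that works and collect per-kind errors instead of aborting on the first one.
func renderAll(gvrs []schema.GroupVersionResource, render func(schema.GroupVersionResource) error) []error {
	var errs []error
	for _, gvr := range gvrs {
		if err := render(gvr); err != nil {
			// A broken CRD no longer blocks the remaining resources.
			errs = append(errs, fmt.Errorf("%s: %w", gvr, err))
			continue
		}
	}
	return errs
}

func main() {
	gvrs := []schema.GroupVersionResource{
		{Group: "", Version: "v1", Resource: "pods"},
		{Group: "openebs.io", Version: "v1alpha1", Resource: "blockdeviceclaims"},
	}
	errs := renderAll(gvrs, func(gvr schema.GroupVersionResource) error {
		if gvr.Group == "openebs.io" {
			return fmt.Errorf("watcher was unable to start")
		}
		return nil
	})
	fmt.Printf("rendered with %d recoverable error(s)\n", len(errs))
}
```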

wwitzel3 commented 3 years ago

Hi @archenroot, did you get a chance to try out Octant 0.22 with this cluster? I'm wondering if you are getting the same error, a different error, or if things are working better for you now?