kubernetes / client-go

Go client for Kubernetes.
Apache License 2.0

Conflict running on different cluster versions: failed to list *v2beta2.HorizontalPodAutoscaler: the server could not find the requested resource #1275

Closed: dcfranca closed this issue 3 months ago

dcfranca commented 1 year ago

Hello all,

I'm not sure if this is the right place, and maybe there is a simple solution, but I have been struggling with this use case, so I'll post it here and hope someone can point me in the right direction.

I have an operator (based on Knative Eventing) that needs to run on two different Kubernetes versions (1.19 and 1.27). The operator creates HPA resources.

These are the autoscaling API versions available on each cluster:

1.19

autoscaling.k8s.io/v1
autoscaling.k8s.io/v1beta2
autoscaling/v1
autoscaling/v2beta1
autoscaling/v2beta2

1.27

autoscaling.k8s.io/v1
autoscaling.k8s.io/v1beta2
autoscaling/v1
autoscaling/v2

Until now we only had 1.19, and creating HPA v2beta2 worked fine, but as you can see the new cluster doesn't serve autoscaling/v2beta2, so we need to migrate to either v1 or v2. v1 is not an option, as it doesn't support scaling on memory.
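For reference, the reason v1 is out: autoscaling/v1 only exposes `targetCPUUtilizationPercentage`, so a memory target needs the richer metric types. In the autoscaling/v2 Go types it looks roughly like this (illustrative values, not our actual spec):

    import (
        autoscalingv2 "k8s.io/api/autoscaling/v2"
        corev1 "k8s.io/api/core/v1"
    )

    func int32Ptr(i int32) *int32 { return &i }

    // A memory utilization target, which autoscaling/v1 cannot express
    // (v1 only supports CPU via spec.targetCPUUtilizationPercentage).
    var memoryMetric = autoscalingv2.MetricSpec{
        Type: autoscalingv2.ResourceMetricSourceType,
        Resource: &autoscalingv2.ResourceMetricSource{
            Name: corev1.ResourceMemory,
            Target: autoscalingv2.MetricTarget{
                Type:               autoscalingv2.UtilizationMetricType,
                AverageUtilization: int32Ptr(80),
            },
        },
    }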

v2 is not available on the 1.19 cluster, so first I tried code similar to this:

    import (
        ...
        autoscalingv2listers "k8s.io/client-go/listers/autoscaling/v2"
        autoscalingv2beta2listers "k8s.io/client-go/listers/autoscaling/v2beta2"
    )

    type Reconciler struct {
        ...
        hpaListerv2beta2 autoscalingv2beta2listers.HorizontalPodAutoscalerLister
        hpaListerv2      autoscalingv2listers.HorizontalPodAutoscalerLister
    }

    ...
    var hpaListerV2 autoscalingv2listers.HorizontalPodAutoscalerLister
    var hpaListerV2beta2 autoscalingv2beta2listers.HorizontalPodAutoscalerLister

    if shared.IsApiVersionSupported(clientSet, "autoscaling", "v2") {
        hpaListerV2 = hpainformerv2.Get(ctx).Lister()
    } else {
        hpaListerV2beta2 = hpainformerv2beta2.Get(ctx).Lister()
    }

    reconciler := &Reconciler{
        hpaListerv2beta2: hpaListerV2beta2,
        hpaListerv2:      hpaListerV2,
    }

EDIT: I have also added the conditional when registering the event handler:

    if shared.IsApiVersionSupported(clientSet, "autoscaling", "v2") {
        hpainformerv2.Get(ctx).Informer().AddEventHandler(cache.FilteringResourceEventHandler{
            FilterFunc: controller.FilterControllerGK(eventingv1.Kind("Broker")),
            Handler:    controller.HandleAll(impl.EnqueueControllerOf),
        })
    } else {
        hpainformerv2beta2.Get(ctx).Informer().AddEventHandler(cache.FilteringResourceEventHandler{
            FilterFunc: controller.FilterControllerGK(eventingv1.Kind("Broker")),
            Handler:    controller.HandleAll(impl.EnqueueControllerOf),
        })
    }
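For context, `shared.IsApiVersionSupported` is a helper of ours, not part of client-go. A minimal sketch of how such a check can be built on top of the discovery client (not necessarily our exact implementation) looks like this:

    import (
        "k8s.io/apimachinery/pkg/runtime/schema"
        "k8s.io/client-go/kubernetes"
    )

    // isAPIVersionSupported reports whether the cluster serves the given
    // group/version, e.g. ("autoscaling", "v2"). Sketch only: any discovery
    // error is treated as "not supported".
    func isAPIVersionSupported(clientSet kubernetes.Interface, group, version string) bool {
        gv := schema.GroupVersion{Group: group, Version: version}.String()
        _, err := clientSet.Discovery().ServerResourcesForGroupVersion(gv)
        return err == nil
    }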

It doesn't work as expected: even though I have the conditional, it still tries to watch the resource that is not available on the cluster, throwing these errors:

W0627 10:03:58.137362       1 reflector.go:533] knative.dev/pkg/controller/controller.go:732: failed to list *v2beta2.HorizontalPodAutoscaler: the server could not find the requested resource
E0627 10:03:58.137387       1 reflector.go:148] knative.dev/pkg/controller/controller.go:732: Failed to watch *v2beta2.HorizontalPodAutoscaler: failed to list *v2beta2.HorizontalPodAutoscaler: the server could not find the requested resource
error: http2: client connection lost

If we look at the line mentioned there (controller.go:732), it is the line that calls the Run method of the informers:

func StartInformers(stopCh <-chan struct{}, informers ...Informer) error {
    for _, informer := range informers {
        informer := informer
        go informer.Run(stopCh) // Here
    }

    for i, informer := range informers {
        if ok := cache.WaitForCacheSync(stopCh, informer.HasSynced); !ok {
            return fmt.Errorf("failed to wait for cache at index %d to sync", i)
        }
    }
    return nil
}
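As far as I can tell, the reason the conditional doesn't help is that the generated injection packages register their informer at import time, so StartInformers runs it whether or not Get(ctx) is ever called. Paraphrased from knative.dev/pkg (simplified, not the exact generated file), the v2beta2 HPA informer injection package looks roughly like this:

    package horizontalpodautoscaler // simplified sketch of the generated injection package

    import (
        "context"

        "knative.dev/pkg/client/injection/kube/informers/factory"
        "knative.dev/pkg/controller"
        "knative.dev/pkg/injection"
    )

    // Key is what Get(ctx) uses to retrieve the informer from the context.
    type Key struct{}

    func init() {
        // Registration happens at import time: once this package is imported
        // anywhere, StartInformers will run this informer even if Get(ctx)
        // is only called inside a conditional branch.
        injection.Default.RegisterInformer(withInformer)
    }

    func withInformer(ctx context.Context) (context.Context, controller.Informer) {
        f := factory.Get(ctx)
        inf := f.Autoscaling().V2beta2().HorizontalPodAutoscalers()
        return context.WithValue(ctx, Key{}, inf), inf.Informer()
    }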

I have tried using an interface and also generics, but didn't get far; I always ran into some sort of limitation. If I comment out the version that is not available in the cluster, it runs successfully, but of course that is not ideal.
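To make the "interface" idea concrete, here is a sketch of the kind of version-agnostic wrapper I mean (simplified names, not the actual code): a reduced view over the two typed listers that exposes only what the reconciler needs.

    import (
        autoscalingv2listers "k8s.io/client-go/listers/autoscaling/v2"
        autoscalingv2beta2listers "k8s.io/client-go/listers/autoscaling/v2beta2"
    )

    // hpaView is a hypothetical reduced view with only the fields the
    // reconciler needs, so the rest of the code stays version-agnostic.
    type hpaView struct {
        Name            string
        CurrentReplicas int32
    }

    type hpaGetter interface {
        Get(namespace, name string) (*hpaView, error)
    }

    // v2Getter adapts the autoscaling/v2 lister to hpaGetter.
    type v2Getter struct {
        lister autoscalingv2listers.HorizontalPodAutoscalerLister
    }

    func (g v2Getter) Get(namespace, name string) (*hpaView, error) {
        hpa, err := g.lister.HorizontalPodAutoscalers(namespace).Get(name)
        if err != nil {
            return nil, err
        }
        return &hpaView{Name: hpa.Name, CurrentReplicas: hpa.Status.CurrentReplicas}, nil
    }

    // v2beta2Getter adapts the autoscaling/v2beta2 lister the same way.
    type v2beta2Getter struct {
        lister autoscalingv2beta2listers.HorizontalPodAutoscalerLister
    }

    func (g v2beta2Getter) Get(namespace, name string) (*hpaView, error) {
        hpa, err := g.lister.HorizontalPodAutoscalers(namespace).Get(name)
        if err != nil {
            return nil, err
        }
        return &hpaView{Name: hpa.Name, CurrentReplicas: hpa.Status.CurrentReplicas}, nil
    }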

Any idea how to achieve this?

k8s-triage-robot commented 5 months ago

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot commented 4 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot commented 3 months ago

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot commented 3 months ago

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to [this](https://github.com/kubernetes/client-go/issues/1275#issuecomment-2016463864):

> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
>
> This bot triages issues according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
>
> You can:
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with [Issue Triage][1]
>
> Please send feedback to sig-contributor-experience at [kubernetes/community](https://github.com/kubernetes/community).
>
> /close not-planned
>
> [1]: https://www.kubernetes.dev/docs/guide/issue-triage/

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.

vedant15188 commented 2 months ago

/reopen

k8s-ci-robot commented 2 months ago

@vedant15188: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to [this](https://github.com/kubernetes/client-go/issues/1275#issuecomment-2042682108):

> /reopen

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.