kubernetes-sigs / descheduler

Descheduler for Kubernetes
https://sigs.k8s.io/descheduler
Apache License 2.0

Concurrency issues with the EvictPod method #1388

Closed by googs1025 2 months ago

googs1025 commented 2 months ago

Does EvictPod need to be locked to control concurrency?

https://github.com/kubernetes-sigs/descheduler/blob/master/pkg/descheduler/evictions/evictions.go#L157

```go
// pkg/descheduler/evictions/evictions.go
// EvictPod evicts a pod while exercising eviction limits.
// Returns true when the pod is evicted on the server side.
func (pe *PodEvictor) EvictPod(ctx context.Context, pod *v1.Pod, opts EvictOptions) bool {
    var span trace.Span
    ctx, span = tracing.Tracer().Start(ctx, "EvictPod", trace.WithAttributes(
        attribute.String("podName", pod.Name),
        attribute.String("podNamespace", pod.Namespace),
        attribute.String("reason", opts.Reason),
        attribute.String("operation", tracing.EvictOperation),
    ))
    defer span.End()
    ...

    // Is there a need to lock here?
    if pod.Spec.NodeName != "" {
        pe.nodepodCount[pod.Spec.NodeName]++
    }
    pe.namespacePodCount[pod.Namespace]++

    if pe.metricsEnabled {
        metrics.PodsEvicted.With(map[string]string{
            "result":    "success",
            "strategy":  opts.StrategyName,
            "namespace": pod.Namespace,
            "node":      pod.Spec.NodeName,
            "profile":   opts.ProfileName,
        }).Inc()
    }

    ...
    return true
}
```
googs1025 commented 2 months ago

I think that in a controller pattern, we should use locks to control concurrency. If necessary, I will submit a pull request to make the modifications. If not needed, I will close this issue.

damemi commented 2 months ago

@googs1025 each plugin is run synchronously, along with each EvictPod request, so I don't think locks are necessary (unless you are referring to some sort of external lock implementation for running multiple descheduler instances?)

Are you currently hitting any concurrency issues with descheduler?

googs1025 commented 2 months ago

> @googs1025 each plugin is run synchronously, along with each EvictPod request, so I don't think locks are necessary (unless you are referring to some sort of external lock implementation for running multiple descheduler instances?)
>
> Are you currently hitting any concurrency issues with descheduler?

Thank you for the reply! No, I was just a bit confused while reviewing the source code. I'm not sure whether there is a multi-instance implementation, but typically, when multiple instances or concurrency are involved, locks or the atomic package are used. The Evict method in the descheduler also goes through the Kubernetes framework, right? If so, it should not involve any concurrency issues.

damemi commented 2 months ago

> The Evict method in the descheduler also follows the Kubernetes framework, right? If so, it should not involve any concurrency issues.

Yes, this just calls the Kubernetes API, so we don't need to worry about concurrency there.

googs1025 commented 2 months ago

OK, I'll close this issue.

googs1025 commented 2 months ago

/close

k8s-ci-robot commented 2 months ago

@googs1025: Closing this issue.

In response to [this](https://github.com/kubernetes-sigs/descheduler/issues/1388#issuecomment-2090463676):

> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.