Closed 2 months ago
cc @swatisehgal @ffromani @Tal-or @PiotrProkop
One of the key assumptions we made when designing the NodeResourceTopology (NRT) plugin is that the kubelet config of a worker node changes VERY rarely, if at all, during the cluster lifetime. As a rule of thumb, it was expected to change roughly once every quarter (3 months) or so, and likely less often, so the chance of it changing during a scheduling cycle was deemed extremely low.
Recently we changed NFD to detect kubelet config changes and update the NRT objects accordingly, but as I see it the intent of that change was to avoid the extra maintenance burden of having to stop the NFD topology updater, clean up or delete the NRT objects, and restart the updater, rather than to enable the system to tolerate a more dynamic environment.
That said, I'll review the issue and its related PR carefully and get back.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
/reopen
@ffromani: Reopened this issue.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
/remove-lifecycle stale
/reopen
@ffromani: Reopened this issue.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Area
Other components
No response
What happened?
```go
func (ov *OverReserve) GetCachedNRTCopy(nodeName string, pod *corev1.Pod) (*topologyv1alpha2.NodeResourceTopology, bool) {
	ov.lock.Lock()
	defer ov.lock.Unlock()
	// A node flagged as hosting foreign pods is treated as a cache miss.
	if ov.nodesWithForeignPods.IsSet(nodeName) {
		return nil, false
	}
	// ... rest of the function (returning the cached NRT copy for nodeName) omitted ...
}
```
NRTs are obtained from this cache during the Filter and Score stages. Now suppose the topologyManagerPolicy or topologyManagerScope in a compute node's /var/lib/kubelet/config.yaml is changed: the topologyPolicies in the NRT object reported by NFD change accordingly, but the cache is not updated immediately, because the cache is only resynced for nodes returned by NodesMaybeOverReserved. As a result, the Filter stage dispatches to the wrong function based on the stale cached configuration. For example, for a compute node node1 whose cached NRT still reports SingleNUMANodePodLevel, if the topologyManagerScope in config.yaml is changed to container, the Filter stage still goes through singleNUMAPodLevelHandler instead of singleNUMAContainerLevelHandler.
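A minimal, self-contained sketch of the dispatch pattern described above (the type and handler names below are simplified stand-ins, not the plugin's exact code): the handler is chosen purely from whatever policy the cached NRT copy advertises, so a stale copy drives the wrong handler.

```go
package main

import "fmt"

// Simplified stand-ins for the NRT object and the filter handlers; illustrative only.
type nodeResourceTopology struct {
	Name             string
	TopologyPolicies []string // e.g. "SingleNUMANodePodLevel", "SingleNUMANodeContainerLevel"
}

type filterHandler func(nrt nodeResourceTopology) string

func podScopeHandler(nrt nodeResourceTopology) string {
	return "filtered " + nrt.Name + " at pod scope"
}

func containerScopeHandler(nrt nodeResourceTopology) string {
	return "filtered " + nrt.Name + " at container scope"
}

// handlerFor picks the handler from whatever policy the (possibly stale)
// cached NRT copy advertises -- this is the dispatch the report is about.
func handlerFor(cached nodeResourceTopology) filterHandler {
	handlers := map[string]filterHandler{
		"SingleNUMANodePodLevel":       podScopeHandler,
		"SingleNUMANodeContainerLevel": containerScopeHandler,
	}
	for _, policy := range cached.TopologyPolicies {
		if h, ok := handlers[policy]; ok {
			return h
		}
	}
	return nil
}

func main() {
	// The kubelet on node1 was switched to container scope and NFD already
	// updated the NRT object, but the scheduler cache still holds the old copy.
	staleCachedCopy := nodeResourceTopology{
		Name:             "node1",
		TopologyPolicies: []string{"SingleNUMANodePodLevel"},
	}
	if h := handlerFor(staleCachedCopy); h != nil {
		// The pod-scope handler runs even though the node is now container-scoped.
		fmt.Println(h(staleCachedCopy))
	}
}
```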
What did you expect to happen?
There should be a way for the Filter and Score stages to obtain the latest topologyPolicies when they retrieve cached NRT data, so that the wrong handler is not invoked.
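One possible direction, sketched below with simplified hypothetical types (not the plugin's real cache API): have the cache lookup compare the cached object's topologyPolicies against the latest NRT from the informer/lister and refresh the entry when they diverge, so the handlers are chosen from up-to-date policies.

```go
package main

import (
	"fmt"
	"reflect"
)

// Hypothetical shapes: the real cache works with topologyv1alpha2.NodeResourceTopology
// objects and an informer-backed lister, not these simplified structs.
type nrt struct {
	Name             string
	TopologyPolicies []string
}

type nrtCache struct {
	cached map[string]*nrt        // scheduler-side, possibly stale, copies
	latest func(node string) *nrt // e.g. a lookup against the NRT informer/lister
}

// getCachedNRTCopy hands back the cached copy only while its topology policies
// still match the latest NRT reported by NFD; when they diverge (kubelet policy
// or scope changed on the node) it refreshes the entry instead of letting the
// caller dispatch to the wrong handler.
func (c *nrtCache) getCachedNRTCopy(node string) (*nrt, bool) {
	cached, ok := c.cached[node]
	if !ok {
		return nil, false
	}
	fresh := c.latest(node)
	if fresh != nil && !reflect.DeepEqual(cached.TopologyPolicies, fresh.TopologyPolicies) {
		c.cached[node] = fresh
		return fresh, true
	}
	return cached, true
}

func main() {
	c := &nrtCache{
		cached: map[string]*nrt{"node1": {Name: "node1", TopologyPolicies: []string{"SingleNUMANodePodLevel"}}},
		latest: func(string) *nrt {
			// Pretend NFD has just republished the NRT with the new container scope.
			return &nrt{Name: "node1", TopologyPolicies: []string{"SingleNUMANodeContainerLevel"}}
		},
	}
	got, _ := c.getCachedNRTCopy("node1")
	fmt.Println(got.TopologyPolicies) // [SingleNUMANodeContainerLevel]
}
```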
How can we reproduce it (as minimally and precisely as possible)?
No response
Anything else we need to know?
No response
Kubernetes version
```
[root@master1 ~]# kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.13", GitCommit:"49433308be5b958856b6949df02b716e0a7cf0a3", GitTreeState:"clean", BuildDate:"2023-04-12T12:15:50Z", GoVersion:"go1.19.8", Compiler:"gc", Platform:"linux/arm64"}
Kustomize Version: v4.5.4
Server Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.13", GitCommit:"49433308be5b958856b6949df02b716e0a7cf0a3", GitTreeState:"clean", BuildDate:"2023-04-12T12:08:36Z", GoVersion:"go1.19.8", Compiler:"gc", Platform:"linux/arm64"}
```
Scheduler Plugins version
master