Open SergeyKanzhelev opened 1 year ago
/triage accepted /assign @matthyx
On the surface this seems reasonable:
A couple of points:
Today we calculate the changes that need to be performed on the Pod, including the next init container to run. Then we apply these actions and update the Status of the Pod on the API server using convertStatusToAPIStatus from generateAPIPodStatus. Once we have converted the runtime status to the API status, we also call kl.probeManager.UpdatePodStatus(pod.UID, s), passing the API status object to be updated with the probe results. Thus the probe results are not visible to the findNextInitContainerToRun function.
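To make the ordering concrete, here is a minimal, self-contained sketch. All types and names below are stand-ins rather than the real kubelet types (kubecontainer.PodStatus, v1.PodStatus, the prober manager); only the call order is meant to match the description above.

```go
package main

import "fmt"

// Stand-ins for the real kubelet types (kubecontainer.PodStatus, v1.PodStatus).
type runtimePodStatus struct{}                      // carries no probe results today
type apiPodStatus struct{ started map[string]bool } // probe results are merged in here

type proberManager struct{ startupResults map[string]bool }

// UpdatePodStatus plays the role of kl.probeManager.UpdatePodStatus(pod.UID, s):
// it only decorates the API-facing status, after the sync decisions were already made.
func (p *proberManager) UpdatePodStatus(podUID string, s *apiPodStatus) {
	s.started = p.startupResults
}

// findNextInitContainerToRun only receives the runtime status, so it cannot
// consult startup-probe results in the current flow.
func findNextInitContainerToRun(_ *runtimePodStatus) string { return "init-0" }

func syncPod(podUID string, rs *runtimePodStatus, pm *proberManager) {
	next := findNextInitContainerToRun(rs) // 1. compute actions, probe-blind
	s := &apiPodStatus{}                   // 2. convert runtime -> API status
	pm.UpdatePodStatus(podUID, s)          // 3. probe results merged only now
	fmt.Printf("next init container: %s, startup results: %v\n", next, s.started)
}

func main() {
	pm := &proberManager{startupResults: map[string]bool{"sidecar": true}}
	syncPod("pod-uid", &runtimePodStatus{}, pm)
}
```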
After digging into this code area, I found that findNextInitContainersToRun is called by (*kubeGenericRuntimeManager).computePodActions, and *kubeGenericRuntimeManager has its own probe managers. (It already uses those probe managers in computePodActions.)
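Roughly speaking (stand-in types below, not the actual kuberuntime code), the existing pattern looks something like this: the runtime manager's own startup-results cache is already queried while computing pod actions, for example to restart a container whose startup probe has failed.

```go
package main

import "fmt"

// Stand-ins for kubecontainer.ContainerID and the proberesults-style cache.
type containerID string

type probeResult int

const (
	probeSuccess probeResult = iota
	probeFailure
)

// resultsManager mimics the startup-results cache that the runtime manager
// already owns and queries while computing pod actions.
type resultsManager struct{ results map[containerID]probeResult }

func (m *resultsManager) Get(id containerID) (probeResult, bool) {
	r, ok := m.results[id]
	return r, ok
}

// computePodActions (heavily simplified): a container whose startup probe has
// reported failure is marked for restart.
func computePodActions(running []containerID, startup *resultsManager) []containerID {
	var restart []containerID
	for _, id := range running {
		if r, ok := startup.Get(id); ok && r == probeFailure {
			restart = append(restart, id)
		}
	}
	return restart
}

func main() {
	sm := &resultsManager{results: map[containerID]probeResult{"app": probeFailure}}
	fmt.Println("containers to restart:", computePodActions([]containerID{"app", "sidecar"}, sm))
}
```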
The proposal is to refactor prober_manager.go to operate on kubecontainer.PodStatus instead of v1.PodStatus. This will require adding the probe status to kubecontainer.PodStatus. We will also move the call into prober_manager.go early in the syncPod execution, so we know the startup probe results before we call into findNextInitContainerToRun.
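Just to illustrate the proposed direction (everything below is a hypothetical sketch, not an agreed API; the field and function names are made up): the runtime-level status would carry the startup-probe results, the prober manager would populate them, and syncPod would do so before computing pod actions.

```go
package main

import "fmt"

// Hypothetical shape only: the runtime-level status (kubecontainer.PodStatus in
// the real code) would carry startup-probe results directly.
type runtimePodStatus struct {
	startupStarted map[string]bool // hypothetical new field for probe results
}

type proberManager struct{ startupResults map[string]bool }

// Hypothetical variant of UpdatePodStatus that operates on the runtime status
// instead of v1.PodStatus.
func (p *proberManager) UpdateRuntimePodStatus(s *runtimePodStatus) {
	s.startupStarted = p.startupResults
}

// With the probe results on the runtime status, the next init container can be
// gated on the previous ones having passed their startup probes.
func findNextInitContainerToRun(initOrder []string, s *runtimePodStatus) (string, bool) {
	for _, name := range initOrder {
		if !s.startupStarted[name] {
			return name, true
		}
	}
	return "", false
}

func syncPod(s *runtimePodStatus, pm *proberManager, initOrder []string) {
	pm.UpdateRuntimePodStatus(s) // moved early, before pod actions are computed
	if next, ok := findNextInitContainerToRun(initOrder, s); ok {
		fmt.Println("next init container to run:", next)
	} else {
		fmt.Println("all init containers have started")
	}
}

func main() {
	pm := &proberManager{startupResults: map[string]bool{"sidecar": true}}
	syncPod(&runtimePodStatus{}, pm, []string{"sidecar", "init-db"})
}
```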
IMHO, one alternative could be to leave kuberuntime.PodStatus as it is and use those probe managers in findNextInitContainersToRun.
@matthyx @SergeyKanzhelev What do you think?
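To make the alternative concrete, a small sketch (again with stand-in types and made-up names): kubecontainer.PodStatus stays untouched, and the startup-results cache is handed to findNextInitContainerToRun directly.

```go
package main

import "fmt"

type containerID string

// Stand-in for the proberesults-style cache the runtime manager already owns.
type startupResults struct{ started map[containerID]bool }

func (s *startupResults) Started(id containerID) bool { return s.started[id] }

// findNextInitContainerToRun consults the probe cache directly instead of a
// new field on kubecontainer.PodStatus.
func findNextInitContainerToRun(initOrder []containerID, sr *startupResults) (containerID, bool) {
	for _, id := range initOrder {
		if !sr.Started(id) {
			return id, true
		}
	}
	return "", false
}

func main() {
	sr := &startupResults{started: map[containerID]bool{"sidecar": true}}
	next, more := findNextInitContainerToRun([]containerID{"sidecar", "init-db"}, sr)
	fmt.Println(next, more) // prints: init-db true
}
```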
it might work, but here I have implemented your other idea https://github.com/SergeyKanzhelev/kubernetes/pull/3
This issue is labeled with priority/important-soon
but has not been updated in over 90 days, and should be re-triaged.
Important-soon issues must be staffed and worked on either currently, or very soon, ideally in time for the next release.
You can:
/triage accepted (org members only)
/priority important-longterm or /priority backlog
/close
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
This one will be worked on after Sidecars are in GA.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
After 90d of inactivity, lifecycle/stale is applied
After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
/remove-lifecycle stale
/close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
After 90d of inactivity, lifecycle/stale is applied
After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
/remove-lifecycle stale
/close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
After 90d of inactivity, lifecycle/stale is applied
After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
/remove-lifecycle stale
/close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
still waiting for sidecars GA (1.32) /remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
After 90d of inactivity, lifecycle/stale is applied
After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
/remove-lifecycle stale
/close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
In the sidecar containers KEP we plan to use the Startup probes state as a way to decide whether the next init container needs to start being initialized. Before this KEP, probe results were not used in any Pod lifecycle decisions. In order to implement this we need to have the probe status at the time we call findNextInitContainerToRun. This may need some refactoring of how prober_manager is initialized and used.

Today we calculate the changes that need to be performed on the Pod, including the next init container to run. Then we apply these actions and update the Status of the Pod on the API server using convertStatusToAPIStatus from generateAPIPodStatus. Once we have converted the runtime status to the API status, we also call kl.probeManager.UpdatePodStatus(pod.UID, s), passing the API status object to be updated with the probe results. Thus the probe results are not visible to the findNextInitContainerToRun function.

The proposal is to refactor prober_manager.go to operate on kubecontainer.PodStatus instead of v1.PodStatus. This will require adding the probe status to kubecontainer.PodStatus. We will also move the call into prober_manager.go early in the syncPod execution, so we know the startup probe results before we call into findNextInitContainerToRun.

cc @smarterclayton
/sig node /kind feature /priority important-soon