If I understand the source code correctly, the lower bound and target are not simply the values you pass in via the args:

```go
// Flag definition: this is the value that comes from the args.
podMinMemoryMb = flag.Float64("pod-recommendation-min-memory-mb", 250, `Minimum memory recommendation for a pod`)

func (r *podResourceRecommender) GetRecommendedPodResources(containerNameToAggregateStateMap model.ContainerNameToAggregateStateMap) RecommendedPodResources {
	var recommendation = make(RecommendedPodResources)
	// If there is no previous state for any container, return an empty
	// recommendation (i.e. fall back to the args/defaults); otherwise
	// the fraction logic below applies.
	if len(containerNameToAggregateStateMap) == 0 {
		return recommendation
	}
	fraction := 1.0 / float64(len(containerNameToAggregateStateMap))
	minResources := model.Resources{
		model.ResourceCPU: model.ScaleResource(model.CPUAmountFromCores(*podMinCPUMillicores*0.001), fraction),
		// This line is important: only a fraction of the configured
		// minimum memory is used per container.
		model.ResourceMemory: model.ScaleResource(model.MemoryAmountFromBytes(*podMinMemoryMb*1024*1024), fraction),
	}
	recommender := &podResourceRecommender{
		WithMinResources(minResources, r.targetEstimator),
		WithMinResources(minResources, r.lowerBoundEstimator),
		WithMinResources(minResources, r.upperBoundEstimator),
	}
	for containerName, aggregatedContainerState := range containerNameToAggregateStateMap {
		recommendation[containerName] = recommender.estimateContainerResources(aggregatedContainerState)
	}
	return recommendation
}
```

You can see that only a fraction of the value given in the args is used, but I don't understand how that works.
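To make the fraction logic concrete, here is the arithmetic in isolation: a minimal sketch, assuming the 250 MB flag default shown above and some made-up container counts.

```go
package main

import "fmt"

func main() {
	// Pod-level minimum memory, i.e. the flag default shown above:
	// --pod-recommendation-min-memory-mb=250
	podMinMemoryMb := 250.0

	// fraction := 1.0 / float64(len(containerNameToAggregateStateMap))
	// splits the pod-level minimum evenly across the containers.
	for _, numContainers := range []int{1, 2, 4} {
		fraction := 1.0 / float64(numContainers)
		perContainerMb := podMinMemoryMb * fraction
		fmt.Printf("%d container(s): per-container floor = %.1f MB, pod total = %.0f MB\n",
			numContainers, perContainerMb, perContainerMb*float64(numContainers))
	}
}
```

So the fraction never reduces the pod-level minimum; it only divides it so that the per-container floors sum back to the configured value (e.g. 125 MB each for a two-container pod).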
Given that tools like goldilocks seem to take those values verbatim, what is the correct way to interpret them, given that the observed pods never reached such usage?
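One piece of the interpretation seems clear from the snippet above: WithMinResources decorates each base estimator so that no resource falls below the per-container floor. Below is a minimal sketch of that pattern using simplified stand-in types of my own (Resources, Estimator, constEstimator are not the real model types), so treat it as an illustration rather than the upstream implementation.

```go
package main

import "fmt"

// Simplified stand-ins for the real VPA model types (my assumption).
type Resources map[string]float64

type Estimator interface {
	Estimate() Resources
}

// constEstimator stands in for the histogram/percentile-based estimators;
// it returns fixed values so the clamping behaviour is visible.
type constEstimator struct{ r Resources }

func (c constEstimator) Estimate() Resources { return c.r }

// minResourcesEstimator raises every resource to at least the given floor;
// this is how the scaled minimum ends up in lowerBound, target and upperBound.
type minResourcesEstimator struct {
	min  Resources
	base Estimator
}

func (e minResourcesEstimator) Estimate() Resources {
	out := Resources{}
	for res, amount := range e.base.Estimate() {
		if amount < e.min[res] {
			amount = e.min[res] // estimate below the floor: the floor wins
		}
		out[res] = amount
	}
	return out
}

func main() {
	// Per-container floor for a two-container pod: 250 MB * 0.5 = 125 MB.
	floor := Resources{"memory-mb": 125}
	// Pretend the histogram-based estimate for this container is only 40 MB.
	base := constEstimator{Resources{"memory-mb": 40}}

	fmt.Println(minResourcesEstimator{min: floor, base: base}.Estimate())
	// Output: map[memory-mb:125] (never below the floor)
}
```

Under that reading, the configured minimum acts as a floor, not as the recommendation itself: lowerBound and target can never drop below the scaled floor, while anything above it (such as the 1.6G target mentioned below) has to come from the histogram-based estimators.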
Hello, I'm currently evaluating VPA, so I installed version 0.10.0 and I'm using the recommender and updater components on an EKS 1.21 cluster with metrics-server 0.6.1 previously installed.
What I've tried to do is to configure a non-updating VPA against a random deployment:
The recommender seems OK with it; it logs activity and no complaints:
However, I'm not sure about the resulting recommended memory:
given that the recommender has been launched with these params:
If I understand the VPA fields correctly, I'd expect to read at least 100M under lower bound, and something smaller than 1.6G in target, given that this pod has never come anywhere near such high memory usage. What am I missing? Many thanks in advance, regards