Closed by lduparc 1 year ago
Interesting, it's taken from cAdvisor stats:
I need to dig into it to understand why that's different from the pod compute resources.
Hi,
Thinking about this issue. FYI, I'm using Kubernetes 1.19, and I saw some breaking changes between kube-state-metrics and cAdvisor that might explain (not sure) why we don't have the right information in Kubebox.
It's not related to Kubernetes 1.19. I checked Kubebox version 0.7.0 and everything is OK. This issue has existed since version 0.8.0, and it still exists on v0.9.0.
Thanks
Thanks for the details. That helps a lot. Before 0.8.0, the cAdvisor embedded in the kubelet was queried through the container stats API. Starting with 0.8.0, it uses the API of the external cAdvisor DaemonSet.
Here is what was used from the container stats endpoint response:
While the following bit from the cAdvisor API doesn't seem to match:
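To illustrate the kind of mismatch that can arise between the two sources, here's a minimal sketch of how cumulative CPU counters are typically turned into a usage rate. The field names (`timestamp_ns`, `cpu.usage.total` in nanoseconds) are assumptions based on the cAdvisor v1 container stats layout, not taken from this issue:

```python
# Sketch: convert two cAdvisor-style cumulative CPU samples into millicores.
# Field names are assumptions modeled on the cAdvisor v1 container stats
# layout (cpu.usage.total is cumulative CPU time in nanoseconds).

def cpu_millicores(prev, curr):
    """Usage rate between two samples, in millicores (1000m = 1 core)."""
    delta_usage_ns = curr["cpu"]["usage"]["total"] - prev["cpu"]["usage"]["total"]
    delta_time_ns = curr["timestamp_ns"] - prev["timestamp_ns"]
    if delta_time_ns <= 0:
        return 0.0
    # fraction of one core used over the interval, scaled to millicores
    return delta_usage_ns / delta_time_ns * 1000.0

prev = {"timestamp_ns": 0, "cpu": {"usage": {"total": 0}}}
curr = {"timestamp_ns": 1_000_000_000, "cpu": {"usage": {"total": 250_000_000}}}
print(cpu_millicores(prev, curr))  # 250.0 -> a quarter of a core
```

If the fields consumed from the old container stats endpoint don't line up with the ones the external cAdvisor returns, a computation like this can silently pick up the wrong value.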
Hi, hope the end of the year went well for you.
Any news on this issue? Were you able to reproduce it?
Thanks.
@lduparc I don't have any update. I've mostly been AFK to enjoy end of 2020 😄. I'll work on it ASAP and keep you posted.
Happy New Year,
Let me know if you need help to test.
I've removed the limits from the time series in 0448a18e9e6acabba004d44fd1d0625027ba48ed, as they tend to flatten the other series. I may find a better way to bring the limits back into the UI.
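To illustrate the flattening effect with made-up numbers: when the limit sits far above typical usage, the chart's y-axis must stretch up to the limit, so the usage variation collapses into a thin band:

```python
# Sketch with hypothetical numbers: why plotting a high CPU limit alongside
# usage flattens the usage curve. Including the limit in the y-axis range
# leaves the usage variation only a sliver of the chart height.

usage = [110, 120, 135, 125, 115]  # millicores, hypothetical samples
limit = 1000                       # millicores, hypothetical CPU limit

usage_span = max(usage) - min(usage)
range_without = max(usage) - min(usage)
range_with = max(usage + [limit]) - min(usage)

print(usage_span / range_without)         # 1.0   -> usage fills the chart
print(round(usage_span / range_with, 3))  # 0.028 -> usage looks nearly flat
```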
Running Kubebox (latest version)
The CPU limit (red line) displays the CPU request from the config, not the actual limit set in the Deployment config. Checking the Kubernetes dashboard, the CPU limit is set correctly there. Everything is OK concerning memory.
Eg:
Screenshot from Kubebox:
Screenshot from Kubernetes dashboard:
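For reference, the request and the limit are set separately in the Deployment's container spec (values below are hypothetical); the red line should track `limits.cpu`, not `requests.cpu`:

```yaml
# Hypothetical resources fragment from a Deployment's container spec.
# requests.cpu is what the scheduler reserves; limits.cpu is the hard cap.
resources:
  requests:
    cpu: 100m    # the value the red line reportedly shows
  limits:
    cpu: 500m    # the value the red line should show
```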