Closed jgiles closed 6 years ago
In general, 50 MB is not enough, because the API server does not provide paging. The Dashboard backend therefore always reads all data server-side and pages on the client side. Typically, 300 MB is configured:
For a large cluster, it is best to increase the memory limits for Dashboard. The API server does not support pagination, so we have to load all resources into memory.
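The load-everything-then-page pattern described above can be sketched like this (a toy illustration of client-side paging, not Dashboard's actual code):

```python
def page(items, page_number, page_size):
    """Client-side paging: the full list is already in memory,
    so a page is just a slice of it."""
    start = page_number * page_size
    return items[start:start + page_size]

# The backend first materializes every item (O(total) memory)...
all_config_maps = [f"cm-{i}" for i in range(1000)]
# ...and only then selects the requested page.
print(page(all_config_maps, page_number=2, page_size=5))
```

Memory usage is proportional to the total number of objects in the cluster, not to the page size, which is why the limit has to scale with cluster size.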
Environment
We install Dashboard using https://github.com/kubernetes/charts/tree/master/stable/kubernetes-dashboard.
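For reference, the chart lets the memory limit be raised through its values. A hedged fragment (the `resources` value name and defaults are assumptions about this chart's `values.yaml`; check the chart before applying):

```yaml
# values.yaml fragment (assumed value names for stable/kubernetes-dashboard)
resources:
  limits:
    cpu: 100m
    memory: 300Mi   # the ~50Mi default discussed above is too small for large clusters
  requests:
    cpu: 100m
    memory: 300Mi
```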
Steps to reproduce
Observed result
The Dashboard container is killed for violating its memory limit.
Dashboard logs:
(the logs end there; the container exits)
Relevant journalctl logs from the node:
Expected result
No crash.
Comments
Our obvious short-term solution is to give Dashboard more memory. However, it looks like Dashboard might be reading the contents of many (all?) config maps into memory at once, based on tracing the code from the last log statement above:
https://github.com/kubernetes/dashboard/blob/2bedc9eb4be9cf6c92125cb37638dff032d7dabf/src/app/backend/resource/config/config.go#L43
This seems likely to interact very poorly with Helm, since Helm creates config maps to store data about each release. For example, running against the cluster in question:
(we've only been operating in this cluster for a few weeks, so the problem is likely to get steadily worse)
We will also take steps to limit the number of artifacts Helm leaves around, but it seems like Dashboard shouldn't be trying to pull them all down at once.
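As a rough way to see how fast these accumulate, and to cap them, something like the following may help (assuming Helm 2 with Tiller, which labels its release ConfigMaps `OWNER=TILLER` in `kube-system`, and a Tiller version new enough to support `--history-max`):

```shell
# Count the release ConfigMaps Tiller has accumulated:
kubectl get configmaps --namespace kube-system -l OWNER=TILLER --no-headers | wc -l

# Cap retained release revisions so the set stops growing:
helm init --upgrade --history-max 20
```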