Expected Behaviour
It is normal for the Watcher container to allocate more memory when there is a heavy load, but the memory consumption is expected to drop back to a baseline level when there is no workload.
Actual Behaviour
The Watcher container's memory consumption gradually increases with workload, but when there is no workload, the garbage collector reclaims only some of the memory. As a result, the in-use container memory keeps increasing over time until the container crosses its memory limit and is OOM-killed.
Steps to Reproduce the Problem
Install Tekton Results with the "logs API" enabled.
Run several pipelines within a short period of time to put load on the controller.
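The load step above could be sketched as a shell loop that creates PipelineRuns in quick succession. This is a hypothetical reproduction script: the pipeline name (`my-pipeline`), namespace, and run count are placeholders, not values from the original report.

```shell
#!/bin/sh
# Hypothetical load generator: creates 50 PipelineRuns back to back so the
# Results watcher has to reconcile many objects in a short window.
# Assumes a cluster with Tekton Pipelines + Tekton Results installed and a
# Pipeline named "my-pipeline" in the "default" namespace (placeholder names).
for i in $(seq 1 50); do
  kubectl create -f - <<EOF
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  generateName: watcher-load-
  namespace: default
spec:
  pipelineRef:
    name: my-pipeline
EOF
done
```

After the runs complete, leave the cluster idle and watch the watcher container's memory (e.g. in Grafana) to see whether it returns to its baseline.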
Additional Info
This screenshot from Grafana displays the API watcher memory utilisation over time, which starts at a very low value (when the pod starts, after 01/06 15:00) and then gradually increases with load. But when there is no workload, only a small amount of memory is reclaimed.
This screenshot displays the items in the controller queue at that time, which also gives an estimate of the load on the watcher.