Open r4rajat opened 4 months ago
@r4rajat Could you specify what version (and SHA) you are referring to for the respective upstream and downstream latest tags?
Also, it would be super helpful if you could run a profiler and let us know what is actually consuming the memory. Due to a shortage of active maintainers, it may be difficult for us to pick this up ourselves. Any help in solving this would be greatly appreciated!
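For anyone picking this up, something like the following would capture both the resolved image digest and the current CPU usage (the `myoperator-system` namespace is a hypothetical name here; the `control-plane=controller-manager` label is the default from operator-sdk scaffolding):

```shell
# Print the exact image digest (SHA) the running manager pod resolved
# from the :latest tag
kubectl -n myoperator-system get pods \
  -l control-plane=controller-manager \
  -o jsonpath='{.items[0].status.containerStatuses[0].imageID}'

# Snapshot current CPU/memory consumption of the manager pod
# (requires metrics-server or the OpenShift monitoring stack)
kubectl -n myoperator-system top pods \
  -l control-plane=controller-manager
```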
Issues go stale after 90d of inactivity.
Mark the issue as fresh by commenting `/remove-lifecycle stale`.
Stale issues rot after an additional 30d of inactivity and eventually close.
Exclude this issue from closing by commenting `/lifecycle frozen`.
If this issue is safe to close now, please do so with `/close`.
/lifecycle stale
Bug Report
What did you do?
I am creating a Helm-based operator for Red Hat OpenShift. Earlier I was using the downstream image, i.e. `registry.redhat.io/openshift4/ose-helm-operator:latest`, as my base image, and my `operator-controller-manager` deployment was using around 0.06-0.08 cores:
![image](https://github.com/operator-framework/operator-sdk/assets/37516416/3a0233da-cf63-40fe-b5e5-13e951f94671)
Then I updated my base image to the upstream image, i.e. `quay.io/operator-framework/helm-operator:latest`, and the core usage for the same `operator-controller-manager` deployment increased drastically, to around 0.8-0.9 cores:
![image](https://github.com/operator-framework/operator-sdk/assets/37516416/32ab315c-92d5-4cbb-b876-dbdd876b6a3e)
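For context, the only change on my side was the base image in the operator's Dockerfile, roughly as sketched below (the remaining lines are assumed to match the standard operator-sdk Helm scaffolding):

```dockerfile
# Before: downstream Red Hat base image (normal CPU usage)
# FROM registry.redhat.io/openshift4/ose-helm-operator:latest

# After: upstream base image, which exhibits the high CPU usage
FROM quay.io/operator-framework/helm-operator:latest

# Standard Helm operator layout from operator-sdk scaffolding
ENV HOME=/opt/helm
COPY watches.yaml ${HOME}/watches.yaml
COPY helm-charts ${HOME}/helm-charts
WORKDIR ${HOME}
```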
What did you expect to see?
Usual core usage, around 0.05-0.06 cores.
What did you see instead? Under which circumstances?
Very high core usage, around 1 full core.
Environment
Operator type:
/language helm
Kubernetes cluster type:
OpenShift v4.13.4
$ operator-sdk version
$ go version (if language is Go)
$ kubectl version
Possible Solution
Additional context