```console
NAME                       STATUS   ROLES    AGE   VERSION           INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                             KERNEL-VERSION   CONTAINER-RUNTIME
gk3-hlf-default-pool-xxx   Ready    <none>   13h   v1.27.3-gke.100   10.128.0.54   XXX           Container-Optimized OS from Google   5.15.109+        containerd://1.7.0
gk3-hlf-pool-1-xxx         Ready    <none>   20h   v1.27.3-gke.100   10.128.0.51   XXX           Container-Optimized OS from Google   5.15.109+        containerd://1.7.0
gk3-hlf-pool-1-xxx         Ready    <none>   22h   v1.27.3-gke.100   10.128.0.50   XXX           Container-Optimized OS from Google   5.15.109+        containerd://1.7.0
gk3-hlf-pool-1-xxx         Ready    <none>   84m   v1.27.3-gke.100   10.128.0.59   XXX           Container-Optimized OS from Google   5.15.109+        containerd://1.7.0
gk3-hlf-pool-1-xxx         Ready    <none>   13h   v1.27.3-gke.100   10.128.0.53   XXX           Container-Optimized OS from Google   5.15.109+        containerd://1.7.0
```
What happened?
When deploying peers, the manager is OOM-killed in GKE and is not restarted successfully. GKE in Autopilot mode only considers the resource limits of the deployment, and the default of 128Mi is not sufficient.
I had to install the operator with:
But maybe the defaults could be increased?
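For reference, the kind of override meant above would presumably look something like the sketch below. The exact command from the report is not included, and the value paths (`resources.requests.memory`, `resources.limits.memory`) are an assumption about the hlf-operator chart, not taken from the report:
```console
# Hypothetical install command: assumes the hlf-operator chart exposes the
# manager container resources under a top-level `resources` value.
helm install hlf-operator ./chart/hlf-operator \
  --set resources.requests.memory=256Mi \
  --set resources.limits.memory=1Gi
```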
What did you expect to happen?
The manager survives and does not crash.
How can we reproduce it (as minimally and precisely as possible)?
Anything else we need to know?
The value used in https://github.com/hyperledger/bevel-operator-fabric/blob/8aa3a7741499c126b9004d9dc5a8e2f54a44333c/chart/hlf-operator/templates/deployment.yaml#L64 is 128Mi, and the limit in the deployment also appears to be too low. After setting it to 1Gi, the manager no longer crashes.
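A bumped resources block for the manager container would look roughly like this; only the 1Gi memory limit comes from the report, while the request and CPU figures are placeholders rather than values from the chart:
```yaml
# Sketch of an increased resources block for the manager container.
# The 1Gi memory limit is the value that resolved the OOM in this report;
# the other values are placeholders.
resources:
  requests:
    cpu: 100m
    memory: 256Mi
  limits:
    cpu: 500m
    memory: 1Gi
```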
Kubernetes version