Nicola-Sergio opened this issue 1 month ago
You are using Flux v0.41.2, which reached end-of-life almost 2 years ago. Upgrade to Flux 2.5 and, if the problem persists, report it here; no one can help you on that old version.
Ok, I will update it as soon as possible. Could you help me at this point, if possible?
Would it be possible to estimate the RAM usage of the source-controller in my case, similar to what @stefanprodan explained here?
After you upgrade to Flux 2.5, configure the Helm index caching; with the default 1GB RAM limit it should then work fine.
Docs here: https://fluxcd.io/flux/installation/configuration/vertical-scaling/#enable-helm-repositories-caching
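For reference, the linked page enables this by passing cache flags to the source-controller through a kustomize patch in the flux-system Kustomization. Below is a minimal sketch assuming the standard gotk-components layout; the flag values are illustrative, so verify the exact flags and recommendations against the linked docs:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - gotk-components.yaml
  - gotk-sync.yaml
patches:
  # Cache parsed HelmRepository indexes so they are not re-loaded on every
  # HelmChart reconciliation (values below are illustrative).
  - patch: |
      - op: add
        path: /spec/template/spec/containers/0/args/-
        value: --helm-cache-max-size=10
      - op: add
        path: /spec/template/spec/containers/0/args/-
        value: --helm-cache-ttl=60m
      - op: add
        path: /spec/template/spec/containers/0/args/-
        value: --helm-cache-purge-interval=5m
    target:
      kind: Deployment
      name: source-controller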
Will it also work when the Helm charts come from a GitRepository rather than a HelmRepository?
I don't think the OOM is related to the Git operations but to Helm.
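(For context, a Helm chart sourced from a GitRepository rather than a HelmRepository is declared roughly as in the sketch below; the release name, source name, and chart path are placeholders, not the actual resources from this cluster.)

apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: my-app
spec:
  interval: 10m
  chart:
    spec:
      # Chart taken from a Git source instead of a HelmRepository index
      chart: ./charts/my-app
      sourceRef:
        kind: GitRepository
        name: project-2
        namespace: flux-system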
Each of these Kustomization resources has a spec.interval set to 1 minute, so changes are pulled frequently.
The Kustomization interval has nothing to do with the Git pull frequency, see the recommended settings here: https://fluxcd.io/flux/components/kustomize/kustomizations/#recommended-settings
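Roughly, that split looks like the sketch below (values are illustrative; see the linked page for the actual recommendations). The GitRepository's own spec.interval controls how often Git is polled, while the Kustomization interval only controls how often the applied cluster state is re-reconciled.

apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: project-2
  namespace: flux-system
spec:
  interval: 60m        # periodic drift detection, not Git polling
  retryInterval: 2m    # retry sooner after a failed reconciliation
  timeout: 3m
  sourceRef:
    kind: GitRepository
    name: project-2    # its spec.interval sets the Git pull frequency
  path: ./deploy       # placeholder path
  prune: true
  wait: true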
Describe the bug
Hi everyone,
I'm observing an issue where the source-controller starts in a healthy state (1/1 Running), but after an initial OOMKilled event it enters a loop where Kubernetes continuously creates new pods that are almost immediately Evicted. Over time, this leads to a large number of failed source-controller pods accumulating in the flux-system namespace.
The situation is the following:
Steps to reproduce
I'm running a single AKS cluster which hosts three separate development environments, each for a different project.
Each project is managed via its own Git repository, and I've structured Flux in the following way:
I have one main Git repository that manages the resources for Project 1.
Inside this repo, I have defined:
- GitRepository resources, each pointing to the Git repositories of Project 2 and Project 3.
- Kustomization resources, one for each of those GitRepository objects.

Each of these Kustomization resources has a spec.interval set to 1 minute, so changes are pulled frequently (see the sketch below).
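A minimal sketch of this layout, with placeholder names, URLs, and paths (not the actual manifests from the repositories):

apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: GitRepository
metadata:
  name: project-2
  namespace: flux-system
spec:
  interval: 1m                              # placeholder; not specified in the report
  url: https://example.com/org/project-2    # placeholder URL
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: project-2
  namespace: flux-system
spec:
  interval: 1m          # as described above
  sourceRef:
    kind: GitRepository
    name: project-2
  path: ./deploy        # placeholder path
  prune: true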
kubectl get helmrelease --all-namespaces:
These are all 18 HelmReleases in the cluster.
flux stats:
Would it be possible to estimate the RAM usage of the source-controller in my case, similar to what @stefanprodan explained here?
Expected behavior
None
Screenshots and recordings
No response
OS / Distro
Ubuntu 22.04.3 LTS
Flux version
v0.41.2
Flux check
► checking prerequisites
✗ flux 0.41.2 <2.5.1 (new version is available, please upgrade)
✔ Kubernetes 1.28.3 >=1.20.6-0
► checking controllers
✔ helm-controller: deployment ready
► ghcr.io/fluxcd/helm-controller:v0.30.0
✔ image-automation-controller: deployment ready
► ghcr.io/fluxcd/image-automation-controller:v0.30.0
✔ image-reflector-controller: deployment ready
► ghcr.io/fluxcd/image-reflector-controller:v0.25.0
✔ kustomize-controller: deployment ready
► ghcr.io/fluxcd/kustomize-controller:v0.34.0
✔ notification-controller: deployment ready
► ghcr.io/fluxcd/notification-controller:v0.32.1
✗ source-controller: deployment not ready
► ghcr.io/fluxcd/source-controller:v0.35.2
► checking crds
✔ alerts.notification.toolkit.fluxcd.io/v1beta2
✔ buckets.source.toolkit.fluxcd.io/v1beta2
✔ gitrepositories.source.toolkit.fluxcd.io/v1beta2
✔ helmcharts.source.toolkit.fluxcd.io/v1beta2
✔ helmreleases.helm.toolkit.fluxcd.io/v2beta1
✔ helmrepositories.source.toolkit.fluxcd.io/v1beta2
✔ imagepolicies.image.toolkit.fluxcd.io/v1beta2
✔ imagerepositories.image.toolkit.fluxcd.io/v1beta2
✔ imageupdateautomations.image.toolkit.fluxcd.io/v1beta1
✔ kustomizations.kustomize.toolkit.fluxcd.io/v1beta2
✔ ocirepositories.source.toolkit.fluxcd.io/v1beta2
✔ providers.notification.toolkit.fluxcd.io/v1beta2
✔ receivers.notification.toolkit.fluxcd.io/v1beta2
✗ check failed
Git provider
No response
Container Registry provider
No response
Additional context
kubectl get nodes -o wide: