Thank you for the report. Very sorry for the inconvenience.
The details you've provided, the bisect and the other investigation you've done, are helpful and really make a difference. 🙇 I have created a prerelease build with the PR in question reverted; can you please confirm whether it resolves the issue?
The image is at `docker.io/kingdonb/flux:revert-pr-3381-1be3ff15`, or you can build it from the latest commit on the `revert-pr-3381` branch.
I will take a look later and try to understand the details, but for now my main concern is to determine if this report is correct, then revert the bad PR and get a new release candidate ready for 1.23.1. We should be able to publish it early this week if the report can be confirmed.
Thanks for the quick response.
I've also locally tried using both kustomize 3.8.4 and 3.8.10 with flux 1.22.2 to eliminate that as the potential culprit (as it was also upgraded in flux 1.22.2), and so far have seen the issue with either kustomize version
Following up on this note from the original description ^^^
I've continued testing both the original branch and the new patch against different kustomize versions, since reproducing the issue consistently in a local setup was still difficult, and I have confirmed that kustomize v3.8.8+ is causing problems for larger checkouts. They added a hard-coded timeout of 27s to all git calls used in remote loading, which seems closely related to the issue I reported.
See:
The original patch/PR I referenced may be a red herring here, at least for the specific issue we are encountering. I'll comment with additional details as I have time to test.
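To illustrate the failure mode described above, here is a minimal, hedged sketch (not kustomize's actual code) of how a hard-coded timeout around git operations can kill the clone of a larger repository before it finishes; the helper name and repository URL are made up for the example.

```go
// Illustration only: mimics the behaviour described above, where a fixed
// ~27s deadline on git calls aborts checkouts that legitimately need longer.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// cloneWithFixedTimeout is a hypothetical helper, not part of kustomize.
func cloneWithFixedTimeout(repoURL, dest string) error {
	// The deadline is independent of repository size, so a large checkout
	// that needs more than 27s is killed mid-clone.
	ctx, cancel := context.WithTimeout(context.Background(), 27*time.Second)
	defer cancel()

	cmd := exec.CommandContext(ctx, "git", "clone", repoURL, dest)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("git clone failed (possibly killed by the fixed timeout): %w: %s", err, out)
	}
	return nil
}

func main() {
	// Hypothetical repository URL, used purely for the example.
	if err := cloneWithFixedTimeout("https://example.com/some/large-repo.git", "/tmp/large-repo"); err != nil {
		fmt.Println(err)
	}
}
```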
Thank you for following up with more details. I've linked a few issues to this one for visibility. If I read your last note correctly, the issue would be solved by reverting the kustomize version included here to the kustomize v3.8.7 release.
That's very convenient, as we have at least one user who initially requested at least v3.8.5, in #3457 (which has been renamed since it was opened, after much debate).
There is now `docker.io/kingdonb/flux:revert-kustomize-3-8-10-b30133a2`, which implements #3504. If you can, please test it and let me know whether it resolves your issue.
Thanks, I'll have this on my list for tomorrow (7/15) to review and confirm whether it resolves the issue.
Built and deployed the above patch, which has been running for 8+ hours without any `/tmp` dir growth issues. kustomize 3.8.7 seems to be operating as expected.
Thank you for the confirmation! I am currently wrestling with e2e tests and will try to get at least a release candidate out before the weekend. You should expect to see Flux 1.23.1 some time next week with this change included.
We hit some backwards compatibility issues going from 1.22.0 -> 1.23.0, which seem to be kustomize-related. Reverting back for now, but anxious for the 1.23.1 release.
@aleclerc-sonrai @mmcaya The PR that is expected to resolve this has just been merged to master. `fluxcd/flux-prerelease:master-f9fe2abb` will be ready in a few seconds and should resolve this issue. I expect the 1.23.1 release to be out tomorrow, but since there are no chart changes you can upgrade to this image in advance if you want to resolve this faster by using a prerelease.
We've already had one confirmation from @mmcaya that this PR fixes the issue (thank you!).
@aleclerc-sonrai, if you can, please try out the prerelease and let us know, in case there was a different issue in your case.
Thanks everyone for your patience.
The Helm chart has been updated in #3510.
Describe the bug
After upgrading to flux 1.22.2, k8s clusters immediately saw a spike in flux pod disk consumption due to `/tmp` not being properly cleaned up after sync loops involving `kustomize build` commands in our `.flux.yml`. Disk space was quickly consumed, leading to flux pod evictions due to disk pressure.
Initial investigation suggests the update from https://github.com/fluxcd/flux/pull/3381 is errantly or prematurely cancelling the sync loop, leaving orphaned data in the `/tmp` directory that is typically cleared by `kustomize` directly when it completes execution. I haven't traced through the entire code execution path, but the context used during the calls to `kustomize build` (or whatever is in the generators section of the `.flux.yml`) via `execCommand` (link below) already had a context timeout using the same sync timeout settings as the PR noted above, which now means the same context was wrapped with a timeout twice.

See: https://github.com/fluxcd/flux/blob/master/pkg/manifests/configfile.go#L492
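As a hedged illustration of the mechanism described above (a minimal sketch, not Flux's actual code), the snippet below shows how cancelling the context passed to a generator command kills the child process before its own cleanup runs, leaving its temporary files behind; the directory name and shell commands are made up for the example.

```go
// Sketch only: when the context given to exec.CommandContext expires, the
// child is killed with SIGKILL, so any cleanup the child would have done
// (like kustomize removing its own temp dirs) never happens.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Parent context already carries the sync timeout.
	parent, cancelParent := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancelParent()

	// Wrapping it again, as described above, means whichever deadline or
	// cancel fires first terminates the generator command.
	child, cancelChild := context.WithTimeout(parent, 5*time.Second)
	defer cancelChild()

	// Stand-in for a generator: creates a scratch dir, works, then cleans up.
	cmd := exec.CommandContext(child, "sh", "-c",
		"mkdir -p /tmp/demo-generator && sleep 10 && rm -rf /tmp/demo-generator")
	if err := cmd.Run(); err != nil {
		// On timeout the shell is killed before `rm -rf` runs,
		// so /tmp/demo-generator is orphaned.
		fmt.Println("command terminated early:", err)
	}
}
```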
The issue is not present when reverting to `1.22.1`.

I've also locally tried using both kustomize 3.8.4 and 3.8.10 with flux 1.22.2 to eliminate that as the potential culprit (as it was also upgraded in flux 1.22.2), and so far have seen the issue with either kustomize version.
To Reproduce
Steps to reproduce the behaviour:
Sample manifests to reproduce the issue:
- `.flux.yaml`
- `bases/aws-load-balancer-controller/kustomization.yaml`
- `overlays/aws-load-balancer-controller/kustomization.yaml`
Expected behavior
Any generator commands should not prematurely exit from a context timeout tied to `cancel` calls of parent functions, and should instead rely on the context timeout and error handling already present in `execCommand`.
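Below is a minimal sketch, assuming nothing about Flux's actual implementation beyond what is described above, of the pattern this implies: a single timeout layer owned by the caller of the generator command, plus caller-owned scratch space that is removed even if the command is killed. The function name, directory prefix, and timeout value are hypothetical.

```go
// Sketch only: one context timeout around the generator, and a deferred
// RemoveAll on a caller-owned temp dir so orphaned data cannot accumulate
// even when the command is killed before it can clean up after itself.
package main

import (
	"context"
	"fmt"
	"os"
	"os/exec"
	"time"
)

// runGenerator is a hypothetical helper, not Flux's execCommand.
func runGenerator(ctx context.Context, command string) ([]byte, error) {
	tmpDir, err := os.MkdirTemp("", "generator-")
	if err != nil {
		return nil, err
	}
	// Runs whether the command exits cleanly or is killed by the context.
	defer os.RemoveAll(tmpDir)

	// Single timeout layer, applied once by the caller.
	ctx, cancel := context.WithTimeout(ctx, time.Minute)
	defer cancel()

	cmd := exec.CommandContext(ctx, "sh", "-c", command)
	// Point the child's temp files at the caller-owned directory.
	cmd.Env = append(os.Environ(), "TMPDIR="+tmpDir)
	return cmd.CombinedOutput()
}

func main() {
	out, err := runGenerator(context.Background(), "kustomize build .")
	if err != nil {
		fmt.Println("generator failed:", err)
	}
	fmt.Println(string(out))
}
```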
As of now, we are pinned to version 1.22.1 until this issue gets patched, or until we are able to start our migration to Flux v2.

Logs
Sample `tmp` dir output from a live `1.22.2` instance:

Right after startup
After initial sync (~4 minutes later), already containing orphaned data
30 Minutes later with orphaned data growth
For comparison, here is data from flux 1.22.1 working as expected on the same git repo, with no data being orphaned:
Initial Startup
~30 minutes later
~50 minutes later
Additional context