shabbirsaifee92 opened this issue 3 weeks ago
@shabbirsaifee92 are the subscription to the Redis image repo and the step that updates a values.yaml
with the Redis image's tag actually necessary to reproduce this?
I ask because I don't believe they should have any bearing on the step that updates the Chart.yaml
... but stranger things have happened.
I figured some clarity on this might help get to the bottom of this quicker.
No, I don't believe they are necessary; I just wanted to provide information about the setup I have.
Thanks for the info @shabbirsaifee92
Does this happen for any umbrella chart or those that rely on specific upstream Helm repositories and/or charts? In addition, does the problem potentially go away when you change the dependency to make use of an OCI Helm chart?
I'll try it with an OCI registry. Our setup always uses an umbrella chart wrapping the real Helm chart hosted on a repository.
Hey, thanks! Using an OCI registry everywhere (in the chart, warehouse, and stages) did the trick. Promotions are no longer erroring out!
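For anyone landing here with the same problem: the change amounts to pointing the dependency at an OCI registry instead of a classic chart repository. A minimal sketch, assuming the Bitnami charts published to Docker Hub (the `oci://registry-1.docker.io/bitnamicharts` path is an assumption on my part, not something confirmed in this thread):

```yaml
# Chart.yaml -- dependency switched to an OCI registry (illustrative)
apiVersion: v2
name: redis
description: A Helm chart for deploying redis
type: application
version: 1.0.0
dependencies:
  - name: redis
    repository: oci://registry-1.docker.io/bitnamicharts  # assumed registry path
    version: 19.0.0
```

With an OCI dependency, `helm dependency update` pulls the chart directly and never needs to download or parse the repository's `index.yaml`.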
Do we know the reason it happens?
OCI charts are lighter in terms of both memory and disk usage.

What the precise culprit is in your scenario, I can't tell from the information you shared. But I can imagine it has something to do with the size of the repository indexes: the index is temporarily stored on disk (which is potentially an in-memory tmpfs), and parsing the index YAML also consumes quite a bit of memory (for which I added a `--json` flag to Helm, but this has not been widely adopted).
Does your controller stay alive at the point it fails? Or did it get OOMKilled by any chance?
The controller is not getting OOMKilled. I initially thought the same, but since the container/pod is fine, I'm not sure what is killing the `helm dependency update` process.
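Worth noting: the kernel OOM killer can kill a child process inside the container's cgroup without the pod ever being reported as OOMKilled, as long as the main container process survives. A process terminated by SIGKILL exits with status 128 + 9 = 137, so checking the child's exit status is one way to spot this. A minimal sketch:

```shell
# A subshell killed with SIGKILL reports exit status 137 (128 + 9) --
# the same status you would see if the cgroup OOM killer took out
# `helm dependency update` while the controller process stayed alive.
sh -c 'kill -9 $$'
echo "exit status: $?"   # prints "exit status: 137"
```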
Going to try to reproduce this, to address the issue and/or see if Kargo itself can be more upfront about the precise problem it is running into.
@shabbirsaifee92 can you share more details about e.g. the number of Bitnami charts you have in your umbrella chart? I have thus far been unable to reproduce it, even with nearly a dozen different (Bitnami) chart dependencies.
@hiddeco
```yaml
# Chart.yaml
apiVersion: v2
name: redis
description: A Helm chart for deploying redis
type: application
version: 1.0.0
dependencies:
  - name: redis
    repository: https://charts.bitnami.com/bitnami
    version: 19.0.0
```

```yaml
# values.yaml
deployment:
  replicas: 1
  image:
    name: redis
    tag: '1.0.0'
```
This is literally the Chart and values file I am using. Not sure why it's failing in my case, though.
Checklist

- [x] `kargo version` output is included below.

Description

When trying to update the Helm umbrella chart, the `helm dependency update` process is getting killed before it can finish. Very rarely it goes through, but most of the time it gets killed.

Screenshots

If you keep trying to promote, eventually it builds and creates the PR.
Steps to Reproduce

```yaml
# values.yaml
deployment:
  replicas: 1
  image:
    name: redis
    tag: '1.0.0'
```
Logs

```
time="2024-06-05T19:26:34Z" level=info msg="began promotion" freight=d54857f5d17c5e76261d4ff116afd003e1d9064a namespace=redis promotion=staging.01hzmxw2ttka1hxphvpn42qzwg.d54857f stage=staging
time="2024-06-05T19:26:38Z" level=error msg="error executing Promotion: error executing Git-based promotion mechanisms: error executing Helm promotion mechanism: error updating dependencies for chart \"helm/redis/environments/pre-prod\": error running `helm dependency update` for chart at \"/tmp/repo-155493290/repo/helm/redis/environments/pre-prod\": error executing cmd [/usr/local/bin/helm dependency update /tmp/repo-155493290/repo/helm/redis/environments/pre-prod]: Getting updates for unmanaged Helm repositories...\n" freight=d54857f5d17c5e76261d4ff116afd003e1d9064a namespace=redis promotion=staging.01hzmxw2ttka1hxphvpn42qzwg.d54857f stage=staging
time="2024-06-05T19:26:38Z" level=info msg="promotion Errored" freight=d54857f5d17c5e76261d4ff116afd003e1d9064a namespace=redis promotion=staging.01hzmxw2ttka1hxphvpn42qzwg.d54857f stage=staging
```