It appears the latest version 17.2.3 of the bitnami/kafka chart isn't in the published index. The older 17.1.0 version is the most recent one I can find in the index.
Is this intentional, or am I misunderstanding the steps that were taken as outlined above? Why wouldn't the latest chart version be in the index, which is unfortunately only supposed to contain versions as recent as 6 months back?
What is the suggestion going forward? If we apply the workaround won't we be in the same situation again in 6 months (some of our in-use charts disappear from the current index.yaml)? I'm looking for some way to be able to update my helm charts versions in the future without changing repositories. I could set up a local service of some kind if that would work.
This is why this change cannot stand: what has been implemented is effectively a rolling window of chart breakage. Every time a chart falls outside the 6-month window, someone's build will break if it relies on the index, and the only way to solve it is for users to create a new repo entry for every single Bitnami chart they consume, pointing to the specific commit of the chart index that includes the required version. This is clearly untenable and makes the index almost useless, with upgrades becoming a silly dance of finding a commit in the history that has the version you need to upgrade to.
Please stop the damage now by reverting, so we can work out how to move forward.
There are a lot of smart ops and devops people on this thread I assume (maybe some even work for CDN companies) that can help with the issue. I can only speak for myself, but I represent and own a company and we are willing to pay money to have Bitnami support and treat containers and Helm charts as enterprise assets. Willing to pay monthly/annual fees as long as the uptime, support, (arm64 support, clearing throat) gets executed on.
There are entire companies that provide way less value than Bitnami yet literally print money. It's time we as developers stop being so cheap and support projects like Bitnami assuming they are willing to devote the engineering and capital resources required.
Point being, I'm open to breaking the free and open source model as long as the quality of Bitnami is raised and uptime and confidence is addressed.
I appreciate the challenge; however, with all due respect, a one-size-fits-all 6-month window is not a sensible approach.
Please consider
Unfortunately, we would feel compelled to consider finding a more stable alternative if this is not fixed.
Thanks!
As noted by @potiuk
This seriously undermines the usefulness of Bitnami charts for any serious project going forward.
@carrodher please realize this change is undermining trust in Bitnami, and it will be hard to rebuild if this issue is not addressed in a sensible manner. Bitnami is used because developers find it provides good value; they are the ones who make the argument to buy a license. In the absence of that value, many will just move on.
A potential long-term solution would be to generate repositories for each chart, that serve just that chart and its revisions - this would drastically reduce the required bandwidth, since users would only fetch the data for the specific charts they're interested in.
Obviously this would also be a breaking change and would require some engineering effort to implement; however, it could be done in parallel with serving the existing legacy full index in the meantime (stats would indicate when it might be sensible to deprecate the legacy index).
Alternatively, just finding a CDN that is willing to carry the traffic would be simpler for all involved in the nearer term.
Discussions around ways to manage the problem really need to happen before breaking the world though.
CloudFlare literally caches files up to 512 MB for free:
Cloudflare cacheable file limits:
Free, Pro and Business customers have a limit of 512 MB.
https://developers.cloudflare.com/cache/about/default-cache-behavior/
@mprimeaux wrote: It appears the latest version 17.2.3 of the bitnami/kafka chart isn't in the published index. The older 17.1.0 version is the most recent one I can find in the index. Is this intentional or am I misunderstanding the steps that were taken as outlined above? Why wouldn't the latest chart version be in the index, which is unfortunately only supposed to contain versions as recent as 6 months back?
This is a totally different topic. As pointed out in the CONTRIBUTING guidelines:
NOTE: Please note that, in terms of time, there may be a slight difference between the appearance of the code in GitHub and the chart in the registry.
This is caused because we test all the charts on top of different k8s clusters (TKG, IKS, AKS, GKE, ...) before publishing them into the registry.
The case you mention is already published, see
$ helm repo update bitnami
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "bitnami" chart repository
Update Complete. ⎈Happy Helming!⎈
$ helm search repo bitnami/kafka
NAME CHART VERSION APP VERSION DESCRIPTION
bitnami/kafka 17.2.3 3.2.0 Apache Kafka is a distributed streaming platfor...
Also please note the change in the index.yaml was, at least at this moment, one-time action. Entries older than a specific date (6 months ago) were cleaned in a single PR, there is not a cron job doing it.
I think you totally miss the expectations here, @carrodher, of the people who used the free charts provided by Bitnami. It would have been way better if people had known what to expect, because they would at least have had a chance to prepare their workflows for it.
This statement of yours above is the worst nightmare of anyone who would use the charts. It basically means that anyone using the repo should expect that a similar action might happen at any time, without any notice or warning, breaking their workflows.
Is this the right way to read that statement?
Is this what the "bitnami chart" free offering policy is now?
If so, it makes it useless. But maybe that is the final goal you want to achieve. It definitely looks like the goal of your actions is to make sure the "non-paying" users stop using Bitnami.
And that's fine. Nobody expects you to do that for free forever. It was Bitnami's voluntary choice to do so before, and that is how they built their strength and value (up to the moment the company was acquired) - so the "free model" was definitely working for you to build the company and make an exit.
And it's OK if you decide to stop providing good service for non-paying (often open-source) users, ideally with some warning/notice.
I think, however, it would be much more transparent and sincere if you just openly stated it. It would be much more honest towards all those "non-paying" users who helped you build the value and sell the company.
@potiuk this is a totally unrelated conversation. @mprimeaux asked about a recent version not appearing in the index.yaml. I clarified that this is not related to the truncation of the index.yaml but to how our automated test & release pipeline works.
What I said is that there is no automation truncating this file every X time. It was a one-time action executed some days ago, so a recent Helm chart version not appearing in the index.yaml cannot have been caused by the truncation of the index.yaml, since nothing is truncating this file again.
When this file needs to be truncated again in the future or if there is any automation implemented to truncate it automatically every X time, it would be communicated in advance, but this is not the case right now. Any other assumption without knowing the whole context of the previous answer is just that, assumptions.
Also please note the change in the index.yaml was, at least at this moment, one-time action. Entries older than a specific date (6 months ago) were cleaned in a single PR, there is not a cron job doing it.
This is critical information, and is not what is implied by the title (retention policy) and content (we will reduce the size of the index.yaml by removing some old versions and keeping all versions for a period of time (6 months)) of this issue, both of which imply that this would be an ongoing approach.
That this is not the case is positive on the one hand, since the ongoing impact should be less drastically bad, though "at least at this moment, one-time action" does imply that this may be automated at some stage in the future. On the other hand, this change is impacting a not-insignificant number of people today.
Uncertainty about how this may be handled going forward, the drastic action taken by implementing this now without adequate communication, and the lack of any official response to the concerns raised across all the related issues for this change are cause for significant concern for anyone who has in the past relied on the bitnami chart repo for production workloads - it is very difficult to justify placing trust in this repository going forward.
This is quite unfortunate as it undermines all the great work that has gone into making this an otherwise wonderful resource.
@carrodher Thanks for your clarification. At the time, I wasn't sure if the issue we were experiencing was related to the truncation.
When this file needs to be truncated again in the future or if there is any automation implemented to truncate it automatically every X time, it would be communicated in advance, but this is not the case right now. Any other assumption without knowing the whole context of the previous answer is just that, assumptions.
@carrodher - the assumptions are caused by Bitnami's actions and the complete lack of explanation, of a clearly stated policy, and even of an attempt to acknowledge that you understand the problems of your users. One can see that as a super arrogant approach, one that shows Bitnami cannot be trusted to value its non-paying users, even though it built its value banking on those many users and on the popularity of Bitnami built this way.
I personally see that as a rather unprofessional approach and a lack of self-reflection, a failure to recognise that your actions are really causing great problems for the users who trusted Bitnami. After those few days when Bitnami was not even able to admit "yes, we've done that hastily, we are working on solving the problem and responding to our users' concerns", we just see a complete lack of communication, empathy and self-reflection.
This is very, very bad for the brand. Even if Bitnami wanted to make the repository useless for non-paying users, they probably chose the worst possible approach. And if they did that unknowingly, without realising the consequences, it was extremely unprofessional behaviour (which does even more damage to the Bitnami brand).
Pick your poison. But I am afraid both are basically sending Bitnami's brand reputation to the bottom of the ocean - after Bitnami even failed to react to the reactions of their users.
@carrodher said
When this file needs to be truncated again in the future or if there is any automation implemented to truncate it automatically every X time, it would be communicated in advance, but this is not the case right now.
Well, that's scary. Please acknowledge you understand that truncating the index file in the absence of a policy that preserves the latest stable minor versions is just a bad engineering decision in all dimensions. Also, and perhaps even more worrying, it is totally at odds with the stated Bitnami philosophy. Indeed, this change breaks the Bitnami brand's very promise, as stated on your webpage (emphasis by me).
Bitnami makes it easy to get your favorite open source software up and running on any platform, including your laptop, Kubernetes and all the major clouds. In addition to popular community offerings, Bitnami, now part of VMware, provides IT organizations with an enterprise offering that is secure, compliant, continuously maintained and customizable to your organizational policies.
I humbly suggest splitting each chart into its own repository, which would greatly reduce the file size and make more sense from a lot of points of view :)
@nodesocket So I gave my local bitnami repo the old college try and got bogged down in a jq/yq mess. Given their additional clarification, namely that 6 months is not a rolling window but an arbitrary cutoff, I ultimately decided it would be easier to add a "bitnami-old" repo in addition to the new, slimmer "bitnami" repo and just update my helm releases accordingly. I'm using these charts through Flux, so updating the resources it was complaining about was pretty trivial for my case, but I'm sure it isn't for everyone. Sorry I don't have a better solution for you.
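For anyone in the same boat who also manages repos through Flux, a minimal sketch of what that extra source could look like; the commit SHA and the index-branch path below are placeholders, so pick a commit from the index branch that still lists the versions you need:

# Hypothetical Flux source pointing at an old copy of the index; replace <commit-sha>
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: bitnami-old
  namespace: flux-system
spec:
  interval: 1h
  url: https://raw.githubusercontent.com/bitnami/charts/<commit-sha>/bitnami

Existing HelmRelease objects can then switch their chart sourceRef from bitnami to bitnami-old for the pinned versions.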
This is an extremely disappointing move that will add a lot of unnecessary engineering effort for many teams out there. It also makes us question the trust we put into using Bitnami-provided charts. We will definitely think twice before choosing a Bitnami chart next time.
Edit: +1 for transparency on the change here; -1 for the new retention policy
This will also break GitLab's Auto DevOps feature. In case anybody runs into the same issue (it took me a few hours to track down and eventually "fix"/work around): if you see this error in your deployment job:
Error: failed to download "bitnami/postgresql" at version "8.2.1"
Then you need to update to the latest auto-deploy-image version of GitLab by setting the CI/CD variable AUTO_DEPLOY_IMAGE_VERSION to v2.28.2 in your project. The workaround pins the helm repo to use the old bitnami index.
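For reference, a minimal sketch of how that variable could be pinned, assuming you prefer committing it to .gitlab-ci.yml rather than setting it through the project's CI/CD settings UI:

# .gitlab-ci.yml - pin the Auto DevOps deploy image that carries the fix
include:
  - template: Auto-DevOps.gitlab-ci.yml
variables:
  AUTO_DEPLOY_IMAGE_VERSION: "v2.28.2"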
Even if you have to trim it down, any thoughts on at least including the latest major.minor versions for each set in the default index.yaml? Some of these artifacts that are being removed are less than a year old.
I'm afraid to post this, but I don't think the recent pruning of chart versions in the main branch manifest solved the original issue of the CDN timing out or rejecting requests:
Exception: invoke of kubernetes:helm:template failed: invocation of kubernetes:helm:template returned an error: failed to generate YAML for specified Helm chart: failed to pull chart: looks like "https://charts.bitnami.com/bitnami" is not a valid chart repository or cannot be reached: stream error: stream ID 1; INTERNAL_ERROR; received from peer
Even if you have to trim it down, any thoughts on at least including the latest major.minor versions for each set in the default index.yaml? Some of these artifacts that are being removed are less than a year old.
@datadidit Please, read the article. Nothing has been removed.
Even if you have to trim it down, any thoughts on at least including the latest major.minor versions for each set in the default index.yaml? Some of these artifacts that are being removed are less than a year old.
@datadidit Please, read the article. Nothing has been removed.
Correct, I mistyped above - thanks for the correction. But I'm still curious about possibly keeping the latest versions in the index.yaml, which would allow teams to avoid a major version change of the dependency and/or having to add an additional repo to get the old version of the chart.
@datadidit We are working to accommodate all the feedback received. We will get back here soon.
Hi everyone,
Me and the rest of the Bitnami team sincerely apologize for the issues you experienced related to the decision to stop indexing older versions of Helm charts in the main index.yaml file.
The main issue that triggered that action is that the index.yaml is getting thousands of terabytes of download traffic per month; the AWS CDN itself was seeing the number of errors increase exponentially every week, and that rate was so high that it was also affecting our own release pipelines.
This decision was risky and was not communicated properly or with enough anticipation; sorry about that. We underestimated the impact of de-indexing older versions, even though all the versions continue to be publicly available.
In order to move on, we will continue with a policy of not indexing versions older than 6 months in the official Helm chart repository https://charts.bitnami.com/bitnami. This index will continue to be available via a CDN to speed up transfers, and we need a de-listing policy to keep it small and to avoid high error rates.
We will work on implementing and maintaining a new full Helm chart index file that Bitnami will keep up-to-date with all the releases. This index will be available on GitHub itself, not on the CDN, so users who want to keep using older versions, and for whom size or speed is not critical, can continue using a full index.yaml file or can easily mirror it. All the Helm chart tgz files will continue to be publicly available as always.
On the other hand, we are working on fully supporting Helm charts in OCI registries. That is fully supported by Helm since version 3.8.0. It is still early to know what the next steps would be, but we will share that info whenever we get clarity about it.
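For context, the client-side flow with OCI-hosted charts (available since Helm 3.8.0) would look roughly like the commands below; the registry path is purely a placeholder, since no Bitnami OCI location had been announced at this point:

# Placeholder registry path, for illustration only
helm pull oci://registry.example.com/charts/kafka --version 17.2.3
helm install my-kafka oci://registry.example.com/charts/kafka --version 17.2.3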
Thanks for at least acknowledging the issue and establishing the policy. At least we know what to expect. Looking forward to the index that will be maintained, hopefully with some solid maintenance policies too.
@beltran-rubo thanks for the update and clarification. Can you provide documentation/guide on how we can self-host? We may just want to roll our own mirror of Bitnami charts using our own CDN provider.
This index will be available on GitHub itself, not on the CDN, so users who want to keep using older versions, and for whom size or speed is not critical, can continue using a full index.yaml
We are forced to use the old version of the index.yaml, as we have very specific version constraints for RabbitMQ, Redis, and PostgreSQL that cannot be upgraded (at least not easily). However, speed and, more importantly, consistency and reliability matter, as these are production clusters. That is why we are looking to self-host a mirror of Bitnami charts.
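In the absence of official self-hosting docs, a rough sketch of such a mirror using helm repo index to rebuild the index locally; the tarball names and the internal URL below are made-up examples, not a recommendation:

# Fetch only the chart versions you actually pin (names/versions are examples)
mkdir -p mirror && cd mirror
for tgz in postgresql-10.16.2.tgz redis-16.8.9.tgz rabbitmq-8.32.2.tgz; do
  curl -fLO "https://charts.bitnami.com/bitnami/${tgz}"
done
# Rebuild an index.yaml pointing at wherever the mirror will be served from
helm repo index . --url https://charts.example.internal/bitnami
# Serve the directory from any static host or CDN, then:
#   helm repo add bitnami-mirror https://charts.example.internal/bitnami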
@beltran-rubo thanks for the response. I would have liked to see more information on the status and progress of the Helm client supporting compression, because that seems like a very obvious route forward.
If the compression fix shipped in a Helm patch release (v3.9.1), Bitnami could serve the full compressed index from a new endpoint (this has got to be very easy), e.g. https://charts.bitnami.com/v2/bitnami, for updated clients.
Then, at the very least, you can respond to your clients that you're trying to follow the semver versioning convention/contract, with the fix for the bug caused by the demands on the CDN being available as a patch release.
You can see my numbers above on the compression gain, and no matter what this is a solid improvement without any breakage on top of the truncation (smaller size, more data). As others have pointed out too, Cloudflare has transparent HTTP compression; this could be enabled seamlessly, without duplicating the endpoint, if the client (which currently doesn't do compression) provided the appropriate HTTP Accept-Encoding headers.
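A quick way to sanity-check what compression would buy, assuming the CDN negotiates gzip when asked (the helm client sends no such header today, so the first number is what it actually transfers):

# Bytes transferred without and with on-the-wire compression
curl -so /dev/null -w 'identity: %{size_download} bytes\n' https://charts.bitnami.com/bitnami/index.yaml
curl -so /dev/null -w 'gzip: %{size_download} bytes\n' -H 'Accept-Encoding: gzip' https://charts.bitnami.com/bitnami/index.yaml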
As a last note, if all of that falls on deaf ears or isn't possible, I think at least changing the pruning policy so that the last stable release of a chart isn't pruned, regardless of age, is something that should be addressed. That people are complaining about what happened looks like a bug, even if Bitnami does prefer the solution they've created.
@beltran-rubo thanks for the response. I would have liked to see more information on the status and progress of the Helm client supporting compression, because that seems like a very obvious route forward.
If you read the upstream Helm issue, the problem is that various hosts send gzipped archives with mime types that cause them to be decoded on the wire, resulting in them failing to be unpacked locally, since they've been pre-decompressed.
Regardless of how this issue has been mismanaged here, bitnami has no control over upstream Helm, and in any case, this does not seem like an easy problem to solve, as it's an interaction between the Go stdlib and specific hosting providers, neither of which are likely to change for this specific problem.
The main issue that triggered that action is that the index.yaml is getting thousands of terabytes of download traffic per month; the AWS CDN itself was seeing the number of errors increase exponentially every week, and that rate was so high that it was also affecting our own release pipelines.
Have you at all considered an alternative CDN, as has been suggested a few times now?
In order to move on, we will continue with a policy of not indexing versions older than 6 months in the official Helm chart repository https://charts.bitnami.com/bitnami. This index will continue to be available via a CDN to speed up transfers, and we need a de-listing policy to keep it small and to avoid high error rates.
A rolling 6-month chart index just means that things will be constantly broken for users, for reasons already explained, and that should be obvious.
We will work on implementing and maintaining a new full Helm chart index file that Bitnami will keep up-to-date with all the releases. This index will be available on GitHub itself, not on the CDN, so users who want to keep using older versions, and for whom size or speed is not critical, can continue using a full index.yaml file or can easily mirror it. All the Helm chart tgz files will continue to be publicly available as always.
Then everyone who needs a stable chart index will be forced to use this version, and you've just moved the CDN problem to Github.
These are band-aids, not solutions.
@pdf I reviewed that issue more carefully, and I did miss some of that the first time I went through it. You may be interested in the response to it here: https://github.com/helm/helm/issues/2916#issuecomment-1148087840 I don't think this was an issue with the core Go stdlib. I think helm's disabling of on-the-wire compression is probably a mistake and a result specifically of Atlassian's Bitbucket implementation. But I didn't realize that Bitnami wasn't the driving force behind helm, so your point remains. Helm should undo the patch rather than work around the broken Atlassian implementation (it's not their problem), and then transparent compression by Cloudflare should just work. But I understand now that, in light of that bad patch (my opinion) to work around a Bitbucket bug, and without control of Helm, there is nothing Bitnami can really do regarding compression.
Hi, thank you for keeping this thread alive. Obviously we have been hit by this change too. It is a bit surprising that such a small change can cause such disruption. For production we always package the Helm chart and its dependencies, so for us it is causing issues only in our internal CI/CD pipelines.
With the comments from @beltran-rubo, I suspect what will be noticed is that no-one (hypothetically) will be using the official Helm Chart repo url (https://charts.bitnami.com/bitnami) but instead use the full version hosted on GitHub. Not sure why there is a need for a CDN repo that only contains releases from the past six months. Unless you update your Helm Chart dependencies every 4 months, using the CDN repo could be very erroneous and disruptive.
Unless you update your Helm Chart dependencies every 4 months, using the CDN repo could be very erroneous and disruptive.
Yep. As I explained before, the current "main index" policy makes it essentially useless for any serious use.
I really hope Bitnami will keep their promise and introduce (as @beltran-rubo mentioned in https://github.com/bitnami/charts/issues/10539#issuecomment-1147759983) a usable solution:
We will work on implementing and maintaining a new full Helm chart index file that Bitnami will keep up-to-date with all the releases.
I really hope we all collectively gave Bitnami/VMware/Broadcom a lot to think about. They broke a lot of the use of their free charts through rushed and not-well-thought-out decisions, without realising the consequences. They have at least finally acknowledged that - it took 70+ comments and more than 4 days, but they finally did acknowledge they screwed up.
They could have approached it differently, but hopefully they have learned and will approach it differently in the future, and make an offering that will keep their non-paying users happy (or not, if they make such a business decision - we cannot force them to).
I think we should give them some breathing space and simply observe their efforts to build their reputation back. It will be a long process before the community trusts them again, but well, let's give them a chance.
I am for one closely looking at what is happening here to see how they will manage the situation - the "popcorn" is ready - so far it's been almost like a thriller movie to watch :D
we are working on fully supporting Helm Charts in OCI registries
@beltran-rubo is there an issue we can track? that would be amazing.
We've been using OCI chart images since the beta for our application charts; they are great.
FYI, your suggested workaround doesn't seem to work with a lot of the tools in the Helm ecosystem (example: helm-releaser & its official GitHub action helm/chart-releaser-action) - nvm, it does.
See this post for more information https://github.com/helm/helm/issues/2916#issuecomment-1148107258 The results are predictable, from the patch notes.
And the result shows a significant (~14x) reduction in transferred traffic: 1.41 MB vs 19.72 MB.
Because this enables on-the-wire compression for all package-list endpoints, the question here is whether or not it would be OK to enable on-the-wire compression in Cloudflare, which should only perform on-the-wire compression for clients that request it. When enabled, this would provide an immediate benefit for newer, patched Helms, but older Helm clients would still be downloading the uncompressed version.
I have no idea how often people upgrade their Helm.
@EvanCarroll nice to see the gzip compression PR, but it would require updating the helm binary on all hosts, correct? Not really a solution for us, as our Helm version has to stay pinned at 3.8.2 because 3.9.0 breaks for us on EKS. See https://github.com/helm/helm/issues/11007.
@nodesocket benefits would rely on each user updating their helm version, so until a critical mass of users updated their helm install, the existing CDN would continue to fail regularly.
Your problem with EKS is solved by updating the client auth version in your kubeconfig, and has nothing to do with Helm specifically (kubectl 1.24 removed support for the deprecated v1alpha1 version): https://github.com/helm/helm/issues/10975#issuecomment-1132139799
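For anyone hitting the same error, the gist of that fix is bumping the exec plugin's API version in the kubeconfig user entry; a sketch with placeholder names, assuming the authenticator binary you use already supports v1beta1:

# kubeconfig excerpt - only the apiVersion line changes (v1alpha1 -> v1beta1)
users:
  - name: my-eks-cluster            # placeholder
    user:
      exec:
        apiVersion: client.authentication.k8s.io/v1beta1
        command: aws-iam-authenticator
        args: ["token", "-i", "my-eks-cluster"]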
@pdf we don't use awscli, but we use aws-iam-authenticator. Our install script currently simply does:
curl -o aws-iam-authenticator https://amazon-eks.s3.us-west-2.amazonaws.com/1.21.2/2021-07-05/bin/linux/amd64/aws-iam-authenticator
sudo chown root:root aws-iam-authenticator
sudo chmod 755 aws-iam-authenticator
sudo mv aws-iam-authenticator /usr/local/bin
@nodesocket this is not the place to troubleshoot that issue further - there is a solution, you'll need to find it elsewhere though.
@nodesocket this is not the place to troubleshoot that issue further - there is a solution, you'll need to find it elsewhere though.
I appreciate that we can't solve every individual problem on this issue thread, but just kicking the can saying there's a solution somewhere without any pointers is not helpful. If you don't know if there's a solution, say so. There's no shame in not knowing everything, for sure. If you do know of a solution, provide a link so others in the same situation can find a path.
I already linked to the generalised solution in my previous comment. Making that work with the helper or whatever may require additional research, this is not the place for that discussion.
I already linked to the generalised solution in my previous comment. Making that work with the helper or whatever may require additional research, this is not the place for that discussion.
Fair enough. I didn't read enough of the thread history on this particular bit and jumped at the apparent lack of empathy/understanding given how this whole thing started. My fault, I apologize.
Not really a solution for us, as our Helm version has to stay pinned at 3.8.2 because 3.9.0 breaks for us on EKS. See https://github.com/helm/helm/issues/11007.
Just to be clear, if the Helm devs think this is a bug fix, there is no reason why it has to be stuck in 3.9. There is no reason why it can't go into 3.8 and produce a 3.8.3 too. It seems like a bug fix to me: it's not new functionality, we know what patch broke it, and we have a fix for it. A 3.8.3 with compression on the package list would solve your problem. This is a pretty normal ask too.
The real question here is, if Helm upstream obliged and produced a 3.8.3, whether Bitnami/Cloudflare would find it acceptable to have an upgrade path for people on 3.8.2 while not being able to force them into upgrading.
I'm not getting into the adoption of 3.9.0 either here, intentionally: I agree that conversation is probably better suited elsewhere.
Me and the rest of the Bitnami team sincerely apologize for the issues
Thanks for recognizing this was a bad move. Good to know bitnami is listening.
However please note that the issue persists.
a) The workaround is not really practical and just shifts the problem from your CDN to GitHub, plus(!) it keeps many workflows broken. Not helping, and bound to cause more frustration.
b) Your "comms fix" is not really a fix either, in part because the comms are not the key to what is amiss here, but more importantly because it just reiterates the same bad approach. We are not criticizing the words, but what they mean.
c) Please acknowledge the real issue: what may look like a smart move on your end (truncating the index) is a major issue for most of your users, because it totally breaks basic and long-standing assumptions (i.e. that the index is immutable, as it should be) and replaces them with a broken-by-design policy that is bound to just continue the misery forever. Not a good premise.
In order to move on, we will continue with a policy of not indexing versions older than 6 months in the official Helm chart repository
As stated by @potiuk and many others, that is not a solution. A solution should solve the problem, not sugar-coat the wrong approach.
Here's my 5 cents. I may be wrong, but hear me out.
IMHO the better solution is to remove intermediate chart versions, say for all but the last three minor versions of each chart. So the index would continue to have:
- just one entry, namely the latest, for each previous chart version
I did not run the numbers, but by my guesstimate this should reduce the index file massively. It still leaves users with the - manageable - caveat of having to update to the latest minor version of their respective charts.
Any forced upgrade is a hassle, but it would at least allow users to upgrade their charts in a controlled manner, rather than having to scramble for some time-consuming index replacement (which in many cases will not solve the issue, e.g. due to chart interdependencies referring to the official index), or to upgrade all charts to the latest version and keep doing this every so often (which may not even be possible due to pinned versions of the underlying software; in any case, upgrading charts can be disruptive in unexpected ways. Think dragons).
The risk to Bitnami is that once people realize they have to engage in some major engineering activity to get out of this, with no sensible approach offered by Bitnami, people will start looking elsewhere and may not come back.
I wish we could stop talking about better ways to truncate an index file. No one does this. Perl does pretty much the same thing with CPAN. You can download their index file: it's 2.3 MB compressed, with 249,747 packages, never truncated. Even if you need more meta information, it shouldn't be that much different in an index.
Just for the record: other providers are dealing with this by having a separate "helm repository" per application, rather than every application mangled together in one repository, so the index can be much smaller and not everybody is syncing everything they don't even need.
For example: https://kubernetes.github.io/ingress-nginx/index.yaml
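As a concrete illustration of that per-application model, consuming the example repository above is just the normal per-repo workflow:

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update ingress-nginx
helm search repo ingress-nginx --versions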
Just for the record: other providers are dealing with this by having a separate "helm repository" per application, rather than every application mangled together in one repository, so the index can be much smaller and not everybody is syncing everything they don't even need.
For example: https://kubernetes.github.io/ingress-nginx/index.yaml
But then, we would need to helm repo add bitnami-rabbit ... for each and every chart? That seems like an anti-pattern.
Just for the record: other providers are dealing with this by having a separate "helm repository" per application, rather than every application mangled together in one repository, so the index can be much smaller and not everybody is syncing everything they don't even need. For example: https://kubernetes.github.io/ingress-nginx/index.yaml vs: https://charts.bitnami.com/bitnami/index.yaml
But then, we would need to helm repo add bitnami-rabbit ... for each and every chart? That seems like an anti-pattern.
I think generally speaking most people are only using a handful of charts?
It’s not much different than adding another chart/repo that isn’t maintained by bitnami?
@mattb18 think about how things like KubeApps work (and also synaptic/apt, yum/dnf, and every other similar service). They're based around the idea that there is one repo with an index of all the packages.
But then, we would need to helm repo add bitnami-rabbit ... for each and every chart? That seems like an anti-pattern.
This is quite common. If you look at any of the large community-driven chart aggregators, this is how they're built.
@mattb18 think about how things like KubeApps work (and also synaptic/apt, yum/dnf, and every other similar service). They're based around the idea that there is one repo with an index of all the packages.
They're absolutely not. They aggregate multiple repositories into a pool of applications. Take a look in /etc/apt/sources.list, or /etc/yum.repos.d/. Or indeed, more pertinently, helm was specifically designed to use multiple repos.
As reported in this issue (https://github.com/bitnami/charts/issues/8433), lately we are facing some issues with the index.yaml associated with the Bitnami Helm charts repository.

Current situation

After some investigation, it seems the root cause is related to CloudFront reaching some limits due to the volume of traffic when serving the index.yaml. This index.yaml contains the whole Bitnami Helm charts history (around 15300 entries), producing a pretty fat 14MB file. Given the size of the file and the volume of traffic, thousands of terabytes of download traffic per month are being generated.

One of the alternatives considered was the use of compression at CloudFront; in that case, the solution doesn't work since compression is not used by the Helm client (helm) itself (see https://github.com/helm/helm/pull/8070), so it doesn't solve the reported issue.

Mitigation
As the first line of action, we will reduce the size of the index.yaml by removing some old versions and keeping all versions for a period of time (6 months).

⚠️ Please note this action is not removing/deleting any Helm chart: packaged tarballs (.tgz) won't be removed; this action is only affecting the index.yaml used to list the Helm charts. Previous versions of the index.yaml can be used to install old Helm charts.

Please note Helm chart tarballs (.tgz) won't be removed; this action is only affecting the index.yaml.

Result
Applying this approach (https://github.com/bitnami/charts/pull/10530), we obtained the following results: a reduced 3.5MB index.yaml.
.🔧 Workaround for previous versions
The index.yaml is stored in this repository under the index branch; users should be able to use any commit in that branch to add a previous version of the index.yaml with helm repo add.
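A sketch of that workaround; <commit-sha> is a placeholder for a commit on the index branch that still lists the versions you need, and the bitnami/ path assumes the index.yaml sits under that directory in the branch:

# Add a second repo pinned to an old copy of the index
helm repo add bitnami-old https://raw.githubusercontent.com/bitnami/charts/<commit-sha>/bitnami
helm repo update bitnami-old
helm search repo bitnami-old/postgresql --versions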