michmike opened this issue 4 years ago
/cc @pickledrick
Related #1190
There is a contour chart in the helm/charts repo but it uses an older image and the deployment specs don't have the Envoy DaemonSet. Maybe contributing to that one instead of creating something new would be better.
@stefanprodan I was under the impression that helm was moving away from this central chart repository model. If this is not the case then it is my overwhelming preference to continue to maintain the original chart.
@pickledrick is the maintainer of the chart in the helm repo. I've discussed this with him and we agreed that it makes sense to host the Contour chart in the projectcontour GitHub org, given that the helm charts repo is deprecated.
I think that the next steps here are to agree on the broad outline of a transition plan and talk about how to manage the YAML without too much duplication.
Hey y'all, you might benefit from Velero's transition plan: https://github.com/vmware-tanzu/helm-charts/issues/2
has the new git repo been created?
After submitting https://github.com/projectcontour/contour/pull/2374 I am familiar enough to do a first PR with a basic chart if community contributions are ok at this stage
@pickledrick Are you able to work with @cortopy on this? I'm happy to shepherd and do any repository admin that's needed.
any updates on this?
any updates on this?
Unfortunately no. We need a contributor to take this on. I'll be happy to shepherd changes from anyone who wants to tackle this.
Since there's some interest in this, I can share some work I've been doing here. It's still based on 1.4.0, but I don't expect major changes. I'm still in the process of making a public chart repository for our company, so for now it's just the code here:
https://github.com/skeeterhealth/helm-charts/tree/master/charts/contour
@giorgioazzinnaro If you would like to help with an official helm chart for the project, I am happy to shepherd this work. I have a couple of other people (@georgegoh, @robinfoe) interested in helping too.
What I think needs to happen is:
Absolutely, I'd be glad to help! I think we could start from deciding in which repo we want to work so that we can bring all the discussions and code changes there? Did I understand correctly that we could be using the VMware Tanzu Helm charts repo for this, or would you rather have one dedicated to Contour? (I'm unsure what the project governance looks like..)
If we need a repo, I think that it’s best to create it within the projectcontour GitHub org. I can do that for you when you are ready.
@giorgioazzinnaro are you on kubernetes.slack.com? It would be great to continue the conversation there and get this onboard officially.
FYI Bitnami maintains a Contour Helm chart here https://github.com/bitnami/charts/tree/master/bitnami/contour and they build their own container images https://github.com/bitnami/bitnami-docker-contour
I would personally prefer to use the official Contour/Envoy container images, but that chart seems to have all the options one would need to deploy Contour.
this is very cool, i didn't know that @stefanprodan. @jpeach, if the bitnami chart is adequate and sufficient, maybe we can start with that and over time help the bitnami team make that helm chart better (if that's necessary)
I use the bitnami chart and override the images and tags without issues. But I had to sync some changes back to their repo:
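For anyone wanting to do the same, the override might look roughly like this. The value keys below are assumptions (Bitnami charts usually expose per-component image registry/repository/tag values) and the tags are just examples, so check the chart's values.yaml for the real names before relying on this sketch:

```shell
# Sketch only: the value keys (contour.image.*, envoy.image.*) and the
# tags are placeholders, not verified against the current Bitnami chart,
# and this obviously needs a cluster with Helm configured.
helm upgrade --install contour bitnami/contour \
  --set contour.image.registry=docker.io \
  --set contour.image.repository=projectcontour/contour \
  --set contour.image.tag=v1.8.1 \
  --set envoy.image.registry=docker.io \
  --set envoy.image.repository=envoyproxy/envoy \
  --set envoy.image.tag=v1.15.0
```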
It would be nice if someone kept the manifests here in sync with the helm charts whenever changes occur.
What I like about the container images from bitnami is that they include some debug tools, since they are based on minideb/debian. Official image variants with at least busybox in them would be nice, like the fluent-bit folks do for example: for every version of their images there is a -debug variant, which helps with troubleshooting when needed.
@mkilchhofer I forked the bitnami repo recently and synced up the charts with the canonical manifests as you pointed out.
It's here https://github.com/georgegoh/charts/commits/master, and I'll look into a PR later this week.
@mkilchhofer thanks for the suggestion about debug images, could you log a separate issue for that please? It seems pretty reasonable to me, but I'd like everyone to have a chance to talk about it.
Done :) https://github.com/projectcontour/contour/issues/2789
Cool 👍 Maybe I then could close bitnami/charts#3381.
Hi @mkilchhofer I think the changes you made are very close to what I have in my repo. A few differences:
- I removed the contour-leaderelection role in templates/rbac.yaml, as it's no longer needed.
- General matching of the rbac.yaml to the RBAC defined in this project.
- I also removed the CRDs from the templates directory, since crd-install is no longer supported in Helm 3 (does bitnami require compatibility with Helm 2?).
I could create a new PR after yours is merged to get those items in. What do you think?
@georgegoh please take into account that Helm does not support CRD upgrades when they are placed in the crds dir. Upgrading will be broken if Contour comes with a CRD change, since Helm v3 ignores the crds dir on upgrades.
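To illustrate the two placements being discussed, here is a sketch of a Helm 3 chart layout (the file names are made up):

```
contour/
├── Chart.yaml
├── values.yaml
├── crds/              # applied by `helm install` only; `helm upgrade`
│   └── crds.yaml      # ignores this directory, so CRD changes are lost
└── templates/
    ├── rbac.yaml      # templated and upgraded normally
    └── ...
```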
I think they want compat with Helm 2, yes. But I am not a member of bitnami ;) - you need to ask them to be really sure.
Sure, I'll then switch the PR back from "draft" to "Ready for review". In the meantime I went through the CRD changes again and dropped all the *.contour.heptio.com CRDs. Note that helm does not touch CRDs when they already exist. Maybe you could mention this behavior, with a reference to your Upgrading Contour guide, in the README.md of your PR?
How do helm-based deploys typically handle this, then? Do operators need to upgrade CRDs manually?
https://helm.sh/docs/chart_best_practices/custom_resource_definitions/ 🤔
From what I've seen reviewing that link, @jpeach, and helm/helm#5781, it looks like Helm will do minimal CRD management. To that end, I suggest we consider adding a single URL for just the CRDs for a given version of Contour, similarly to how we have the quickstart URL https://projectcontour.io/quickstart/contour.yaml. Maybe https://projectcontour.io/install/crds.yaml or something?
This would only be for CRDs, because of the ordering requirements for them. This means that the Helm chart will include the most recent version at the time it was cut, but upgrade instructions will include installing the CRDs from the quick location first.
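From the operator's side, the proposed flow would be something like the following sketch. The crds.yaml URL is the hypothetical one suggested above (it doesn't exist yet), and the chart reference is a placeholder:

```shell
# Hypothetical upgrade procedure -- the CRDs URL and the chart name are
# placeholders, and this needs a live cluster to actually run.

# 1. Apply the CRDs for the target Contour version first, since Helm v3
#    ignores the crds/ directory on upgrade.
kubectl apply -f https://projectcontour.io/install/crds.yaml

# 2. Then upgrade the chart itself.
helm upgrade contour <contour-chart>
```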
I think that there's benefit in doing this.
For Contour, we basically have an ongoing release cycle. The problem we need to solve is to make it easy and reliable for operators to stay on the upgrade cycle while also being able to customize their installs (at least a bit). So if we say that the upgrade is "kubectl apply the latest CRDs, then upgrade helm", that's not the best experience IMHO. You won't be able to just helm install on a new cluster, and helm upgrade becomes more complex and risky.
I meant that we include the current version of the CRDs in the Helm chart, but, because of the Helm upgrade problem, we will need to have the upgrade instructions be "Reinstall the CRDs, then do helm upgrade". This is because once the CRDs are present at a particular apigroup and version - i.e. projectcontour.io/v1 - Helm won't reinstall them, because of upgrade concerns.
We can be confident that reinstalling the CRDs is okay in this case, because we control the changes, and will only be adding things (by the GA API guarantee). I don't see any way around something like this for Helm, because of its CRD management behavior.
Another way to update the CRDs could be to do it programmatically inside the contour source code. I am not sure if this is good practice, but I think I have already seen this in some other applications.
I've been exploring some possibilities for the CRD upgrade in the contour helm chart, and one way forward could be to separate out the CRDs into a sub-chart, and then use that as a dependency for the main contour chart.
As a test, I created a contour_crds sub-chart with the CRDs in its templates directory. After an upgrade, kubectl get -o yaml crd shows updated annotations in the updated CRD.
Given that there's the GA API guarantee (only adding to the CRD instead of taking stuff away), this could be a possible way to handle CRD upgrades that works in both Helm 2 and 3.
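For anyone wanting to reproduce the experiment, the parent chart's dependency declaration might look roughly like this. Only the contour_crds name comes from the test above; every other value is a placeholder, and with Helm 2 the same list would live in requirements.yaml instead of Chart.yaml:

```yaml
# Chart.yaml of the parent contour chart (sketch).
apiVersion: v2
name: contour
version: 0.1.0
dependencies:
  - name: contour_crds          # sub-chart holding the CRDs in templates/
    version: 0.1.0
    repository: "file://../contour_crds"
```

Because the CRDs sit in the sub-chart's templates directory rather than a crds directory, Helm treats them as ordinary resources and upgrades them, which is what the test above relied on.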
Hmm, that sounds pretty promising @georgegoh!
The corner cases I think are worth testing here are (doesn't seem like there could be any problems, but I've been surprised before):
- jumping a few versions in one go for the CRDs
- uninstall
- we don't really support downgrading of the CRDs, but what would happen if someone downgraded the Contour chart? Would the CRDs want to downgrade as well?
Yup, I expect that contour-operator might take that approach of updating the CRDs programmatically.
@youngnick sorry it took so long to get back.
The corner cases I think are worth testing here are (doesn't seem like there could be any problems, but I've been surprised before):
- jumping a few versions in one go for the CRDs
I think this should work fine as the objects get replaced.
- uninstall
This seems to be a problem here. My test steps were:
What I observed was 1) the CRD was removed even though the httpproxy object was not deleted, 2) the httpproxy object created earlier did not return after I reinstalled the (deleted) chart and the CRDs came back.
- we don't really support downgrading of the CRDs, but what would happen if someone downgraded the Contour chart? Would the CRDs want to downgrade as well?
Based on what I've seen, I would expect that to happen. However, because there's no 'migration' taking place from newer to older versions, we would likely experience some loss of data/functionality.
This is relatively expected behavior; when the CRD itself is removed, all objects of that type will also be removed.
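One possible mitigation for the uninstall case: Helm honours a helm.sh/resource-policy: keep annotation that tells it to leave a resource behind on uninstall. Applying it to the CRDs in the sub-chart might look like the sketch below; the annotation is a real Helm feature, but I haven't tested it with this chart, and it leaves orphaned CRDs for the operator to clean up manually:

```yaml
# Annotating a CRD in the sub-chart's templates/ so `helm uninstall`
# (or `helm delete` in Helm 2) leaves it -- and the HTTPProxy objects
# under it -- in place.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: httpproxies.projectcontour.io
  annotations:
    "helm.sh/resource-policy": keep
```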
I guess the version upgrade thing will rely on us keeping the API guarantees. :)
These upgrade concerns are particularly important across CRD object version boundaries. I'm worried about what will happen across the CRDv1alpha1 to CRDv1 boundary (when we move HTTPProxy there soon). However, it seems like this approach would work, assuming that the multiple-version upgrades work as they seem to.
Yes - at least until there's an operator :-)
I'll continue work and aim to submit a PR to the bitnami repo by end of week and hope to get some reviews there too, so that we don't get blindsided by other issues.
Is this still a priority given our efforts around the Contour operator? I see the operator design already covers both the Service-API and non-Service-API use cases, which is the most important thing for ease of adoption. What additional advantages does the helm chart provide here? It would just be another deployment we'd have to maintain as a repo under the Contour project.
The Helm chart that @georgegoh has built lives in Bitnami's chart repo. Since there is now no centralised chart repository, this gives us some Helm functionality at least.
I'd anticipate that as we make the operator stable, we can move the helm chart to use that instead.
potentially publish to hub.kubeapps.com