texasbobs opened this issue 2 years ago
I've seen this as well. With GitOps processes, the fact that the CRD is created dynamically rather than shipped as a static file is something of an anti-pattern for Kubernetes and its declarative manifests.
You can see issues related to this topic in https://github.com/fluxcd/flux2/discussions/1916, for example.
Any thoughts on making the CRD part of the included yaml file? We're not able to use GitOps with this configuration.
The PR I just linked would make the CRD manifest available as a GitHub release artifact. This is preferred over having it checked in to the repo, as it is actually generated dynamically by the controller at startup and does not exist in YAML format anywhere in the codebase.
That looks like it would work. Thanks for the PR. What do you think about the likelihood of it getting approved?
So the CRD is not going to exist in the manifest? I'm not seeing any update there.
Having the CRDs checked into the repo would allow Renovate to manage updates for them out of the box. Currently you need to annotate the kustomization and add a custom regexManager for it to be picked up by Renovate, because there is no supported way to reference GitHub release download artifacts in Kustomize (see the Kustomize docs for which resource types are supported):
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
# renovate: datasource=docker image=rancher/system-upgrade-controller
- https://github.com/rancher/system-upgrade-controller/releases/download/v0.9.1/crd.yaml
Relevant renovate config:
"regexManagers": [
{
"description": "Process Kustomization CRD dependencies - Image and Github Release are the same version",
"fileMatch": ["cluster/crds/.+\\.ya?ml$"],
"matchStrings": [
"datasource=(?<datasource>\\S+) image=(?<depName>\\S+)\n.*?-\\s(.*?)\/(?<currentValue>[^/]+)\/[^/]+\n"
],
"datasourceTemplate": "docker"
}
]
Renovate supports Kustomize URLs like the following out of the box:
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- github.com/kubernetes-csi/external-snapshotter//client/config/crd?ref=v5.0.1
Could there be a GitHub workflow that starts the controller in order to generate the CRDs?
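As a rough sketch of that idea (nothing like this exists in the repo today; the trigger, the CRD name, and the kind setup are all assumptions), a release workflow could boot the controller in a throwaway kind cluster and export whatever CRDs it registers:

# Hypothetical sketch only; not an existing workflow in this repo.
name: export-crds
on:
  release:
    types: [published]
permissions:
  contents: write
jobs:
  export-crds:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Create a throwaway cluster
        uses: helm/kind-action@v1
      - name: Run the controller so it registers its CRDs
        run: kubectl apply -f manifests/system-upgrade-controller.yaml
      - name: Dump the Plan CRD once it is established
        run: |
          # CRD name assumed from the upgrade.cattle.io API group
          kubectl wait --for=condition=Established crd/plans.upgrade.cattle.io --timeout=120s
          kubectl get crd plans.upgrade.cattle.io -o yaml > crd.yaml
      - name: Attach crd.yaml to the release
        run: gh release upload "${{ github.event.release.tag_name }}" crd.yaml
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}

The dumped object would still need its status and managed fields stripped before it is usable as an install manifest.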
I can already deploy system-upgrade-controller with Kustomize in a supported Kustomize resources way:
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- github.com/rancher/system-upgrade-controller?ref=v0.9.1
- plans
images:
- name: rancher/system-upgrade-controller
  newTag: v0.9.1
patchesStrategicMerge:
# Delete namespace resource
- ./system-upgrade-patches.yaml
# Add labels
- |-
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: system-upgrade-controller
    namespace: system-upgrade
    labels:
      app.kubernetes.io/name: system-upgrade-controller
      app.kubernetes.io/instance: system-upgrade-controller
---
# system-upgrade-patches.yaml
# Namespace already exists
# Delete the system-upgrade namespace
# from the kustomization
$patch: delete
apiVersion: v1
kind: Namespace
metadata:
  name: system-upgrade
It would be nice if system-upgrade-controller had this folder structure to support deploying either the manifests or the CRDs with Kustomize:
deploy/
  crds/
    kustomization.yaml
    crds.yaml
  manifests/
    kustomization.yaml
    system-upgrade-controller.yaml
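For illustration, the two kustomization.yaml files in that layout could be as simple as the following (the layout and file names mirror the tree above and are a proposal, not something the repo ships today):

---
# deploy/crds/kustomization.yaml (proposed)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- crds.yaml
---
# deploy/manifests/kustomization.yaml (proposed)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- system-upgrade-controller.yaml

Consumers could then reference either directory remotely, e.g. github.com/rancher/system-upgrade-controller//deploy/crds?ref=v0.9.1, which is the same URL style Renovate already understands out of the box.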
Sorry for triple posting, but for people using Flux it would be nice to do the following, since using an http/s resource in a Kustomization is not recommended.
---
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: GitRepository
metadata:
  name: system-upgrade-controller-manifests
  namespace: flux-system
spec:
  interval: 30m
  url: https://github.com/rancher/system-upgrade-controller
  ref:
    tag: v0.10.0
  ignore: |
    # exclude all
    /*
    # include kubernetes directory
    !/deploy/crds
    !/deploy/manifests
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: cluster-apps-system-upgrade-controller-crds
  namespace: flux-system
spec:
  interval: 10m
  path: ./deploy/crds
  prune: true
  sourceRef:
    kind: GitRepository
    name: system-upgrade-controller-manifests
  wait: true
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: cluster-apps-system-upgrade-controller-app
  namespace: flux-system
spec:
  dependsOn:
  - name: cluster-apps-system-upgrade-controller-crds
  interval: 10m
  path: ./deploy/manifests
  prune: true
  sourceRef:
    kind: GitRepository
    name: system-upgrade-controller-manifests
  wait: true
Hopefully we can see this change land. Thanks!
This project uses a code-first approach to defining the CRDs. For this reason, the CRDs exist primarily as release artifacts, generated automatically as part of the release process.
I suppose we could look at keeping a copy of these in another branch that is pushed to by releases, but this is not likely to be a high priority.
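As a minimal sketch of that branch idea, assuming the release pipeline already has the generated crd.yaml on disk (the branch name, file path, and step placement are all assumptions), a release workflow step could do something like:

- name: Publish generated CRD to a long-lived crds branch
  run: |
    # Hypothetical step: branch name and layout are assumptions.
    git config user.name "github-actions"
    git config user.email "github-actions@users.noreply.github.com"
    git checkout -B crds
    mkdir -p deploy/crds
    cp crd.yaml deploy/crds/crd.yaml
    git add deploy/crds/crd.yaml
    git commit -m "Update CRD for ${GITHUB_REF_NAME}" || echo "CRD unchanged"
    git push --force origin crds

GitOps tools could then point a GitRepository or remote kustomization at that branch instead of a release asset.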
This runtime CRD creation trips us up as well: we'd like to install system-upgrade-controller in a scaled-down state, to prevent it from performing upgrades right away while the system is still being provisioned (we have a cronjob that scales it up automatically at the start of a maintenance window). However, if we do that, the system's Plan cannot be installed during provisioning because the CRD for it is still missing.
I understand you have reasons for this runtime CRD creation. Would it be an option to add a CLI argument (option or mode) to system-upgrade-controller which would make it only do the CRD creation and stop immediately after that, in particular without attempting any upgrades? We could then run it in a Kubernetes Job as part of the Helm chart, for example.
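To make the request concrete, here is a sketch of how the consuming side could look. Note that the --crd-only flag shown below does not exist in system-upgrade-controller today (it is exactly the option being asked for), and the namespace and service account names are assumptions to be adjusted to your deployment:

---
# Hypothetical pre-install Job; --crd-only is NOT a real flag today.
apiVersion: batch/v1
kind: Job
metadata:
  name: system-upgrade-controller-crds
  namespace: system-upgrade
spec:
  backoffLimit: 3
  template:
    spec:
      restartPolicy: Never
      serviceAccountName: system-upgrade   # must be allowed to create CRDs
      containers:
      - name: create-crds
        image: rancher/system-upgrade-controller:v0.10.0
        args:
        - --crd-only   # hypothetical: register the Plan CRD, then exit

With something like this as a Helm pre-install hook, Plans could be applied during provisioning even while the controller itself stays scaled down.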
The Plan CRD has been available as a release artifact for a while now. Are you not able to use that for some reason?
Ah right, yeah, I can use that. It's slightly inconvenient, because the release version to get the CRD from is now separate from the controller/chart version from renovatebot's standpoint; it would be more natural to upgrade both at the same time by having the CRD be part of the chart somehow. But for now this works, thanks.
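One hedged option for that version-skew annoyance, assuming both the image/chart and the CRD URL end up tracked under the rancher/system-upgrade-controller package name (as in the regexManager example earlier in this thread): a Renovate packageRules entry can group them into a single update PR.

"packageRules": [
  {
    "description": "Group the controller and its CRD manifest into one update",
    "matchPackageNames": ["rancher/system-upgrade-controller"],
    "groupName": "system-upgrade-controller"
  }
]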
Can you please explain your resistance to including the CRD yaml file along with the documented deployment manifest? https://github.com/rancher/system-upgrade-controller/blob/master/manifests/system-upgrade-controller.yaml It seems so simple to just add the CRD there and make it work for everyone.
We are running version rancher/system-upgrade-controller:v0.6.2 of the system upgrade controller. The deployment creates the CRD for Plan at startup. This means that while the files are being applied to the cluster, the creation of any Plans fails. This causes issues with tools like Flux, which present all of the YAML files to Kubernetes for validation; Kubernetes reports the error and the deployment fails. We would like the YAML for the CRD to be available instead of it being created at startup.