Closed neoakris closed 2 years ago
Oh right, there was a 2nd feature request I thought of / potential question: is there a way to specify deployment order? I've run into an issue in the past where order matters. Example: Scenario A:
Scenario B:
Does zarf have a built in mechanism along the lines of argocd's sync waves / flux's depends on / some way of guaranteeing order of deployment?
Components guarantee order. If you need to handle sequential deployments, just make multiple components. On the storageClass override, we probably need to document this more, but basically if you init with the storageClass it is persistent after that. What happens on subsequent package deploys is the storageClass is read from the state generated during init. Does that help at all?
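The "components guarantee order" point can be sketched as a zarf.yaml with two components; Zarf deploys them top-down, so the first lands before the second. This is a minimal sketch — the component names and manifest paths are hypothetical, and field names follow the zarf.yaml schema of that era:

```yaml
kind: ZarfPackageConfig
metadata:
  name: ordered-deploy-example   # hypothetical package name
components:
  # Components deploy in list order, top to bottom
  - name: database               # deployed first
    required: true
    manifests:
      - name: database
        files:
          - manifests/database.yaml   # hypothetical path
  - name: app                    # deployed second, after database
    required: true
    manifests:
      - name: app
        files:
          - manifests/app.yaml        # hypothetical path
```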
zarf init has some flags available to override the state, and storage class is one of them:

zarf init --storage-class <some-string>
On component segmentation you can also use the scripts (before or after) if you have wait conditions beyond just k8s manifests being ready. See https://github.com/defenseunicorns/zarf/blob/e24c80d3bf16aff675834e736d8583aae907a1fa/packages/distros/k3s/zarf.yaml#L23 for an example of that.
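A hedged sketch of the scripts approach: a component that blocks in an after-script until a workload is actually ready, not just applied. Field names follow the zarf.yaml schema of the linked example; the component name, manifest path, and wait command are illustrative:

```yaml
components:
  - name: database               # hypothetical component
    required: true
    manifests:
      - name: database
        files:
          - manifests/database.yaml   # hypothetical path
    scripts:
      after:
        # Block until the StatefulSet reports ready before the next
        # component deploys (namespace/name assumed for illustration)
        - kubectl rollout status statefulset/database -n db --timeout=300s
```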
"Components guarantee order." cool thanks
"If you init with the storageClass it is persistent after that." Where my confusion lies is that it's possible to skip the init stage entirely, and go straight for zarf package create. (that's what I did in my linked reference notes above)
yeah I saw that zarf init had a --storage-class flag, but I didn't see it for zarf package create. I recommend it be available as an option for zarf package create, if not a global flag option.
What if you want to have more than 1 storage class?
Here's an example scenario where one might want more than one storage class: with apps like Kafka, you might want to use local-path storage in addition to a default HA storage class like Longhorn. For one, performance; for two, if you had a 3-node HA Kafka with replication and deployed it to Longhorn, you'd end up with 9x data replication (3x replication at the Kafka level times 3x replication at the Longhorn HA storage class level).
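The two-class setup above could look like the following manifests: Longhorn annotated as the cluster default for general HA workloads, and local-path as a non-default class for apps like Kafka that already replicate at the application layer. The provisioner names are the upstream ones; the pairing itself is just this scenario's example:

```yaml
# Default class for general HA workloads
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: driver.longhorn.io
---
# Non-default class; apps opt in explicitly (e.g. Kafka PVCs)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
```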
Also added a minor edit to the original topic
zarf init -h shows --storage-class string  Describe the StorageClass to be used. So that probably lets me override the storage class, but only on cluster creation. (Edit/Update: my understanding is that the only time you'd want to use zarf init is if you want zarf to bootstrap an opinionated deployment of a k3s cluster; since I'm deploying to a pre-existing RKE2 cluster I think I'm right to skip this, like I did in the referenced example.) zarf package create -h doesn't have a corresponding --storage-class string.
We discussed this and agreed to keep it only exposed on zarf init. You actually are doing the exact same command, only without the additional flags, when you zarf package deploy zarf-init-amd64.tar.zst. It's 100% the same code; you just lose the ability to add the init flags, because we didn't want to pollute every deploy with flags that only apply to init--and the docs describe using init.
As far as multiple storage classes, I think that's out-of-scope for the intended behavior. Those template behaviors are really to help get zarf up and running and aren't really where we want to be any more than we need to. Any further customization I think belongs in Helm/Kustomize or a gitops solution.
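For example, per-app storage class choices usually stay in Helm values rather than in zarf state, since most charts already parameterize them. A sketch under that assumption — the key names below are typical of charts like the docker registry or Bitnami charts, but are chart-specific:

```yaml
# values.yaml for a hypothetical Kafka chart: point its PVCs at
# local-path while the rest of the cluster keeps the default class
persistence:
  storageClass: local-path   # assumed key; check the chart's values schema
  size: 100Gi
```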
Good clarification about out of scope and that the ideal workflow is to keep customizations in helm/kustomize; makes a lot of sense.
Thanks for clarifying. I suspect I ran into an edge case rather than anything fundamental; now that I think about it, in the majority of cases helm/kustomizations would be deployed, which can be customized using their built-in methodologies (helm/kustomize usually offers a parameterized storage class). So I'm good with closing the issue.
Also, thanks for letting me know that zarf.yaml components preserve deployment order in top-down fashion within a yaml list.
Is your feature request related to a problem? Please describe.
I'm trying to use zarf as part of a PlatformOne BigBang AirGap Quickstart How to Guide (which is a separate project in the same ecosystem.) I have logic which leverages rke2-ansible to do an internet disconnected bootstrap of a single node RKE2 cluster. (which has no storage class)
I tried to use
The registry didn't come up on RKE2, because zarf expects a default storage class to exist.
I did some manual workarounds to establish a storage class by deploying this: kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.22/deploy/local-path-storage.yaml
The PVC associated with zarf's registry is forever stuck in pending because I don't have a default storage class.
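One way the stuck PVC could be unstuck (assuming the local-path-provisioner from the workaround above is installed) is to mark that class as the cluster default, since the pending PVC appears to request no explicit class. A sketch — this re-declares the upstream class with the standard default-class annotation:

```yaml
# Re-declare local-path with the default-class annotation so PVCs
# that specify no storageClassName can bind to it
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
```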
Here's a point in time snapshot of my WIP if it helps to have additional context:
https://repo1.dso.mil/platform-one/big-bang/apps/sandbox/air-gap-deployment/-/blob/51a5fd1bfc1c7deeead24b2dd04624f492698fa3/airgap_quickstart.md#step-4-download-zarf-on-internet-connected-vm1-then-transfer-to-internet-disconnected-vm2
Describe the solution you'd like (background context):
I'd like the ability to bundle a storage class using zarf and explicitly tell zarf about my storage class since multiple storage classes in a cluster are possible.
Based on this https://github.com/defenseunicorns/zarf/blob/v0.17.0/packages/zarf-registry/registry-values.yaml zarf's storage class seems to be a variable that can be overridden, but I don't see how I can override it.
(the following is based on zarf-cli v0.17.0)
zarf init -h shows --storage-class string  Describe the StorageClass to be used. So that probably lets me override the storage class, but only on cluster creation. (Edit/Update: my understanding is that the only time you'd want to use zarf init is if you want zarf to bootstrap an opinionated deployment of a k3s cluster; since I'm deploying to a pre-existing RKE2 cluster I think I'm right to skip this, like I did in the referenced example.)

zarf package create -h doesn't have a corresponding --storage-class string.

(The ask):
Could the following feature be added: a global means of overriding zarf state variables, either via global CLI flags, env vars, or a config file?
Describe alternatives you've considered
I'm going to investigate replacing the example registry-values.yaml with a hard-coded override for the storage class as a workaround. But I figured I'd point out that if you're going to have variables, maybe give users an easy way to customize their values.
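The workaround being investigated might look like this: the upstream docker registry chart exposes a persistence block, so the templated zarf value in registry-values.yaml could be replaced with a literal. The key names are assumed from the registry chart, and the original file uses a zarf state template token where the literal goes:

```yaml
# registry-values.yaml, with the zarf state template token replaced
# by a hard-coded class (value assumed for illustration)
persistence:
  enabled: true
  storageClass: "local-path"
```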