Open bdelwood opened 2 years ago
Can be achieved by copying the file to the first controller node:
```yaml
hosts:
  - ssh:
      address: controllernode-1.example.com
    role: controller+worker
    files:
      - name: exampleManifest
        src: manifest.yaml
        dstDir: /var/lib/k0s/manifests/
        perm: 0600
```
Does anyone know how often manifests copied with the @volatilemolotov approach are synced? I'm trying to deploy CiliumBGPPeeringPolicy and CiliumLoadBalancerIPPool objects this way, but they need to be deployed after Cilium is successfully installed. I'm installing Cilium as a Helm extension, and I believe my manifests are not being deployed because they are applied too early, before all the CRDs exist.
How often? I would assume each time you run k0sctl apply?
Maybe your case can be solved with hooks?
https://github.com/k0sproject/k0sctl?tab=readme-ov-file#spechostshooks-mapping-optional
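For reference, a hooks stanza looks roughly like the sketch below (the kubeconfig path and the manifest path are illustrative, not taken from your setup):

```yaml
hosts:
  - ssh:
      address: controllernode-1.example.com
    role: controller+worker
    hooks:
      apply:
        after:
          # runs on the remote host after a successful `k0sctl apply`;
          # /var/lib/k0s/pki/admin.conf is the k0s admin kubeconfig on the controller
          - kubectl --kubeconfig /var/lib/k0s/pki/admin.conf apply -f /root/manifest.yaml
```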
The docs say hooks are run on the remote host, which doesn't seem helpful in my case. It would be easier to simply run

```sh
k0sctl kubeconfig > ~/.k8s/config
kubectl apply -f myfile.yaml
```

after a successful install, but thanks for the hint. Might be useful in other cases.
In my case Cilium is installed manually (for the purposes here it can be considered the same as a Helm extension) and the rest is handled by fluxcd, including the Cilium pool objects and everything else. So perhaps a different tool is better suited for you as well.
Yeah, I decided to move the Cilium objects to a repository handled by Flux. Using its dependencies I'll be able to apply them right after Flux's install.
Good to hear. Flux and similar tools allow for a more deliberate dependency chain, which is appropriate here.
So just a small note here. Whoever tries to actually deploy manifests using the method provided by @volatilemolotov: make sure to put your manifests in a subdirectory. It seems k0s looks for them in `/var/lib/k0s/manifests/*/*.yaml` and not `/var/lib/k0s/manifests/*.yaml`. That's why it didn't work in my case; see the sketch below. I hope this saves someone some time ;)
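For example, the file stanza from earlier in the thread would become something like this (a sketch only; the `cilium` subdirectory name is just an illustration):

```yaml
files:
  - name: exampleManifest
    src: manifest.yaml
    # k0s only picks up manifests from per-directory stacks, i.e. /var/lib/k0s/manifests/<dir>/*.yaml
    dstDir: /var/lib/k0s/manifests/cilium/
    perm: 0600
```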
I am currently not using the manifest deployer, but the Helm one supports an "order" key to ensure charts get installed in the correct order. Helm charts are not amazing, but especially when one is already available, they are easier (imho) than dealing with individual manifests. The uninstall and update stories are also a bit simpler.
It probably doesn't account for everything (I haven't checked the code), but it worked well for us when installing our cloud provider, since a lot depends on that being initialized. I haven't checked everything in full detail, but it worked rather fast/instant and did not require long waits or pods being recycled.
Having said all this, I would suggest wrapping custom CRDs etc. into a chart and installing them using the helm extension.
```yaml
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
spec:
  k0s:
    config:
      spec:
        extensions:
          helm:
            repositories:
              - ...
            charts:
              - name: cinder-csi
                order: 2
                chartname: cpo/openstack-cinder-csi
```
@till sure thing, and thanks for your comment. However, I wanted to use the manifest deployer to deploy things that aren't really Helm releases: extra options for Cilium, or Prometheus CRDs. Of course I could package them as a Helm release, but that's just too much work to deploy a simple YAML file. So I'm actually using the manifest deployer like this currently: https://github.com/fenio/homelab/blob/591c2da58c4cdb033aa68ab02d7264b1c8a92a78/k0sctl.yaml#L13-L17
@fenio yeah, I understand. I found the manifests too unpredictable.
Btw, Artifact Hub has a chart, maybe you don't need to maintain it yourself: https://artifacthub.io/packages/helm/prometheus-community/prometheus-operator-crds
Otherwise, one would need to dive into the code to see how it works. If I remember correctly, it applies manifests again when they change. I don't remember what happens when it errors.
As far as I know, k0sctl doesn't directly support the manifest deployer, as it requires placing files in a `manifests` directory on the controller nodes. Having the manifest deployer work similarly to the Helm extension, where charts are specified under `config.spec.extensions`, would be a nice addition to the usability of the manifest deployer.
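Something along these lines, purely as a sketch of the idea (no such `manifests` key exists in the k0s config today; the field names are hypothetical):

```yaml
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
spec:
  k0s:
    config:
      spec:
        extensions:
          # hypothetical manifest deployer configuration, analogous to the helm extension
          manifests:
            - name: cilium-pools
              src: manifests/cilium-pools.yaml
              order: 1
```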