Actually, after investigating the issue further, this is not due to the label facility. It is due to the kube-prometheus-stack inline manifest, generated with:
```sh
helm template --include-crds -n monitoring -f apps/kube-prometheus-stack.helm.yaml kps prometheus-community/kube-prometheus-stack --create-namespace \
  | yq -i 'with(.cluster.inlineManifests.[] | select(.name=="monitoring-stack"); .contents=load_str("/dev/stdin"))' patches/monitoring-stack.yaml
```
and then added to the cluster template as a patch. The pre/post hooks that `helm template ...` omits (as opposed to `helm install ...`) somehow break the cluster nodes before the install even starts. It doesn't matter what you put in the Helm values, I think, but I can provide a sample if needed.
That is as far as I managed to debug it, so I disabled the patch and the cluster works. I edited this bug to reflect this, but if you feel it is out of scope for Omni, feel free to close it.
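For reference, the patch file that the yq invocation rewrites in place is shaped roughly like this. This is a minimal sketch assuming a plain Talos machine-config patch; only the `monitoring-stack` name and file path come from the command above, the rest is illustrative:

```yaml
# patches/monitoring-stack.yaml (sketch, not the full file)
# yq replaces .contents of the inline manifest named "monitoring-stack"
# with the `helm template` output read from stdin.
cluster:
  inlineManifests:
    - name: monitoring-stack
      contents: |
        # rendered kube-prometheus-stack manifests end up inlined here
```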
UPDATE: READ THE COMMENT BELOW FIRST
Is there an existing issue for this?
Current Behavior
If you create a cluster using `omnictl cluster template sync` with a machine class and machine labels, the nodes get stuck forever, and whenever they receive a new command they print:
![image](https://github.com/siderolabs/omni/assets/1610489/a7153c74-3da5-406c-8a7e-1b28031aa8a0)
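For illustration, the cluster template is along these lines; the names, sizes, and versions below are placeholders rather than my exact file:

```yaml
# Sketch of a cluster template using machine classes (placeholder names/versions)
kind: Cluster
name: o0
kubernetes:
  version: v1.29.3
talos:
  version: v1.7.0
---
kind: ControlPlane
machineClass:
  name: o0-controlplane   # hypothetical machine class, matching machines by labels
  size: 1
---
kind: Workers
machineClass:
  name: o0-workers        # hypothetical machine class
  size: 2
```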
Expected Behavior
Provision the cluster based on the selected nodes' `machineClass`.
Steps To Reproduce
1. Create a machine class based on `hostname`.
2. Add a bootstrap2 patch with basic extension config and certificate rotation to `o0` (see the sketch after this list).
3. Create a cluster template `.o0` with filter `o0`.
4. Run `omnictl cluster template sync --file o0`.
5. Watch your machines burn.
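By the bootstrap2 patch I mean roughly this kind of machine-config patch. The exact extension settings shouldn't matter for the repro; the field below is just the standard kubelet certificate-rotation knob, not my literal patch:

```yaml
# Sketch of the bootstrap2 patch (only the cert-rotation part shown)
machine:
  kubelet:
    extraArgs:
      rotate-server-certificates: "true"   # enable kubelet serving certificate rotation
```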
What browsers are you seeing the problem on?
No response
Anything else?
Tested: this happens on both Omni 0.33 and 0.34, with Talos 1.7.0.