Open steled opened 1 year ago
Issues go stale after 90d of inactivity.
After a further 30 days, they will turn rotten.
Mark the issue as fresh with /remove-lifecycle stale.
If this issue is safe to close now please do so with /close.
/lifecycle stale
/remove-lifecycle stale
/transfer-issue machine-controller
/kind feature
What happened?
I'm trying to set up a k8s cluster via KubeOne with the vSphere cloudProvider. The setup of the VMs is done via Terraform; see the output of the command
terraform output -json > tf.json
below:
When I run
kubeone apply -m kubeone.yml -t tf.json -c credentials.yml
I get the following error message at the step Creating worker machines...:
Expected behavior
I expect that the worker nodes will be created in the specified vSphere folder.
How to reproduce the issue?
Set up the KubeOne VMs via Terraform and use the following value in the terraform.tfvars file:
folder_name = "/Customers/TEST/kubermatic/kubeone"
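For context, vSphere resolves VM folders against the datacenter's inventory root. As a hedged illustration (the datacenter name "DATACENTER" is the placeholder used elsewhere in this issue, not a real value), the full inventory path the worker creation appears to need can be derived from the tfvars folder like this:

```python
def full_inventory_path(datacenter: str, folder: str) -> str:
    """Prefix a folder (given relative to the datacenter's VM root)
    with the "/<datacenter>/vm" inventory prefix vSphere expects."""
    return "/" + datacenter + "/vm" + folder

# The folder value from terraform.tfvars in this report:
print(full_inventory_path("DATACENTER", "/Customers/TEST/kubermatic/kubeone"))
# → /DATACENTER/vm/Customers/TEST/kubermatic/kubeone
```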
What KubeOne version are you using?
Provide your KubeOneCluster manifest here (if applicable)
What cloud provider are you running on?
VMware vSphere
What operating system are you running in your cluster?
Ubuntu 22.04
Additional information
If I update the value of the key
kubeone_workers.value.kkp-test-pool1.cloudProviderSpec.folder
in the file tf.json to
/DATACENTER/vm/Customers/TEST/kubermatic/kubeone
the creation of the worker nodes works.
I tried to set the full folder path as the value in the terraform.tfvars file (folder_name = "/DATACENTER/vm/Customers/TEST/kubermatic/kubeone"). But with this configuration it fails directly at the Terraform run with the following message:
To me it looks like the full folder path should be used as the value for the key
kubeone_workers.value.kkp-test-pool1.cloudProviderSpec.folder
in the tf.json file.
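As a workaround sketch until this is addressed, the folder value in tf.json can be rewritten to the full inventory path after running terraform output. The tf.json layout below is only an illustrative assumption based on the key path quoted above; the pool name kkp-test-pool1 and the "DATACENTER" placeholder come from this report:

```python
import json

def patch_worker_folder(tf: dict, pool: str, datacenter: str) -> dict:
    """Prefix the worker pool's vSphere folder with "/<datacenter>/vm"
    if it is not already a full inventory path."""
    spec = tf["kubeone_workers"]["value"][pool]["cloudProviderSpec"]
    prefix = "/" + datacenter + "/vm"
    if not spec["folder"].startswith(prefix):
        spec["folder"] = prefix + spec["folder"]
    return tf

# Minimal stand-in for the real `terraform output -json` result:
tf = {"kubeone_workers": {"value": {"kkp-test-pool1": {
    "cloudProviderSpec": {"folder": "/Customers/TEST/kubermatic/kubeone"}}}}}

patched = patch_worker_folder(tf, "kkp-test-pool1", "DATACENTER")
print(json.dumps(patched["kubeone_workers"]["value"]["kkp-test-pool1"]["cloudProviderSpec"]))
# → {"folder": "/DATACENTER/vm/Customers/TEST/kubermatic/kubeone"}
```

The startswith guard makes the rewrite idempotent, so running the script twice over the same tf.json does not double the prefix.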