When you create a user cluster, the corresponding MachineDeployment object(s) will be created in the user cluster's kube-apiserver, and in there is a networks (Hetzner) parameter. Please make sure that the network IDs listed there exist; you can verify them with hcloud network describe <ID-FROM-MachineDeployment>.
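For reference, here is a trimmed sketch of where that parameter lives. The field names follow the machine-controller Hetzner provider spec; the pool name, server type, image, and network ID are placeholder values:

    apiVersion: cluster.k8s.io/v1alpha1
    kind: MachineDeployment
    metadata:
      name: my-worker-pool          # placeholder
      namespace: kube-system
    spec:
      template:
        spec:
          providerSpec:
            value:
              cloudProvider: hetzner
              cloudProviderSpec:
                serverType: cx21          # placeholder
                datacenter: fsn1-dc14     # placeholder
                image: ubuntu-20.04       # placeholder
                networks:
                  - "1234567"             # must be an existing network; check with: hcloud network describe 1234567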
Hi @kron4eg,
thank you for your feedback.
Just for my understanding, where is the ID of the network taken from? The network ID was actually missing in the CRD; I added it manually and the nodes were created.
The moment I adjust the pool, for example by scaling it up, the network ID is removed and the machine-controller starts crashing again.
I never had the opportunity to specify a network; is this perhaps more of a dashboard issue?
It looks like a complex bug involving two (or even three) components at the same time:
1) the machine-controller webhook lacks validation that the network is present
2) kubermatic fails to pass the network to the MachineDeployment
3) the dashboard fails to make the network a required field
@ewallat here's the plan.
@kron4eg that sounds good.
Regarding the dashboard, I took a look once; there already seems to be a possibility to define networks. It is already merged and will probably come with version 2.17.
Yes, it's there, however it's "optional".
@ewallat can you describe how you fixed this on Hetzner? I am running into the same problem right now.
Hey @shibumi,
I have defined a standard network in my seed configuration:
spec:
  datacenters:
    hetzner-fsn1:
      country: DE
      location: Falkenstein 1 DC 14
      spec:
        hetzner:
          datacenter: "fsn1-dc14"
          network: "default"
    hetzner-nbg1:
      country: DE
      location: Nürnberg 1 DC 3
      spec:
        hetzner:
          datacenter: "nbg1-dc3"
          network: "default"
I then create only one user cluster per Hetzner Cloud project and create the network "default" in advance. I use 192.168.0.0/16 as the subnet, but you can also use a different one. Instead of default you can also use network-1 or any other name; it just has to match in the end.
With default as the name, you no longer have to specify a network in the Kubermatic wizard.
I think the whole thing is just a big workaround, but it works quite well for me so far.
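To make the matching explicit: the network has to exist in the Hetzner project before the first node is provisioned (created, for example, with hcloud network create --name default --ip-range 192.168.0.0/16 plus a matching subnet). With the seed above, the generated MachineDeployment should then carry the same name. A minimal sketch, assuming machine-controller resolves Hetzner networks by name as well as by ID:

    # Fragment of the cloudProviderSpec in the generated MachineDeployment
    cloudProviderSpec:
      datacenter: fsn1-dc14
      networks:
        - "default"     # must match the pre-created network name from the seed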
Yeah, it's a workaround, but the alternative is a backwards-compatibility-breaking change (https://github.com/kubermatic/machine-controller/pull/944).
@kron4eg Oh, this is fine. I am not using Hetzner in production yet; right now we are just using it for evaluation purposes. Our main goal is to host Kubermatic on VMware.
Issues go stale after 90d of inactivity.
After a further 30 days, they will turn rotten.
Mark the issue as fresh with /remove-lifecycle stale.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
/remove-lifecycle rotten
Issues go stale after 90d of inactivity.
After a further 30 days, they will turn rotten.
Mark the issue as fresh with /remove-lifecycle stale.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
/close
@kubermatic-bot: Closing this issue.
Hi all,
when creating a user cluster, the machine-controller crashes during the creation of a Hetzner Cloud server:
It seems to me that some information is not passed properly to the Hetzner Cloud Go client, resulting in a nil pointer dereference.
I could not successfully create a user cluster on Hetzner.
Tested with a basic installation of Kubermatic CE, versions 2.16.7 and 2.16.8.
Test seed: