lukasredev opened this issue 7 months ago
I would be happy to help with a fix, but would require some guidance :)
Anyone? :)
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
This seems to be still relevant :(
@rifelpet: Reopened this issue.
/reopen
Can you post logs from the kops-controller pods in kube-system? That is the component responsible for applying labels from instance groups to nodes.
For anyone looking into this bug, this is the controller that handles label updates, initialized here.
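One way to narrow things down is to filter the kops-controller log for the node in question. A minimal sketch; the heredoc sample (taken from the log lines in this issue) stands in for live output of kubectl logs -n kube-system kops-controller-w9jkr, which you would pipe in instead on a real cluster:

```shell
# Sketch: filter a kops-controller log for one node. The inline sample is a
# stand-in for: kubectl logs -n kube-system kops-controller-w9jkr
node="minecraft-58d2077a7bd90d8"
cat > /tmp/kops-controller.log <<'EOF'
I0707 19:34:47.495953 1 server.go:220] performed successful callback challenge with 10.10.0.11:3987; identified as minecraft-58d2077a7bd90d8
I0707 19:41:16.781699 1 server.go:220] performed successful callback challenge with 10.10.0.7:3987; identified as nodes-hel1-7e8746841cf8f905
I0707 19:34:47.651300 1 server.go:259] bootstrap 10.10.0.2:28728 minecraft-58d2077a7bd90d8 success
EOF
# Print only the lines that mention the node being debugged
grep "$node" /tmp/kops-controller.log
```

In the logs above, only the bootstrap/challenge lines mention the node; there is no line showing a label update being applied, which is consistent with the reported symptom.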
The only pod whose logs cover the creation of the node in question shows the following:
❯ kubectl logs -n kube-system kops-controller-w9jkr
I0707 19:26:35.841870 1 main.go:241] "msg"="starting manager" "logger"="setup"
I0707 19:26:35.842276 1 server.go:185] "msg"="Starting metrics server" "logger"="controller-runtime.metrics"
I0707 19:26:35.844427 1 server.go:139] kops-controller listening on :3988
I0707 19:26:35.844717 1 server.go:224] "msg"="Serving metrics server" "bindAddress"=":0" "logger"="controller-runtime.metrics" "secure"=false
I0707 19:26:35.844977 1 leaderelection.go:250] attempting to acquire leader lease kube-system/kops-controller-leader...
I0707 19:34:47.495953 1 server.go:220] performed successful callback challenge with 10.10.0.11:3987; identified as minecraft-58d2077a7bd90d8
I0707 19:34:47.495985 1 node_config.go:29] getting node config for &{APIVersion:bootstrap.kops.k8s.io/v1alpha1 Certs:map[] KeypairIDs:map[] IncludeNodeConfig:true Challenge:0x40007680f0}
I0707 19:34:47.497375 1 s3context.go:94] Found S3_ENDPOINT="https://s3.nl-ams.scw.cloud", using as non-AWS S3 backend
I0707 19:34:47.651300 1 server.go:259] bootstrap 10.10.0.2:28728 minecraft-58d2077a7bd90d8 success
I0707 19:41:16.781699 1 server.go:220] performed successful callback challenge with 10.10.0.7:3987; identified as nodes-hel1-7e8746841cf8f905
I0707 19:41:16.792090 1 server.go:259] bootstrap 10.10.0.2:64068 nodes-hel1-7e8746841cf8f905 success
I0707 19:49:05.298120 1 server.go:220] performed successful callback challenge with 10.10.0.5:3987; identified as nodes-hel1-d91dc6bfd5aab64
I0707 19:49:05.298167 1 node_config.go:29] getting node config for &{APIVersion:bootstrap.kops.k8s.io/v1alpha1 Certs:map[] KeypairIDs:map[] IncludeNodeConfig:true Challenge:0x400088a5f0}
I0707 19:49:05.440257 1 server.go:259] bootstrap 10.10.0.2:2440 nodes-hel1-d91dc6bfd5aab64 success
I0707 20:05:50.277408 1 server.go:220] performed successful callback challenge with 10.10.0.11:3987; identified as minecraft-6c3b8cb63d629438
I0707 20:05:50.288817 1 server.go:259] bootstrap 10.10.0.2:20488 minecraft-6c3b8cb63d629438 success
with this InstanceGroup config:

apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2024-07-07T18:44:57Z"
  generation: 5
  labels:
    kops.k8s.io/cluster: midnightthoughts.k8s.local
  name: minecraft
spec:
  image: ubuntu-22.04
  kubelet:
    anonymousAuth: false
    nodeLabels:
      node-role.kubernetes.io/node: minecraft
  machineType: cax31
  manager: CloudGroup
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: minecraft
    role: minecraft
  role: Node
  subnets:
  - hel1
  taints:
  - app=minecraft:NoSchedule
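To confirm the symptom, one can compare the nodeLabels from the spec above against what actually lands on the node. A minimal sketch; the labels variable is a stand-in for the label column of kubectl get node <name> --show-labels on a live cluster, and the sample deliberately omits the role label to mirror the reported behaviour:

```shell
# Stand-in for: kubectl get node minecraft-58d2077a7bd90d8 --show-labels
labels="kubernetes.io/hostname=minecraft-58d2077a7bd90d8,kops.k8s.io/instancegroup=minecraft"

# Labels we expect from the InstanceGroup spec's nodeLabels section
for want in "kops.k8s.io/instancegroup=minecraft" "role=minecraft"; do
  case ",$labels," in
    *",$want,"*) echo "present: $want" ;;
    *)           echo "MISSING: $want" ;;
  esac
done
# prints:
#   present: kops.k8s.io/instancegroup=minecraft
#   MISSING: role=minecraft
```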
using Hetzner for the VMs and Scaleway for the S3 state store
/kind bug
1. What kops version are you running? The command kops version will display this information.
v1.28.1
2. What Kubernetes version are you running? kubectl version will print the version if a cluster is running, or provide the Kubernetes version specified as a kops flag.
v1.27.8
3. What cloud provider are you using? Hetzner
4. What commands did you run? What is the simplest way to reproduce this issue?
Create the cluster.
Add a new instance group with different node labels.
Edit the instance group with the following config:
Update the cluster (including forcing a rolling update) with
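The steps above can be sketched as the corresponding kops invocations. The cluster and instance group names are taken from this issue; the function is only defined, not invoked, since it requires a configured kops state store and a live cluster:

```shell
# Sketch of the reproduction flow; requires KOPS_STATE_STORE to be set and an
# existing cluster, so the function is defined here but not run.
reproduce_label_issue() {
  cluster="midnightthoughts.k8s.local"
  kops edit instancegroup minecraft --name "$cluster"          # add nodeLabels to the IG
  kops update cluster --name "$cluster" --yes                  # apply the change
  kops rolling-update cluster --name "$cluster" --force --yes  # replace the nodes
}
echo "reproduce_label_issue defined"
```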
5. What happened after the commands executed? Commands are successful, but node labels are not added.
The YAML representation of the newly created node is the following (metadata only):
6. What did you expect to happen? The node labels specified in the instance group to be added to the nodes.
7. Please provide your cluster manifest. Execute kops get --name my.example.com -o yaml to display your cluster manifest. You may want to remove your cluster name and other sensitive information.
8. Please run the commands with most verbose logging by adding the -v 10 flag. Paste the logs into this report, or in a gist and provide the gist link here.
9. Anything else do we need to know? I looked at some existing issues and found #15090, and it seems it might be a similar issue: if you compare how labels are generated for OpenStack here and for Hetzner here, it seems that the labels are not passed to the nodeIdentity.Info object.