Closed: kuldazbraslav closed this issue 2 years ago
This likely affects only bastions as other node roles contain some userdata.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
I wonder if this comes from missing this:
```yaml
nodeLabels:
  kops.k8s.io/instancegroup: bastions
```
Without this, kOps is not able to map instances to the IG.
So the bug here is that it's allowed to have an IG without that node label, I think.
Can you try adding the above?
/remove-lifecycle stale
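For context, a full bastion instance group spec with the suggested label might look like the following sketch (cluster name, machine type, and subnet are placeholder values, not taken from this issue):

```yaml
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: my.example.com
  name: bastions
spec:
  role: Bastion
  machineType: t3.micro
  minSize: 1
  maxSize: 1
  subnets:
  - utility-eu-west-1a
  nodeLabels:
    kops.k8s.io/instancegroup: bastions
```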
@olemarkus I tried adding the label you suggested and nothing changed. According to the code it seems that the label is added automatically - not to the spec, but to the launch template.
However, I came up with a new observation:

- The launch template created by `kops create -f <specfile>` contains `UserData: ""`, according to the output of `aws ec2 describe-launch-template-versions`.
- After `kops update cluster`, the generated launch template does not contain UserData.
- On every `kops update cluster` run, this change is reported:

```
Will modify resources:
  LaunchTemplate/bastions.zoidberg.969747135942.futurama.wandera.cz
    UserData <nil> -> <resource>
```
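One detail worth noting when reading that `describe-launch-template-versions` output: the EC2 API returns `UserData` base64-encoded, and an absent field is different from a present-but-empty one. A small sketch of distinguishing the two cases (`describeUserData` is a hypothetical helper for illustration, not kOps code):

```go
package main

import (
	"encoding/base64"
	"fmt"
)

// describeUserData mimics the UserData field of an EC2 launch template
// version: a nil pointer means the field is absent, while a pointer to
// "" means it is present but empty. (Hypothetical helper, not kOps code.)
func describeUserData(d *string) string {
	if d == nil {
		return "<nil>"
	}
	raw, err := base64.StdEncoding.DecodeString(*d)
	if err != nil {
		return "<invalid>"
	}
	if len(raw) == 0 {
		return `""`
	}
	return fmt.Sprintf("%d bytes", len(raw))
}

func main() {
	empty := ""
	fmt.Println(describeUserData(nil))    // prints <nil> (field absent)
	fmt.Println(describeUserData(&empty)) // prints "" (present but empty)
}
```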
I tried to reproduce this and the issue only surfaced once I removed those labels. I'll try some more later.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
This came up in the "kops-users" channel of the "Kubernetes" Slack team.
/kind bug
1. What `kops` version are you running? The command `kops version` will display this information.

Version 1.21.4 (git-53e6bf3e5b0a77c78df2a7c60baca6e926fd5105)

2. What Kubernetes version are you running? `kubectl version` will print the version if a cluster is running or provide the Kubernetes version specified as a `kops` flag.

v1.19.16
3. What cloud provider are you using?
AWS
4. What commands did you run? What is the simplest way to reproduce this issue?
This spec is in S3:

Then I run:

```
kops --state s3://<REDACTED> update cluster <REDACTED>
```
5. What happened after the commands executed?
The bastion instance group is reported with changes, even though in both cases there is no userdata either before or after. If I run `kops update cluster --yes`, a new launch template is generated and the bastion IG requires a rolling update.

6. What did you expect to happen?
No changes reported, no rolling update required.
7. Please provide your cluster manifest. Execute `kops get --name my.example.com -o yaml` to display your cluster manifest. You may want to remove your cluster name and other sensitive information.

8. Please run the commands with most verbose logging by adding the `-v 10` flag. Paste the logs into this report, or in a gist and provide the gist link here.

Sorry, I'm not able to redact sensitive values from that large amount of logs.
9. Anything else do we need to know?
I had a look at the code briefly and found out the following:

- the launch template read back from AWS has `UserData: ""` (not `null`) - this is used for the actual state
- the expected state has `UserData` as `nil` (!) for bastion nodes
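If that reading of the code is right, the perpetual diff would fall out of comparing a `nil` value on one side against an empty string on the other. An illustrative sketch of that mismatch (not the actual kOps comparison code):

```go
package main

import "fmt"

// changed reports whether a managed field differs between actual and
// expected state, treating nil and "" as distinct values. That is enough
// to produce a diff on every run if one side normalizes empty UserData
// to nil while the other keeps "". (Illustrative, not kOps code.)
func changed(actual, expected *string) bool {
	if actual == nil || expected == nil {
		return actual != expected
	}
	return *actual != *expected
}

func main() {
	empty := ""
	// actual state from AWS: UserData "" ; expected state for bastions: nil
	fmt.Println(changed(&empty, nil)) // prints true: a change on every run
}
```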