Closed: pcmid closed this issue 7 months ago.
I added those S3 env vars to /etc/sysconfig/kops-configuration manually, then restarted the kops-configuration service. nodeup worked fine and the node finally joined the cluster.
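For anyone hitting the same issue, a rough sketch of that manual workaround is below. It is only an illustration: the variable names (S3_ENDPOINT, S3_ACCESS_KEY_ID, S3_SECRET_ACCESS_KEY) and values are assumptions, so copy the real entries from the /etc/sysconfig/kops-configuration of a node that joined correctly.

# Sketch of the manual workaround above; the S3 variable names/values are placeholders.
cat >> /etc/sysconfig/kops-configuration <<'EOF'
S3_ENDPOINT=https://s3.example.internal
S3_ACCESS_KEY_ID=REPLACE_ME
S3_SECRET_ACCESS_KEY=REPLACE_ME
EOF

# Restart the service so nodeup runs again with the added environment.
systemctl restart kops-configuration

# Follow the unit log until the node registers with the cluster.
journalctl -u kops-configuration -f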
As a workaround, could you try using --dns=none when creating the cluster?
Thanks for the reply. I successfully created a new cluster with --dns=none.
For an existing cluster, can I update the cluster configuration file to set the dns block like this? One thing that confuses me is why this problem is related to DNS.
topology:
  dns:
    type: None
  masters: private
This was somehow lost as part of the mitigation for https://github.com/kubernetes/kops/issues/15539. See this comment for guidance on how to switch: https://github.com/kubernetes/kops/pull/15643#issuecomment-1637151077.
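As a generic sketch only (the linked PR comment above is the authoritative guidance for this particular switch), a spec change like the dns block in the question is usually applied with the kops edit/update/rolling-update flow; the cluster name my.example.com is a placeholder:

# Edit the spec (set topology.dns.type: None as shown above), then apply it.
kops edit cluster --name my.example.com
kops update cluster --name my.example.com --yes
kops rolling-update cluster --name my.example.com --yes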
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
/kind bug
1. What kops version are you running? The command kops version will display this information.
2. What Kubernetes version are you running? kubectl version will print the version if a cluster is running or provide the Kubernetes version specified as a kops flag.
3. What cloud provider are you using?
openstack
4. What commands did you run? What is the simplest way to reproduce this issue?
5. What happened after the commands executed? Timed out waiting for the worker node to join the cluster.
6. What did you expect to happen? The worker node successfully joins the cluster.
7. Please provide your cluster manifest. Execute kops get --name my.example.com -o yaml to display your cluster manifest. You may want to remove your cluster name and other sensitive information.
8. Please run the commands with the most verbose logging by adding the -v 10 flag. Paste the logs into this report, or in a gist and provide the gist link here.
9. Anything else do we need to know?
kops-configuration log
/etc/sysconfig/kops-configuration
When I checked this file, I found that these configurations were missing when compared with the good node.
cloud-init
When I checked cloud-init on the node by running curl http://169.254.169.254/latest/user-data/, I found those env vars were missing too.
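As a hedged sketch of the checks described above, the following commands can be used to compare a broken worker against a node that joined correctly; the grep pattern and log tail length are arbitrary choices:

# Inspect the rendered user-data served to the instance (same endpoint as above).
curl -s http://169.254.169.254/latest/user-data/ | grep -i s3
# Inspect the environment file that the kops-configuration service reads.
grep -i s3 /etc/sysconfig/kops-configuration
# Check whether the unit actually references that file.
systemctl cat kops-configuration
# Look for nodeup errors, e.g. failures to reach the state store.
journalctl -u kops-configuration --no-pager | tail -n 50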