Open ivelichkovich opened 8 months ago
/triage accepted
Sounds reasonable to me.
/help
@killianmuldoon: This request has been marked as needing help from a contributor.
Please ensure that the issue body includes answers to the following questions:
For more details on the requirements of such an issue, please see here and ensure that they are met.
If this request no longer meets these requirements, the label can be removed by commenting with the /remove-help command.
A kubeconfig flag sounds good. Maybe consider --into-kubeconfig to make the behaviour clear.
Wrong email previously ;) /assign
Hey team,
I made the changes and tested the kubeconfig update part; however, I could not test it against the actual capi-quickstart guide here: https://cluster-api.sigs.k8s.io/user/quick-start.html.
I have a 3-node kind cluster running. Here is the output of clusterctl describe cluster for it:
NAME READY SEVERITY REASON SINCE MESSAGE
Cluster/capi-quickstart False Warning ScalingUp 24m Scaling up control plane to 1 replicas (actual 0)
├─ClusterInfrastructure - DockerCluster/capi-quickstart-gfcrz False Warning LoadBalancerProvisioningFailed 24m 0 of 1 completed
├─ControlPlane - KubeadmControlPlane/capi-quickstart-t8fn6 False Warning ScalingUp 24m Scaling up control plane to 1 replicas (actual 0)
└─Workers
├─MachineDeployment/capi-quickstart-md-0-8txfm False Warning WaitingForAvailableMachines 24m Minimum availability requires 1 replicas, current 0 available
│ └─Machine/capi-quickstart-md-0-8txfm-qgxg7-m86sl False Info WaitingForInfrastructure 24m 0 of 2 completed
│ ├─BootstrapConfig - KubeadmConfig/capi-quickstart-md-0-9qlcb-l4gmv False Info WaitingForClusterInfrastructure 24m
│ └─MachineInfrastructure - DockerMachine/capi-quickstart-md-0-hm6dk-npxrn
└─MachinePool/capi-quickstart-mp-0-xl89r False Info WaitingForClusterInfrastructure 24m
└─MachinePoolInfrastructure - DockerMachinePool/capi-quickstart-mp-0-7kf48
Some help would be appreciated. Thanks!
/priority backlog
What would you like to be added (User Story)?
As a developer, operator, and user, I want to be able to load a kubeconfig into an existing kubeconfig file, so that I can keep a single kubeconfig with contexts for all my clusters and switch between them easily.
Detailed Description
A kubeconfig flag in "clusterctl get kubeconfig" which will create the file if it does not exist, or add the retrieved kubeconfig to it as a new context.
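The merge semantics the flag would need can be sketched roughly as follows. This is only an illustration, not the clusterctl implementation (which would use client-go's clientcmd in Go): kubeconfigs are shown as already-parsed dicts (e.g. from yaml.safe_load), and the rule assumed here mirrors client-go's KUBECONFIG merging, where the first definition of a name wins.

```python
import copy

def merge_kubeconfig(existing: dict, new: dict, set_current: bool = False) -> dict:
    """Merge `new` into `existing`, returning a new merged config dict.

    Entries from `new` are appended to the clusters/contexts/users lists,
    skipping any name already present (first definition wins).
    """
    merged = copy.deepcopy(existing)
    for section in ("clusters", "contexts", "users"):
        merged.setdefault(section, [])
        have = {entry["name"] for entry in merged[section]}
        for entry in new.get(section, []):
            if entry["name"] not in have:
                merged[section].append(entry)
    # By default keep the existing current-context; optionally switch to the new one.
    if set_current and new.get("current-context"):
        merged["current-context"] = new["current-context"]
    return merged

# Hypothetical example data, shaped like the standard kubeconfig schema:
existing = {
    "apiVersion": "v1", "kind": "Config",
    "clusters": [{"name": "kind-kind", "cluster": {"server": "https://127.0.0.1:6443"}}],
    "contexts": [{"name": "kind-kind", "context": {"cluster": "kind-kind", "user": "kind-kind"}}],
    "users": [{"name": "kind-kind", "user": {}}],
    "current-context": "kind-kind",
}
new = {
    "clusters": [{"name": "capi-quickstart", "cluster": {"server": "https://172.18.0.3:6443"}}],
    "contexts": [{"name": "capi-quickstart", "context": {"cluster": "capi-quickstart", "user": "capi-quickstart"}}],
    "users": [{"name": "capi-quickstart", "user": {}}],
    "current-context": "capi-quickstart",
}
merged = merge_kubeconfig(existing, new)
print([c["name"] for c in merged["contexts"]])  # → ['kind-kind', 'capi-quickstart']
```

Note this deliberately does not overwrite the user's current-context unless asked to, which matches the "add as a new context" wording above.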
Anything else you would like to add?
No response
Label(s) to be applied
/kind feature
/area clusterctl