If you don't have anything in that cluster, try resetting it (effectively reverting everything that KubeOne did, so you will lose all data in the cluster if there's any), and then run kubeone apply with the verbose flag. That should provide some more details about what's going on.
For example:
kubeone reset -t . --destroy-workers=false
kubeone apply -t . -v
(the -v flag enables verbose output)
OK, I've just reverted everything with terraform destroy to avoid anything getting in the way from previous runs.
Also checked that everything is indeed cleaned up (except for default VPC stuff).
And here is the verbose log:
That's a strange failure, but it might be related to your AWS configuration. Please take the following steps:

1. Try curl-ing or wget-ing the endpoint to see if you're going to get any response from the Kubernetes API server.
2. Make sure the Enable DNS resolution and Enable DNS hostnames options are enabled for your default VPC.

Such an error indicates that KubeOne cannot reach the Kubernetes API server, which is required after the first control plane node is provisioned.
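For instance, a quick reachability check might look like this (the hostname and port below are placeholders, substitute the DNS name of your actual API load balancer):

# Hypothetical endpoint; replace with your API load balancer's DNS name.
curl -kv https://example-api-lb.eu-west-1.elb.amazonaws.com:6443/healthz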
OK, for some reason my local privoxy got in the way (at 127.0.0.1:8118). I could perfectly reach the ELB directly with curl:
Does KubeOne try to use the HTTP proxy to access URLs through an SSH tunnel?
Get "...": proxyconnect tcp: ssh: tunneling <<< SSH tunnel?
connection to: 127.0.0.1:8118 <<< my HTTP proxy
ssh: rejected: connect failed (Connection refused)
Btw. after unsetting the http_proxy and https_proxy env variables, it seems to be working.
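A minimal sketch of that workaround, clearing the proxy variables only for the KubeOne run (the uppercase variants are cleared too, since they are honored interchangeably, as noted below):

env -u http_proxy -u https_proxy -u HTTP_PROXY -u HTTPS_PROXY kubeone apply -t . -v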
> Does KubeOne try to use the HTTP proxy to access URLs through an SSH tunnel?
Yes, we use an SSH tunnel to access the API server. That's done because the API endpoint might not be reachable publicly, so we default to using an SSH tunnel via a bastion/jump host. There's no way to disable this, though. I'm not sure if you can somehow configure your proxy to allow this behavior instead.
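Conceptually, this is similar to a manual SSH port-forward through the bastion (the hostnames and user below are placeholders, not KubeOne's actual invocation):

# Hypothetical sketch: forward local port 6443 to the API load balancer via the bastion.
ssh -N -L 6443:example-api-lb.eu-west-1.elb.amazonaws.com:6443 ubuntu@bastion.example.com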
> Yes, we use an SSH tunnel to access the API server.
OK, so if I get this correct, then the following seems to happen: KubeOne picks up the http_proxy/https_proxy environment variables, so the HTTP client tries to reach the proxy through the SSH tunnel, i.e. the remote end attempts to connect to 127.0.0.1:8118 on its side, which is refused.

If this assumption is correct, then the SSH tunnel needs to make sure that those standard variables (they are case-insensitive) are not getting passed to SSH.
> That's done because the API endpoint might not be reachable publicly, [...]
Oh, how do I actually configure it to go all private? And why isn't it the default? I couldn't find anything in the docs and just saw that all the EC2 machines get public IPs.
> If this assumption is correct, then the SSH tunnel needs to make sure that those standard variables (they are case-insensitive) are not getting passed to SSH.
I'll look into how this exactly works and get back to you.
> Oh, how do I actually configure it to go all private? And why isn't it the default? I couldn't find anything in the docs and just saw that all the EC2 machines get public IPs.
There are two ways to do that:

1. the internal_api_lb Terraform variable
2. the kubeone proxy command (run kubeone proxy -h for more details)
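For the first option, a minimal sketch of a terraform.tfvars entry (the value and its exact effect are assumptions here, check the Terraform module's variable definitions for the authoritative usage):

# Hypothetical terraform.tfvars snippet: provision an internal API load balancer.
internal_api_lb = true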
Issues go stale after 90d of inactivity.
After a further 30 days, they will turn rotten.
Mark the issue as fresh with /remove-lifecycle stale.
If this issue is safe to close now please do so with /close.
/lifecycle stale
We still need to look into the SSH issue.
/remove-lifecycle stale
We need to retitle and update the description of this issue to get https://github.com/kubermatic/kubeone/issues/2904#issuecomment-1717190442 fixed
KubeOne itself doesn't support local proxying, since we are already tunneling HTTP connections over SSH. Unfortunately, the net/http.Client used by the Kubernetes client library picks up your local HTTPS_PROXY and uses it. I'm not sure if it's even possible to tunnel via the tunnel.

For normal access to the kube-apiserver please use kubeone proxy. It establishes a direct tunnel and opens a local HTTPS proxy that one can use in the next terminal:
export HTTPS_PROXY=http://localhost:8080
kubectl get node
What happened?
While trying to set up my first cluster an error occurred:
Expected behavior
The cluster gets initialized properly.
How to reproduce the issue?
What KubeOne version are you using?
Provide your KubeOneCluster manifest here (if applicable)
What cloud provider are you running on?
AWS
What operating system are you running in your cluster?
Amazon Linux
Additional information
In the AWS console I can see that the load balancer it tries to access is taken offline, since it is unhealthy due to 2 nodes not being in-service:
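One way to double-check target health from the CLI (assuming a classic ELB; the load balancer name below is a placeholder):

# Hypothetical load balancer name; for an NLB/ALB use 'aws elbv2 describe-target-health' instead.
aws elb describe-instance-health --load-balancer-name my-kubeone-api-lb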