cloudflare / cloudflared

Cloudflare Tunnel client (formerly Argo Tunnel)
https://developers.cloudflare.com/cloudflare-one/connections/connect-apps/install-and-setup/tunnel-guide
Apache License 2.0

Connecting through Cloudflare Access using kubectl issues - bad handshake #658

Closed ghost closed 2 years ago

ghost commented 2 years ago

Hello,

I am having some trouble using a cloudflared tunnel to connect to my Kubernetes clusters. I have multiple existing k8s clusters hosted in AWS EKS with cloudflared running, and the tunnels in each cluster currently route to various HTTP services, all of which work as expected. However, when following the instructions in this article provided by Cloudflare, I see the error ERR failed to connect to origin error='websocket: bad handshake' originURL=https://redacted.ai when attempting to connect from a client machine.

To Reproduce Steps to reproduce the behavior: I have followed all of the steps in the document provided above. From the start, I have:

  1. Created a zero trust policy for my application
  2. Installed the cloudflared deployment on my cluster
  3. Created and configured a tunnel using ingress rules
  4. Created a DNS record to route traffic to the tunnel
  5. Run the tunnel
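For reference, the tunnel-side part of the steps above (create, route DNS, run) can be sketched with the standard cloudflared CLI. The tunnel name, hostname, and config path below are placeholders, not values from this report:

```shell
# Authenticate cloudflared against the Cloudflare account (one-time).
cloudflared tunnel login

# Create a named tunnel; "my-tunnel" is a placeholder name.
cloudflared tunnel create my-tunnel

# Create a DNS record routing the hostname to the tunnel (step 4).
cloudflared tunnel route dns my-tunnel test-url.example.com

# Run the tunnel using a config file containing the ingress rules (step 5).
cloudflared tunnel --config /etc/cloudflared/config.yaml run my-tunnel
```

In a Kubernetes deployment like the one described here, the last step is what the cloudflared pod runs, with the config mounted from a ConfigMap or Secret.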

The last step, attempting to connect via cloudflared access tcp, brings me to the error mentioned above. My ingress looks like this:

tunnel: redacted
credentials-file: /etc/cloudflared/creds/cloudflared
metrics: 0.0.0.0:2000
no-autoupdate: true
ingress:
  - hostname: redacted
    service: redacted (ROUTES TO HTTP SERVICE, WORKING CORRECTLY)
  - hostname: redacted
    service: redacted (ROUTES TO HTTP SERVICE, WORKING CORRECTLY)
  - hostname: test-url.redacted.ai
    service: tcp://kubernetes.docker.internal:6443
    originRequest:
      noTLSverify: true
      proxyType: socks
  # Catch-all: any traffic that didn't match a previous rule gets an HTTP 404.
  - service: http_status:404

Attempting to connect from another machine looks like this. First, create the connection to Cloudflare:

cloudflared access tcp --hostname test-url.redacted.ai --url 127.0.0.1:8080

Then, in another terminal window:

env HTTPS_PROXY=socks5://127.0.0.1:8080 kubectl get po

Expected behavior The expected behavior is to be able to run kubectl commands through this configuration.

Environment and versions The local machine attempting to connect runs macOS Monterey v12.1. The Kubernetes clusters are hosted in AWS using EKS managed node groups.

Logs and errors Aside from the errors mentioned above, I can see these logs from the cloudflared pod:

ERR  error="dial tcp: lookup kubernetes.docker.internal on 172.20.0.10:53: no such host" cfRay=7149a9a43f5d7dd2-LAX ingressRule=2 originService=tcp://kubernetes.docker.internal:6443
ERR Failed to handle QUIC stream error="dial tcp: lookup kubernetes.docker.internal on 172.20.0.10:53: no such host" connIndex=2

I believe the IP 172.20.0.10:53 corresponds to the default service kubernetes.default.svc.cluster.local.
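That "no such host" error can be checked directly from inside the cluster, since cloudflared dials the origin from there. A quick sketch using a throwaway busybox pod (pod and image names are arbitrary choices, not part of the original setup):

```shell
# From inside the cluster, check whether the origin hostname resolves.
# kubernetes.docker.internal only exists on Docker Desktop, so on EKS this
# lookup is expected to fail, matching the "no such host" log line.
kubectl run dnscheck --rm -it --image=busybox --restart=Never -- \
  nslookup kubernetes.docker.internal

# By contrast, the in-cluster API service should resolve to a service IP.
kubectl run dnscheck2 --rm -it --image=busybox --restart=Never -- \
  nslookup kubernetes.default.svc
```

If the second lookup succeeds while the first fails, the problem is the origin hostname in the ingress rule, not the tunnel itself.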

Additional context Any help with this is greatly appreciated, as the cloudflare docs are very limited and there doesn't seem to be much information about this particular issue online.

ghost commented 2 years ago

An update on this post:

I've noticed that while the Cloudflare documentation says to use tcp://kubernetes.docker.internal:6443 as the target service in the ingress config, I believe for AWS this should be the EKS API server endpoint - in my case https://redacted.gr7.us-east-2.eks.amazonaws.com (though the scheme needs to be tcp://). However, using tcp://redacted.gr7.us-east-2.eks.amazonaws.com:443 as the target service gives me the error Unable to connect to the server: EOF.

Since there is basically no feedback loop when debugging this, I'm not sure whether this address is correct, whether the port is correct (AWS documentation points to the endpoint being served on port 443, not 6443 like the usual k8s default), or whether I'm even making it through the tunnel, as I am no longer seeing logs in the cloudflared pod. I may be moving in the opposite direction, but I'm trying everything I can at the moment, since it seems I am a guinea pig for this particular use case.
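One way to get some feedback before involving the tunnel at all is to confirm the API endpoint answers over TLS from inside the cluster. A sketch (the endpoint URL is a placeholder for your cluster's API server address, and the image choice is arbitrary):

```shell
# Probe the EKS API server from inside the cluster over raw HTTPS.
# Replace the URL with your cluster's API server endpoint.
kubectl run netcheck --rm -it --image=curlimages/curl --restart=Never -- \
  curl -sk https://EXAMPLE.gr7.us-east-2.eks.amazonaws.com/version
```

Any TLS-level response (version JSON, or even a 401/403 body) means the endpoint is reachable from the pod network; a timeout or connection reset points to a network or endpoint-access problem rather than a tunnel misconfiguration.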

tikrko commented 2 years ago

tcp://kubernetes.docker.internal is not valid in EKS, and the default service runs on :443, so try:

service: tcp://kubernetes.default.svc:443

I've been able to run a cloudflared tunnel that way; the only issue is that neither kubectl port-forward nor kubectl exec works. You can see my related open issue: https://github.com/cloudflare/cloudflared/issues/648
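Applied to the config from the original report, the relevant ingress rule would look roughly like this (a sketch only; the hostname is still redacted, and proxyType: socks is kept so the socks5-based client command keeps working):

```yaml
ingress:
  - hostname: test-url.redacted.ai
    # In-cluster API service instead of kubernetes.docker.internal,
    # which only resolves inside Docker Desktop.
    service: tcp://kubernetes.default.svc:443
    originRequest:
      proxyType: socks
  # Catch-all for unmatched traffic.
  - service: http_status:404
```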

ghost commented 2 years ago

@tikrko Thanks for the response - I've tried this as well, but it gives me the error Unable to connect to the server: EOF, which I guess is closer than where I was before. I'll have to look into the WARP solution that CF mentioned in your issue.

sudarshan-reddy commented 2 years ago

@jaydeepappas1 I would echo @nmldiegues' recommendation to use the WARP service to transparently access your k8s cluster as well. I'm going to close this issue as a duplicate of #648. Feel free to open a new one if you are having trouble getting k8s to work with WARP as the tunnel.