cloudflare / cloudflared

Cloudflare Tunnel client (formerly Argo Tunnel)
https://developers.cloudflare.com/cloudflare-one/connections/connect-apps/install-and-setup/tunnel-guide
Apache License 2.0

kubectl tunnel not working #333

Open RTodorov opened 3 years ago

RTodorov commented 3 years ago

Hi,

I've been trying to set up an Argo tunnel to expose my Kube API, but the socks5 solution is not working out for me.

This is the command I run on my kube host (origin):

cloudflared tunnel --hostname k8s.my-domain.com --url tcp://127.0.0.1:6443 --socks5=true

It runs fine. This is what I run in the client:

cloudflared access tcp --hostname k8s.my-domain.com --url 127.0.0.1:1234

Then, when I try to run kubectl with the SOCKS5 proxy in the client, this is what I get in the origin logs:

2021-03-14T19:01:53Z ERR 127.0.0.1:6443 is not a http service
2021-03-14T19:01:53Z ERR CF-RAY: 62ffc0d37f87d45f-HAM Proxying to ingress 0 error: Not a http service

A curl/kubectl to 127.0.0.1:6443 from within the origin works perfectly fine.
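For example, this kind of request works fine locally on the origin (/version is just a sample endpoint):

curl -k https://127.0.0.1:6443/version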

I'm using k3s with kubectl v1.15.5.

I've set all possible log levels to debug but couldn't find any meaningful information.

Thanks for any help!

TownLake commented 3 years ago

Hi @RTodorov, can you try setting a SOCKS5 environment variable to route the traffic to the cloudflared listener:

env HTTPS_PROXY=socks5://127.0.0.1:1234 kubectl get pods

If that works, you can save time with this alias going forward:

alias kubeone="env HTTPS_PROXY=socks5://127.0.0.1:1234 kubectl"
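
You can also sanity-check the SOCKS5 path without kubectl, since curl supports socks5:// proxies (the /version path is just an example):

curl -k --proxy socks5://127.0.0.1:1234 https://k8s.my-domain.com/version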

If that doesn't work, or if you're already using that and I missed it in the description, could you share any additional configuration details about your client-side environment?

RTodorov commented 3 years ago

Hi @TownLake ,

Thanks for answering. I was indeed using the env var; the request is actually reaching the origin server via Cloudflare, so the problem seems to be between cloudflared and my origin server. I've also tried the native kubectl proxy-url parameter in the ~/.kube/config file, but the result is the same: the request reaches the origin but errors out.

The client-side environment is nothing special: I'm using the kube config file from the server, with the server URL changed to https://k8s.my-domain.com.

- cluster:
    certificate-authority-data: <my-cert-here>
    proxy-url: socks5://127.0.0.1:1234
    server: https://k8s.my-domain.com (I've tried adding :6443 here, no luck)
  name: k3s-local

On the server side, I'm using k3s.
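
For extra visibility, kubectl's -v=6 flag makes it log each request URL and response code, e.g.:

env HTTPS_PROXY=socks5://127.0.0.1:1234 kubectl get nodes -v=6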

Looks like the error happens here (https://github.com/cloudflare/cloudflared/blob/39065377b5d0593bd85ed3ff0698aca34fd1eb72/origin/proxy.go#L119-L123) but I wasn't able to tell what rule.Service.(ingress.HTTPOriginProxy) does.

chungthuang commented 3 years ago

Hi @RTodorov, I recommend using a named tunnel with ingress rules. First, follow https://developers.cloudflare.com/cloudflare-one/connections/connect-apps/create-tunnel to create a tunnel. Then add the following to your config file:

tunnel: <tunnel name or ID that you created in the first step>
credentials-file: <path to the secret that was generated when you created the tunnel>
ingress:
- hostname: k8s.my-domain.com
  service: tcp://127.0.0.1:6443
  originRequest:
    proxyType: socks
- service: http_status:404

You can read more about ingress rules and available settings in https://developers.cloudflare.com/cloudflare-one/connections/connect-apps/configuration/ingress.
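
You can also check how cloudflared parses your config: cloudflared tunnel ingress validate checks the ingress rules, and cloudflared tunnel ingress rule tests which rule a given URL matches:

cloudflared tunnel ingress validate
cloudflared tunnel ingress rule https://k8s.my-domain.com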

RTodorov commented 3 years ago

Hi @chungthuang ,

I've tried this as well and the result is unfortunately the same. I spent my entire Sunday on this: read the ingress config params a hundred times, tried legacy tunnel creation and the new method; the result was always the same.

I wonder if this is something related to k3s, but I don't think so: if I expose the API via HTTP and authenticate using a Kubernetes service account, it works fine. I just really don't want to expose the API without the Argo tunnel.

Is there anything else I can do to debug this further? The debug logs don't seem to help much.

Thanks!

chungthuang commented 3 years ago

I'm sorry to hear that. Can you try replacing the scheme in tcp://127.0.0.1:6443 with http? That will tell cloudflared to connect to 127.0.0.1:6443 over HTTP instead of raw TCP.
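
That is, the ingress rule would become something like:

- hostname: k8s.my-domain.com
  service: http://127.0.0.1:6443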

RTodorov commented 3 years ago

@chungthuang if I do that, then there's a problem with the TLS handshake:

error="remote error: tls: handshake failure"

From what I understood of the Cloudflare documentation, the whole point of using socks5 is to avoid the TLS handshake issue.

This is an excerpt from the documentation:

The proxy allows your local kubectl tool to connect to cloudflared via a SOCKS5 proxy, which helps avoid issues with TLS handshakes to the cluster itself. In this model, TLS verification can still be exchanged with the kubectl API server without disabling or modifying that flow for end users.

chungthuang commented 3 years ago

I would try changing your ingress to service: https://127.0.0.1:6443. The TLS handshake will then be between cloudflared and your origin, not between your end user and your origin. If your origin is using a self-signed certificate, you can add the noTLSVerify option:

- hostname: k8s.my-domain.com
  service: https://127.0.0.1:6443
  originRequest:
    noTLSVerify: true

Another issue might be your kubectl version. proxy-url seems to only be available in kubectl 1.19+.
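
You can check yours with:

kubectl version --client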

RTodorov commented 3 years ago

Hi @chungthuang. I think this is an issue with the latest versions. I downgraded to cloudflared 2021.2.5 and now the TCP proxy works, or at least I'm getting this in the logs:

{"level":"debug","time":"2021-03-22T22:20:29Z","message":"CF-RAY: 6342cebd6f944e7f-FRA Serving with ingress rule 0"}
{"level":"debug","time":"2021-03-22T22:20:29Z","message":"CF-RAY: 6342cebd6f944e7f-FRA Request content length 0"}
{"level":"debug","time":"2021-03-22T22:20:29Z","message":"CF-RAY: 6342cebd6f944e7f-FRA Status: 200 OK served by ingress 0"}
{"level":"debug","time":"2021-03-22T22:20:29Z","message":"CF-RAY: 6342cebd6f944e7f-FRA Response Headers map[Content-Type:[text/html; charset=utf-8] Date:[Mon, 22 Mar 2021 22:20:29 GMT]]"}
{"level":"debug","time":"2021-03-22T22:20:29Z","message":"CF-RAY: 6342cebd6f944e7f-FRA Response content length unknown"}

Unfortunately, on the client side, I still get this:

2021-03-22T22:20:29Z ERR failed to connect to origin error="websocket: bad handshake" originURL=https://k3s.my-domain.com

and from kube client:

I0323 11:55:41.030312   77764 request.go:943] Got a Retry-After 1s response for attempt 1 to https://k3s.my-domain.com/api/v1/nodes
Error from server (InternalError): an error on the server ("") has prevented the request from succeeding

I've upgraded both my k3s cluster and kubectl to the latest version (1.20.4), so it's not the kubectl proxy-url version issue.

At this stage, I don't know what could be the issue, since I'm seeing the 200 OK from the k3s API in the cloudflared logs.

chungthuang commented 3 years ago

What is the cloudflared version on the client side? Can you try logging at debug level? I would expect a 101 response in the tunnel log if the client request reached the tunnel, since it's establishing a websocket connection.
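
For example, on the client side (assuming the global --loglevel flag, which goes before the subcommand):

cloudflared --loglevel debug access tcp --hostname k3s.my-domain.com --url 127.0.0.1:1234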

developerdino commented 3 years ago

I am also having the same issue as @RTodorov with the kubectl setup through cloudflared. Has any solution been forthcoming on this issue?

RTodorov commented 3 years ago

I kinda gave up... wasted too much time on this and the tool is clearly not working, at least not with k3s.

ajrpayne commented 2 years ago

I am also having this issue. Using AKS.

ajrpayne commented 2 years ago

Fixed by upgrading to version 2021.12.1.
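
For anyone else landing here: you can check the installed version with cloudflared version, and standalone binaries can self-update with cloudflared update (package-manager installs should be upgraded through the package manager instead):

cloudflared version
cloudflared update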

tomhuang12 commented 2 years ago

@ajrpayne Hi, I am trying to tunnel Kube API server traffic through Cloudflared. The cluster is AKS. The error we are getting is:

E0318 16:31:20.724694 5944 azure.go:154] Failed to acquire a token: unexpected error when refreshing token: refreshing token: adal: Failed to execute the refresh request. Error = 'Post "https://login.microsoftonline.com/e85feadf-11e7-47bb-a160-43b98dcc96f1/oauth2/token": read tcp 127.0.0.1:52195->127.0.0.1:1234: read: connection reset by peer'

It looks like it's not able to get a token through the proxy. Have you encountered this?

ajrpayne commented 2 years ago

> I am trying to tunnel Kube API server traffic through Cloudflared. The cluster is AKS. [...] Have you encountered this?

I haven't had this issue. I do run az aks get-credentials before tunneling.
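
i.e. something like this, with placeholder resource group and cluster names:

az aks get-credentials --resource-group <resource-group> --name <cluster-name>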

bradyburke commented 1 year ago

I'm hitting the same issue on EKS (v1.24) and cloudflared v2023.1.0 (on both server and client). Any update on this? noTLSVerify is set to true, so I'm not sure where the handshake is failing.

kimyvgy commented 1 week ago

I encountered the same issue with a local Kubernetes cluster created using kind and Cloudflared version 2024.9.1. My tunnel config:

tunnel: phelab-pke
credentials-file: /etc/cloudflared/credentials.json

originRequest:
  connectTimeout: 10s

ingress:
  - hostname: wQtoqcQ6Ep.pke.phelab.com
    service: tcp://127.0.0.1:49016
    originRequest:
      proxyType: socks
      noTLSVerify: true
  - service: http_status:404
On the client side:

cloudflared access tcp --hostname OufOw28HGq.pke.phelab.com --url 127.0.0.1:1234
2024-09-20T13:52:02Z INF Start Websocket listener host=127.0.0.1:1234
2024-09-20T13:52:13Z ERR failed to connect to origin error="remote error: tls: handshake failure" originURL=https://OufOw28HGq.pke.phelab.com
2024-09-20T13:52:15Z ERR failed to connect to origin error="remote error: tls: handshake failure" originURL=https://OufOw28HGq.pke.phelab.com
(the same handshake failure repeats on every retry, up to 2024-09-20T14:34:06Z)
Any update on this?

kimyvgy commented 5 days ago

I have discovered a workaround that allows me to access the Kube API at 127.0.0.1:40915 on Host 1 from Host 2:

1. Expose the SSH server on Host 1 using Cloudflared:

   tunnel: phelab-pke
   credentials-file: /etc/cloudflared/credentials.json

   originRequest:
     connectTimeout: 10s

   ingress:
     - hostname: <ssh hostname>
       service: ssh://127.0.0.1:22

2. Port-forward via SSH from Host 2:

   ssh -L 40915:127.0.0.1:40915 -N -f phelab > /dev/null 2>&1

3. Now I can use kubectl from Host 2 to interact with the Kube API on Host 1:

   kubectl get pods

I'm also writing a shell script (Fish Shell) to quickly start and stop an SSH connection:

$ ptunnel start phelab-asia
Tunnel to 'phelab-asia' on port 40915 established successfully.
Access it at: https://127.0.0.1:40915

$ ptunnel start phelab-asia
Tunnel to 'phelab-asia' is already running at https://127.0.0.1:40915.

$ kubectl get ns
NAME                 STATUS   AGE
default              Active   6d1h
ingress-nginx        Active   6d
kube-node-lease      Active   6d1h
kube-public          Active   6d1h
kube-system          Active   6d1h
local-path-storage   Active   6d1h

$ ptunnel stop phelab-asia
Tunnel to 'phelab-asia' on port 40915 stopped successfully.
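
The helper is roughly this (a simplified sketch; the fixed port and the pgrep/pkill-based start/stop checks are simplifications of what I actually run):

function ptunnel
    # Usage: ptunnel start|stop <ssh-host>
    set -l action $argv[1]
    set -l host $argv[2]
    set -l port 40915  # assumed fixed port, matching the transcript above

    switch $action
        case start
            # If an ssh forward with this exact command line is running, do nothing.
            if pgrep -f "ssh -L $port:127.0.0.1:$port -N -f $host" > /dev/null
                echo "Tunnel to '$host' is already running at https://127.0.0.1:$port."
                return 0
            end
            ssh -L $port:127.0.0.1:$port -N -f $host
            and echo "Tunnel to '$host' on port $port established successfully."
            and echo "Access it at: https://127.0.0.1:$port"
        case stop
            # Kill the background ssh process that holds the forward open.
            pkill -f "ssh -L $port:127.0.0.1:$port -N -f $host"
            and echo "Tunnel to '$host' on port $port stopped successfully."
    end
end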