telepresenceio / telepresence

Local development against a remote Kubernetes or OpenShift cluster
https://www.telepresence.io

Volume mount on Windows does not work #3715

Closed. Dakror closed this issue 1 week ago.

Dakror commented 3 weeks ago

Describe the bug

Trying to get telepresence working with my cluster, I cannot get the volume mounts to work on Windows. I'm using microk8s 1.29, connected to the cluster node network via WireGuard. To avoid a conflict, I'm using

telepresence connect --kubeconfig=<kubeconfig> --proxy-via all=<workload>

The connection is established, but no T: drive with my volumes appears. My workload uses a cephFS-based PVC, but even with the ignore-volume... annotation no drive is created (I'd expect /var/run/secrets to show up at least).
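As a quick sanity check for whether the mount exists at all (a hedged suggestion; the T: drive letter and the /var/run/secrets path are taken from the expectations above):

dir T:
dir T:\var\run\secrets

If sshfs-win never created the mount, both commands should fail rather than list the secrets directory.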

Logs: log.zip

From the connector logs during a disconnect-reconnect cycle, I found

"C:\Program Files\SSHFS-Win\bin\sshfs-win.exe" sshfs-win cmd -ouid=-1 -ogid=-1 -F none -f -C -oConnectTimeout=10 -o follow_symlinks -o allow_root -o directport=45639 10.1.4.64:/tel_app_exports/api-service T:: exited with 1

When run manually, that command line is invalid as written because the command name appears twice; after removing the duplicate, it fails with connection refused instead. Might be a clue.
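One way to narrow this down (a suggestion only, reusing the IP and directport value from the log line above; Test-NetConnection is a standard PowerShell cmdlet) is to test from the Windows host whether the agent's SFTP endpoint is reachable at all:

Test-NetConnection 10.1.4.64 -Port 45639

If that also reports the connection as refused or filtered, the problem is routing to the SFTP port rather than the sshfs-win invocation itself.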

I've been using Bridge to Kubernetes, but they've announced they're archiving the project, so I'm looking for alternatives.

To Reproduce

Steps to reproduce the behavior:

  1. telepresence connect
  2. telepresence intercept
  3. No T drive is showing up

Expected behavior

Expecting to find the drive either in Explorer, via cmd, or from within the server that I'm trying to debug.

Versions (please complete the following information):

VPN-related bugs: WireGuard was my first guess, since I couldn't reach the node over SSH anymore once telepresence was connected, but that was fixed with the --proxy-via parameter.

thallgren commented 3 weeks ago

@Dakror , I'm not able to reproduce this problem. Can you please try setting your daemon log-levels to debug, rerun the intercept, and then provide the logs again?

Create (or modify existing) %USERPROFILE%\AppData\Roaming\telepresence\config.yml so that it contains this:

logLevels:
  userDaemon: debug
  rootDaemon: debug
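The new log levels only apply to freshly started daemons, so a typical sequence (standard telepresence commands, not specific to this issue) would be to stop the daemons, reconnect, reproduce, and then collect the logs:

telepresence quit -s
telepresence connect --kubeconfig=<kubeconfig> --proxy-via all=<workload>
telepresence gather-logs

telepresence gather-logs writes a zip of the relevant logs that can be attached here.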
Dakror commented 3 weeks ago

logs.zip Here you go. It looks to me like the connection to the SFTP server in the agent pod cannot be established.

thallgren commented 3 weeks ago

Yes, I think this is caused by an oversight in the proxy-via implementation. The mount is trying to connect using the original (cluster-side) IP; it should use the proxied IP here. I'll create a regression test for this and look into how to solve it.

Dakror commented 3 weeks ago

Thank you so much for the rapid response!

thallgren commented 1 week ago

Fix released in version 2.20.3
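To verify the fix has landed (assuming the client is upgraded via your usual install method on Windows), restart the daemons and check the reported version:

telepresence quit -s
telepresence version
telepresence connect --kubeconfig=<kubeconfig> --proxy-via all=<workload>

The client version printed by telepresence version should be 2.20.3 or later.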

Dakror commented 1 week ago

Thank you so much, will give it a shot tomorrow!