@Dakror, I'm not able to reproduce this problem. Can you please set your daemon log levels to debug, rerun the intercept, and then provide the logs again?
Create (or modify the existing) %USERPROFILE%\AppData\Roaming\telepresence\config.yml so that it contains this:
logLevels:
  userDaemon: debug
  rootDaemon: debug
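After saving the config, the daemons need to be restarted for the new log levels to take effect. A minimal sketch of the round trip, assuming the defaults (if I recall the flags right, -s stops both the user and root daemons, and gather-logs bundles the resulting log files into a zip you can attach here):

telepresence quit -s
telepresence connect --kubeconfig=<kubeconfig> --proxy-via all=<workload>
rem reproduce the failing mount, then collect the logs:
telepresence gather-logs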
logs.zip Here you go. It looks to me like the connection to the SFTP server in the agent pod cannot be established.
Yes, I think this is caused by an oversight in the proxy-via implementation. The mount is trying to connect using the original (cluster-side) IP; it should use the proxied IP here. I'll create a regression test for this and look into how to solve it.
Thank you so much for the rapid response!
Fix released in version 2.20.3
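A rough upgrade path, assuming the traffic manager was installed with telepresence helm (adjust if it was installed another way): install the 2.20.3 client, upgrade the cluster side to match, then reconnect and verify:

telepresence quit -s
telepresence helm upgrade
telepresence connect --kubeconfig=<kubeconfig> --proxy-via all=<workload>
telepresence version

telepresence version should then report 2.20.3 for both the client and the traffic manager.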
Thank you so much, will give it a shot tomorrow!
Describe the bug
Trying to get Telepresence working with my cluster, I cannot get the volume mounts to work on Windows. I'm using microk8s 1.29, connected into the cluster node network via WireGuard. To avoid a conflict, I'm using
telepresence connect --kubeconfig=<kubeconfig> --proxy-via all=<workload>
The connection is established, but no T: drive with my volumes appears. My workload is using a CephFS-based PVC, but even with the
ignore-volume...
annotation, no drive is created (I'd expect /var/run/secrets to show up at least). Logs: log.zip
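For reference, a minimal way to check the state after connecting; the port and names below are placeholders, not the exact commands from my session:

telepresence status
telepresence list
telepresence intercept <workload> --port 8080:http --mount=true

With --mount=true the client should pick a free drive letter for the remote volumes once the intercept is active.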
From the connector logs during a disconnect-reconnect cycle, I found
which, when run manually, is invalid because the command is mentioned twice; after removing the duplication it fails with connection refused. Might be a clue.
I've been using Bridge to Kubernetes, but they've announced they're archiving the project, so I'm looking for alternatives.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
Expecting to find a drive, either in Explorer, via cmd, or via the server that I'm trying to debug.
Versions (please complete the following information):
Output of
telepresence version
OSS Client         : v2.20.0
OSS Root Daemon    : v2.20.0
OSS User Daemon    : v2.20.0
OSS Traffic Manager: v2.20.0
Traffic Agent      : ghcr.io/telepresenceio/tel2:2.20.0
Operating system of workstation running telepresence commands: Win10 Pro 22H2 (10.0.19045)
Kubernetes environment and Version [e.g. Minikube, bare metal, Google Kubernetes Engine]: microk8s (Kubernetes 1.29) on bare metal, networking layer is kube-ovn, nodes are running Fedora Server 38 with a normal amount of firewall settings
VPN-related bugs: WireGuard was my first guess, since I couldn't SSH to the node anymore once Telepresence was connected, but this was fixed with the --proxy-via parameter.
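In case it matters, a narrower variant should also be possible: if I read the docs right, --proxy-via accepts specific subnets instead of all, so only the cluster ranges need to be routed through the workload. The CIDRs below are just the microk8s defaults and would need to match the cluster's actual service and pod subnets:

telepresence connect --kubeconfig=<kubeconfig> --proxy-via 10.152.183.0/24=<workload> --proxy-via 10.1.0.0/16=<workload>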