Closed: mkfdoherty closed this issue 2 years ago.
This isn't really helpful if the Postgres server enforces SSL.
Just to add: disabling SSL is simply not possible in many environments, given the security implications.
I have to say, I just don't understand the reasoning behind the original change that caused this - it's very definitely what I would class as "breaking", and all of the use cases we've seen in here are hardly what I would call edge cases. We're talking about proxying to arguably the world's most popular SQL server.
Right now we're stuck on an old version; this workaround doesn't work for us.
> This isn't really helpful if the Postgres server enforces SSL.
I agree. It is just a workaround and does not address the original problem.
This issue needs reopening
Seeing the same, on several pods.
```
Client Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.3", GitCommit:"434bfd82814af038ad94d62ebe59b133fcb50506", GitTreeState:"clean", BuildDate:"2022-10-12T10:47:25Z", GoVersion:"go1.19.2", Compiler:"gc", Platform:"darwin/arm64"}
Kustomize Version: v4.5.7
Server Version: version.Info{Major:"1", Minor:"24+", GitVersion:"v1.24.10-eks-48e63af", GitCommit:"9176fb99b52f8d5ff73d67fea27f3a638f679f8a", GitTreeState:"clean", BuildDate:"2023-01-24T19:17:48Z", GoVersion:"go1.19.5", Compiler:"gc", Platform:"linux/amd64"}
```
I am facing the same issue on an M1 Mac; this issue needs reopening.
Same problem here: latest k8s, latest kubectl, tried on several laptops and clusters.
Same issue here, kubectl with kind. Port forwarding for port 443 works only once, after which the connection is lost and all further requests are left hanging.
Same issue with an M1 MBP, trying unsuccessfully to port-forward a Redis pod to local port 6379.
I had the same issue here, port-forwarding 9090 on my load balancer, on a cluster created with kind.
I solved it by adding extra port mappings to my kind config (usage shown after the config below):
```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
  - containerPort: 9090
    hostPort: 9090
    protocol: TCP
  - containerPort: 1833
    hostPort: 1833
    protocol: TCP
  - containerPort: 4222
    hostPort: 4222
    protocol: TCP
- role: worker
- role: worker
- role: worker
```
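To use it (a sketch; kind-config.yaml is just an example filename for the config above):

```sh
# Create the cluster with the host-to-node port mappings baked in.
kind create cluster --config kind-config.yaml
```

With the mappings in place, those host ports reach the node directly, so kubectl port-forward isn't needed for the mapped services.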
For those watching, it's possible that this issue is the same as https://github.com/kubernetes/kubernetes/issues/74551, which people believe is actually an issue with the container runtime.
There is a proposed PR to fix containerd here: https://github.com/containerd/containerd/pull/8418
Also, here is another related issue, for reference: https://github.com/kubernetes/kubectl/issues/1368
Since I'm seeing the same problem with port-forward closing after the first connection, I resorted to downloading a much older version of kubectl to use for the port-forward: https://cdn.dl.k8s.io/release/v1.22.14/bin/linux/amd64/kubectl
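A minimal sketch of that setup, keeping the old client under a separate name (kubectl22 is just a local convention, matching the kubectl22 used in the output below):

```sh
# Fetch the v1.22.14 client from the release CDN and install it
# alongside the regular kubectl under a distinct name.
curl -Lo kubectl22 https://cdn.dl.k8s.io/release/v1.22.14/bin/linux/amd64/kubectl
chmod +x kubectl22
sudo mv kubectl22 /usr/local/bin/kubectl22
```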
Newer version of kubectl port-forward:
```
$ oc port-forward -n vizone-dev service/vizone-db 15432:5432
Forwarding from 127.0.0.1:15432 -> 5432
Forwarding from [::1]:15432 -> 5432
Handling connection for 15432
Handling connection for 15432
E0928 14:20:52.249249 3315375 portforward.go:406] an error occurred forwarding 15432 -> 5432: error forwarding port 5432 to pod 7d75b15749aa96e0d76632f20a00ad08aa2ff5482d10f1688c9c75ad6f60c669, uid : port forward into network namespace "/var/run/netns/60b9691c-10e7-42bc-8b14-1f38ee62268f": read tcp [::1]:38464->[::1]:5432: read: connection reset by peer
E0928 14:20:52.250059 3315375 portforward.go:234] lost connection to pod
```
The 1.22 version of kubectl port-forward, with me connecting and exiting with psql twice; despite the forwarding errors, the port-forward remains running:
```
$ kubectl22 port-forward -n vizone-dev service/vizone-db 15432:5432
Forwarding from 127.0.0.1:15432 -> 5432
Forwarding from [::1]:15432 -> 5432
Handling connection for 15432
Handling connection for 15432
E0928 14:27:45.878971 3316749 portforward.go:400] an error occurred forwarding 15432 -> 5432: error forwarding port 5432 to pod 7d75b15749aa96e0d76632f20a00ad08aa2ff5482d10f1688c9c75ad6f60c669, uid : port forward into network namespace "/var/run/netns/60b9691c-10e7-42bc-8b14-1f38ee62268f": read tcp [::1]:32780->[::1]:5432: read: connection reset by peer
Handling connection for 15432
Handling connection for 15432
E0928 14:27:52.651387 3316749 portforward.go:400] an error occurred forwarding 15432 -> 5432: error forwarding port 5432 to pod 7d75b15749aa96e0d76632f20a00ad08aa2ff5482d10f1688c9c75ad6f60c669, uid : port forward into network namespace "/var/run/netns/60b9691c-10e7-42bc-8b14-1f38ee62268f": read tcp [::1]:49546->[::1]:5432: read: connection reset by peer
```
The trouble still exists on k8s 1.28.5 and kubectl 1.29.1:
```
$ kubectl -n postgres-operator port-forward pod/hippo-hippo-instance-x5ls-0 5432:5432
Forwarding from 127.0.0.1:5432 -> 5432
Forwarding from [::1]:5432 -> 5432
Handling connection for 5432
Handling connection for 5432
E0119 19:52:37.036121 1412760 portforward.go:409] an error occurred forwarding 5432 -> 5432: error forwarding port 5432 to pod 8dacd19a0fc4a570997e394e1ec4b98d849ac129377f266c7377bef888d4511e, uid : failed to execute portforward in network namespace "/var/run/netns/cni-76d1769b-331f-b977-5ae3-7c24f677755d": read tcp4 127.0.0.1:46938->127.0.0.1:5432: read: connection reset by peer
E0119 19:52:37.080359 1412760 portforward.go:370] error creating forwarding stream for port 5432 -> 5432: EOF
error: lost connection to pod
```
Same for MariaDB with a simple port-forward. No SSL, using default options with JDBC.
```
E0130 14:27:35.925386 25068 portforward.go:406] an error occurred forwarding 3307 -> 3307: error forwarding port 3307 to pod cbe8b3812e0dd7e8c793f4395d80bc3a42679829b7ab8f2c3831eaf7f4003e2b, uid : failed to execute portforward in network namespace "/var/run/netns/cni-0e9c2674-0ed6-b133-3453-140c73d6899b": failed to connect to localhost:3307 inside namespace "cbe8b3812e0dd7e8c793f4395d80bc3a42679829b7ab8f2c3831eaf7f4003e2b", IPv4: dial tcp4 127.0.0.1:3307: connect: connection refused IPv6 dial tcp6: address localhost: no suitable address found
E0130 14:27:35.999824 25068 portforward.go:234] lost connection to pod
```
Just ran into this issue running Kubernetes locally. It was fixed by increasing the memory available to the node.
A better-than-nothing workaround: wrap it in a while true loop:
```sh
while true; do kubectl port-forward ...; echo .; done
```
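A slightly gentler variant (a sketch; service/my-db and the ports are placeholders for your own target) pauses between restarts so a persistent failure doesn't hot-loop:

```sh
# Re-establish the forward whenever it exits; the short sleep keeps a
# persistent failure (e.g. a deleted pod) from spinning the CPU.
while true; do
  kubectl port-forward service/my-db 15432:5432
  echo "port-forward exited; restarting..."
  sleep 1
done
```

Note this only restarts the tunnel; the client on top still sees its connection drop and has to reconnect.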
Same issue with both server and client at version v1.27.9, on an old Mac. Please reopen the issue. Thanks.
I am also seeing the same issue with a simple nginx pod exposed on port 8080.
Same issue here:
```
# kubectl version
Client Version: v1.29.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.18.4-tke.25
```
Update: just realized that only ports bound to localhost on the target pod can be port-forwarded. It'd be good if we could have an option to make all ports forwardable.
The related issues are:
It also looks like @sxllwx has a PR which might fix it: https://github.com/kubernetes/kubernetes/pull/117493
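To illustrate the localhost-binding constraint from the update above (a sketch; the pf-test pod name and image are made up for the example): port-forward dials localhost inside the pod's network namespace, so only processes reachable on the pod's loopback address accept forwarded connections.

```sh
# A server bound to all interfaces (0.0.0.0 includes loopback) is
# reachable through a port-forward:
kubectl run pf-test --image=python:3 --restart=Never -- \
  python3 -m http.server 8080 --bind 0.0.0.0
kubectl port-forward pod/pf-test 8080:8080

# A server started with --bind set to the pod IP only would refuse the
# forwarded connection, failing like the "failed to connect to
# localhost:3307 inside namespace" error quoted earlier for MariaDB.
```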
Same here, had to disable SSL mode in PyCharm Pro to fix the connection being dropped after the first attempt.
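For command-line clients, the equivalent of that PyCharm setting (an illustration; host, port, and database are placeholders) is to disable TLS on the tunneled connection via libpq's standard sslmode option:

```sh
# Connect through the forwarded local port with TLS negotiation off.
psql "host=127.0.0.1 port=15432 dbname=postgres sslmode=disable"
```

As commenters note above, this is no help when the server enforces SSL.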
Having the same issue with port-forwarding since Kubernetes version 1.29.4; everything works fine on the version we currently run, 1.28.5 (kubectl version output omitted). Any other suggestions, workarounds, or actual fixes?
What happened:
When running Kubernetes v1.23.1 on Minikube with kubectl v1.23.2, I experienced the following unexpected behaviour when trying to create a port-forward to a pod running an arbitrary service (kubectl version output omitted).
What we see is that after the first netcat connection successfully closes, we get a lost connection to the pod and the port-forward closes (kubectl port-forward output omitted). We would expect the connection to stay open, as is the case with Kubernetes before v1.23.0.
What you expected to happen:
When running the test against EKS running Kubernetes version v1.21.5-eks-bc4871b, we get the port-forward behavior we are used to: the port-forward remains open after the first successful netcat connection (kubectl version output omitted).
Notice how the kubectl version is v1.23.2 and the server version is v1.21.5-eks-bc4871b. EKS seems to manage version skew on its own somehow.
The output we get after opening multiple connections is what we expect: the connection is not closed after subsequent nc commands (don't be alarmed by the connection refusal from PostgreSQL; we are not using the right protocol or credentials, we are just trying to test the port-forward behavior, and this is a simple way to express the issue). As the kubectl port-forward output (omitted) shows, the port-forward connection lasts for many netcat connections. This is the behavior we expect.
For completeness, this was also tested using Minikube running v1.21.5 Kubernetes. The problem still exists if we don't take version skew into account, but if we match the kubectl and Minikube Kubernetes versions to v1.21.5, then we again get the expected behavior of port-forwards remaining open past the first connection.
How to reproduce it (as minimally and precisely as possible):
My test is as follows:
1. Create a port-forward to a pod running a service (kubectl port-forward $POD_WITH_SERVICE 5432:5432).
2. Connect to the forwarded port with netcat (nc -v localhost 5432), close the connection, and connect again.
Tests were conducted against Kubernetes versions v1.21.5, v1.22.1 and v1.23.1 on Minikube using minikube start --kubernetes-version=v1.21.5. Using minikube kubectl -- we can match the kubectl version to the Kubernetes version Minikube is using, to avoid version skew. The problem I describe only appears when running Kubernetes above v1.23.0.
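Put together, a minimal sketch of the repro above ($POD_WITH_SERVICE stands in for whatever pod you forward to):

```sh
# Terminal 1: start the forward.
kubectl port-forward "$POD_WITH_SERVICE" 5432:5432

# Terminal 2: connect, close (Ctrl-C), then connect again. On affected
# versions (Kubernetes above v1.23.0) the second nc fails because the
# port-forward exited when the first connection closed.
nc -v localhost 5432
nc -v localhost 5432
```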
Anything else we need to know?:
Based on the above testing, it would seem that there is a bug introduced in kubectl > v1.23.0 which causes port-forwards to close immediately after a successful connection. This is a problem given the above test expects the old behaviour of long-lasting kubectl port-forwards. My assumption is that this is a bug, based on there being no explicit mention of this behavior in CHANGELOG-1.23, so it may be a regression. Could someone please shed light on whether this is a regression, or expected behavior now for reasons unbeknown to me?
Environment:
- Kubernetes version (use kubectl version): listed above; also tested against v1.21.5-eks-bc4871b to verify behavior.
- OS (e.g. cat /etc/os-release): when testing locally, on a Docker node: [output truncated]