outbounder closed this issue 4 years ago
Good morning, could it be that the port (28688) is already taken by another application? Is it possible that another instance of ktunnel is running and occupying this port?
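A quick way to check locally whether anything is already listening on that port (a sketch; `28688` is ktunnel's default port from this thread, and bash's `/dev/tcp` device is used to avoid depending on `lsof` or `ss` being installed):

```shell
PORT=28688
# Attempt a TCP connection to the port; if it succeeds, something is listening.
if (exec 3<>"/dev/tcp/127.0.0.1/$PORT") 2>/dev/null; then
  echo "port $PORT is in use"
else
  echo "port $PORT is free"
fi
```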
Good morning sir :)
Well, I've tried with different ports (several times) but to no avail.
Also, I'm trying to find the pod, but neither `kubectl describe pod <uid>` nor `kubectl get pods` shows ktunnel's pod... For `<uid>` I'm using the long hash from the output logs (but I doubt that's the right thing :) ).
Update: found the deployment and ktunnel's pod (after realizing that they are named after the exposed service).
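For anyone else hitting this: since the ktunnel server deployment reuses the exposed service's name, it can be located directly (a sketch using the names and labels from the spec pasted later in this issue; substitute your own service name and namespace):

```shell
# The ktunnel server deployment is named after the exposed service.
kubectl get deployment landing-v2-webapp -n outbounder
# Its pod carries matching labels, so select by label instead of guessing the hash suffix.
kubectl get pods -n outbounder -l app.kubernetes.io/name=landing-v2-webapp
# Tail the server logs through the deployment.
kubectl logs -n outbounder deployment/landing-v2-webapp
```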
The pod's logs are as follows:

```
time="2020-02-06T08:36:33Z" level=error msg="Failed getting kubernetes config: stat /.kube/config: no such file or directory"
time="2020-02-06T08:36:33Z" level=info msg="Starting to listen on port 8000"
```
So that seems weird - ktunnel is listening on port 8000 :\ And it's weird because the only way to change it is with an explicit flag.. I'm looking into this
@outbounder can you extract the deployment spec for the ktunnel server and paste it here please?
@omrikiei Sorry for the port 8000 mislead, I'm just testing with different ports ;) Without executing `ktunnel expose ... -p 8000` it listens on the default port 28688 as expected, but the kube/config error is still logged.
Here are the up-to-date issue details:

```
$ ktunnel expose landing-v2-webapp 8084:8084 -n outbounder -v
INFO[0000] Exposed service's cluster ip is: 10.31.245.162
INFO[0000] waiting for deployment to be ready
INFO[0006] All pods located for port-forwarding
INFO[0006] Waiting for port forward to finish
INFO[0006] Forwarding from 127.0.0.1:28688 -> 28688
Forwarding from [::1]:28688 -> 28688
INFO[0006] starting tcp tunnel from source 8084 to target 8084
E0206 10:53:04.831638 11448 portforward.go:400] an error occurred forwarding 28688 -> 28688: error forwarding port 28688 to pod 83b0bbe90c79f7a76c732bd60679318c3a4fe5a91fe3b97e666942db9aebe6eb, uid : exit status 1: 2020/02/06 08:53:04 socat[179274] E connect(5, AF=2 127.0.0.1:28688, 16): Connection refused
ERRO[0006] Error sending init tunnel request: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection closed
```

The ktunnel server pod's logs:

```
time="2020-02-06T08:53:09Z" level=error msg="Failed getting kubernetes config: stat /.kube/config: no such file or directory"
time="2020-02-06T08:53:09Z" level=info msg="Starting to listen on port 28688"
```
The deployment spec:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: "2020-02-06T08:52:58Z"
  generation: 1
  labels:
    app.kubernetes.io/instance: landing-v2-webapp
    app.kubernetes.io/name: landing-v2-webapp
  name: landing-v2-webapp
  namespace: outbounder
  resourceVersion: "109926777"
  selfLink: /apis/apps/v1/namespaces/outbounder/deployments/landing-v2-webapp
  uid: 12cd5c08-48be-11ea-9ad9-42010a840042
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/instance: landing-v2-webapp
      app.kubernetes.io/name: landing-v2-webapp
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/instance: landing-v2-webapp
        app.kubernetes.io/name: landing-v2-webapp
    spec:
      containers:
      - args:
        - server
        - -p
        - "28688"
        command:
        - /ktunnel/ktunnel
        image: quay.io/omrikiei/ktunnel:latest
        imagePullPolicy: Always
        name: ktunnel
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: "2020-02-06T08:53:04Z"
    lastUpdateTime: "2020-02-06T08:53:04Z"
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  - lastTransitionTime: "2020-02-06T08:52:58Z"
    lastUpdateTime: "2020-02-06T08:53:04Z"
    message: ReplicaSet "landing-v2-webapp-57d54f764f" has successfully progressed.
    reason: NewReplicaSetAvailable
    status: "True"
    type: Progressing
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
```
@outbounder, I tried reproducing the same behaviour without success. Does your cluster have any security policy or a security application?
Another thing I'm looking at: by the server logs, it started listening at 08:53:09 while socat failed at 08:53:04, though this comparison ignores any clock differences between your local machine and the Kubernetes node.
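One way to rule out clock skew is to compare the local clock with the clock inside the ktunnel pod (a sketch using the deployment name from this issue; it assumes the ktunnel image ships a `date` binary, which may not hold for minimal images):

```shell
# Local clock in UTC, matching the log timestamp format.
date -u +"%Y-%m-%dT%H:%M:%SZ"
# Clock inside the ktunnel server pod (assumes `date` exists in the image).
kubectl exec -n outbounder deployment/landing-v2-webapp -- date -u +"%Y-%m-%dT%H:%M:%SZ"
```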
Hey @outbounder, just checking if you had a chance to play around with this functionality again...
Hey @omrikiei , I've just tested successfully with v1.1.10 so I guess this is some sort of false positive.
Closing the issue, will reopen in case the problem arises again.
Thanks for the amazing work and wonderful support! :bowing_man:
Hi @omrikiei :)
It's me again, this time with the following:
I'll appreciate :heart_eyes: any clues as always :)
Kind regards, Boris