Closed: cjkini closed this issue 3 years ago.
Sounds like a good idea. It would be useful to be able to run rtun and rtun-server with docker/docker-compose. Maybe something like this?
docker run \
  -e GATEWAY_URL=wss://example.com \
  -e AUTH_KEY=0123... \
  -e FORWARD=8080/tcp:192.168.1.10:8080 \
  snsinfu/rtun
I'll look into it.
Yes @snsinfu, you made a good point. Two Docker images are required: one for the agent and one for the gateway.
I published docker images snsinfu/rtun and snsinfu/rtun-server. The usage is documented here.
In short:
# Server
docker run -it \
  -p 9000:9000 \
  -p 8080:8080 \
  -e RTUN_PORT=9000 \
  -e RTUN_AGENT="8080/tcp @ samplebfeeb1356a458eabef49e7e7" \
  snsinfu/rtun-server

# Agent
docker run -it \
  -e RTUN_GATEWAY="ws://0.1.2.3:9000" \
  -e RTUN_KEY="samplebfeeb1356a458eabef49e7e7" \
  -e RTUN_FORWARD="8080/tcp:192.168.1.10:8080" \
  snsinfu/rtun
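Since docker-compose was mentioned earlier in the thread, the two commands above could also be combined into a single compose file. The following is a hypothetical sketch using the same sample values (key, ports, and the 192.168.1.10 forward target are placeholders), with the agent reaching the gateway through the compose service name instead of a public address:

```shell
# Hypothetical docker-compose.yml combining the server and agent commands
# above. All values are the sample placeholders from this thread; adjust
# them for a real deployment.
cat > docker-compose.yml <<'EOF'
services:
  rtun-server:
    image: snsinfu/rtun-server
    ports:
      - "9000:9000"
      - "8080:8080"
    environment:
      RTUN_PORT: "9000"
      RTUN_AGENT: "8080/tcp @ samplebfeeb1356a458eabef49e7e7"

  rtun:
    image: snsinfu/rtun
    environment:
      # The agent resolves the gateway by its compose service name.
      RTUN_GATEWAY: "ws://rtun-server:9000"
      RTUN_KEY: "samplebfeeb1356a458eabef49e7e7"
      RTUN_FORWARD: "8080/tcp:192.168.1.10:8080"
EOF
```

Running `docker compose up -d` would then start both containers on a shared network. This is untested and only mirrors the environment variables documented above.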
Hi @snsinfu Tq for this Docker Image. I will give myself a try by tomorrow evening and feedback you with my finding. Thanks ya.
Hi @snsinfu, I tested this. However, when I try to SSH using the forwarded IP, the connection is refused with this error: "Tunnelling error: dial tcp 127.0.0.1:22: connect: connection refused"
This is my rtun gateway command:

docker run --restart unless-stopped \
  -p 9000:9000 \
  -p 10022:10022 \
  -e RTUN_PORT=9000 \
  -e RTUN_AGENT="10022/tcp @ a79a4c3ae4ecd33b7c078631d3424137ff332d7897ecd6e9ddee28df138a0064" \
  snsinfu/rtun-server

This is my rtun agent command:

docker run --restart unless-stopped \
  -e RTUN_GATEWAY="ws://myserverip:9000" \
  -e RTUN_KEY="a79a4c3ae4ecd33b7c078631d3424137ff332d7897ecd6e9ddee28df138a0064" \
  -e RTUN_FORWARD="10022/tcp:127.0.0.1:22" \
  snsinfu/rtun
You need to specify the host IP address instead of 127.0.0.1 because from rtun's point of view 127.0.0.1 is the container itself, not the host.
So, if 192.168.1.2 is the IP address of the machine that runs rtun, the forwarding rule becomes 10022/tcp:192.168.1.2:22. 10022/tcp:host.docker.internal:22 should also work on Windows and Mac.
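One caveat worth noting: host.docker.internal is only defined out of the box on Docker Desktop (Windows/Mac). On Linux, Docker 20.10 and later can provide it through the --add-host flag with the special host-gateway value. A hedged, untested sketch (the key is a placeholder):

```shell
# On Linux, map host.docker.internal to the host's gateway IP explicitly.
# Requires Docker 20.10+; <your-key> is a placeholder, not a real key.
docker run --add-host=host.docker.internal:host-gateway \
  -e RTUN_GATEWAY="ws://myserverip:9000" \
  -e RTUN_KEY="<your-key>" \
  -e RTUN_FORWARD="10022/tcp:host.docker.internal:22" \
  snsinfu/rtun
```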
Maybe docker run --net=host is easier, but I have not experimented with it.
I see. In that case the machine that runs the agent must always have a fixed IP, right? In my case I am experimenting with an IoT gateway or Raspberry Pi connected to the public Internet with a dynamic IP. Hence, I am using this tunnel service to ensure all my gateways and Pis can be accessed via the fixed IP provided by the tunnel service.
With the default network mode, yes. There seem to be workarounds though.
In my testing, changing the network mode to --net=host did the trick too. Could you try the following?
docker run --net=host --restart unless-stopped \
  -e RTUN_GATEWAY="ws://myserverip:9000" \
  -e RTUN_KEY="a79a4c3ae4ecd33b7c078631d3424137ff332d7897ecd6e9ddee28df138a0064" \
  -e RTUN_FORWARD="10022/tcp:127.0.0.1:22" \
  snsinfu/rtun
Hi @snsinfu .
I tested this with the --net=host argument. The agent manages to connect to the gateway and create the tunnel, and I can SSH to the device via the new forwarded IP and port.
However, if I exit the agent container (Ctrl+Z) and then reconnect, I can't connect with the forwarded IP and port anymore. The error at the agent was "Agent error "websocket: close 1000 (normal)" - recovering..." and the error at the gateway was {"time":"2021-02-04T15:02:38.022768508Z","level":"ERROR","prefix":"echo","file":"responder.go","line":"42","message":"listen tcp :10022: bind: address already in use"}
Any idea why this thing occured?
Thanks for your feedback @snsinfu
Yeah, I see similar errors. In my case, the agent fails two times and eventually recovers in 20 seconds or so.
$ docker run --net=host -e RTUN_GATEWAY='ws://localhost:9000' -e RTUN_KEY='sample' -e RTUN_FORWARD='8080/tcp:localhost:22' snsinfu/rtun
2021/02/04 17:07:24 Listening on remote port: 8080/tcp
2021/02/04 17:07:24 Agent error "websocket: close 1000 (normal)" - recovering...
2021/02/04 17:07:34 Listening on remote port: 8080/tcp
2021/02/04 17:07:34 Agent error "websocket: close 1000 (normal)" - recovering...
2021/02/04 17:07:44 Listening on remote port: 8080/tcp
2021/02/04 17:07:49 Tunneling remote connection from 172.17.0.1:60550 to localhost:22
...
This occurs because Ctrl+Z immediately kills rtun without cleanly closing the connection to rtun-server. So, rtun-server keeps the old connection until a timeout (10-20 seconds). Until the old connection is cleaned up, rtun can't reconnect to the server.
I know it's frustrating, but waiting for ~20 seconds should fix the broken tunnel.
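A possible way to avoid the wait entirely is to stop the container in a manner that lets rtun close its WebSocket before exiting, instead of suspending it with Ctrl+Z. These commands are an untested sketch; rtun-agent is a hypothetical container name and <your-key> is a placeholder:

```shell
# Start the agent detached with a name so it is easy to stop later.
docker run -d --name rtun-agent --net=host \
  -e RTUN_GATEWAY="ws://myserverip:9000" \
  -e RTUN_KEY="<your-key>" \
  -e RTUN_FORWARD="10022/tcp:127.0.0.1:22" \
  snsinfu/rtun

# docker stop sends SIGTERM first, giving the process a chance to shut
# down cleanly before the default 10-second kill timeout.
docker stop rtun-agent
```

If the container was started interactively with -it, detaching with Ctrl+P Ctrl+Q (rather than Ctrl+Z) also leaves it running with the tunnel intact.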
Hi @snsinfu
You are right; after waiting for 20 seconds, the tunnel comes back up.
By the way, have you tested with the --restart always argument? The tunnel does not seem to reconnect automatically after a restart, and even firing the docker command manually leads to more than 20 seconds of reconnection delay.
This is my command:

docker run --restart always --net=host \
  -e RTUN_GATEWAY="ws://myip.com.ml" \
  -e RTUN_KEY="a79a4c3ae4ecd33b7c078631d3424137ff332d7897ecd6e9ddee28df138a0064s" \
  -e RTUN_FORWARD="10022/tcp:127.0.0.1:22" \
  snsinfu/rtun
This is the result:

2021/02/05 03:03:50 Listening on remote port: 10022/tcp
2021/02/05 03:03:50 Agent error "websocket: close 1000 (normal)" - recovering...
2021/02/05 03:04:00 Listening on remote port: 10022/tcp
2021/02/05 03:04:00 Agent error "websocket: close 1000 (normal)" - recovering...
(the same two lines repeat every 10 seconds, up to 03:09:50, without ever recovering)
No, I haven't tested with --restart always. But from the log I suspect that you are running multiple rtun containers. Would you check the output of docker ps? Maybe an old container with --restart always is hanging around and occupying the tunnel.
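To check for such a stale container, something like the following could be used (untested sketch; <container-id> stands for whatever docker ps prints):

```shell
# List running containers created from the rtun image.
docker ps --filter ancestor=snsinfu/rtun

# If a leftover container with --restart always is occupying the tunnel,
# force-remove it so the restart policy does not bring it back:
docker rm -f <container-id>
```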
I think the issue was possibly due to https://github.com/snsinfu/reverse-tunnel/issues/11. It's mitigated in v1.3.0.
Docker images have been released, so I am closing this issue. Thank you for the suggestions and feedback!
Hi @snsinfu .
Do you plan to distribute this as a Docker image? Thank you.