haugene / docker-transmission-openvpn

Docker container running Transmission torrent client with WebUI over an OpenVPN tunnel
GNU General Public License v3.0
4.08k stars · 1.21k forks

Unable to access webUI (Synology DSM) #651

Closed ghost closed 5 years ago

ghost commented 5 years ago

I cannot access web UI at all.

I followed https://www.reddit.com/r/VPNTorrents/comments/900w78/xpost_rsynology_setting_up_the/ to set everything up. The container starts and runs fine in Docker.

NAS: 192.168.1.169
Router: 192.168.1.1
Client (PC): 192.168.1.130

I explicitly added a firewall exception for the Docker app in DSM -> Security -> Firewall so it can be accessed by the client at 192.168.1.130. (I previously also allowed the whole range from 192.168.1.1 to 192.168.1.255.)

Docker app settings: Local Port 1: 32770, Local Port 2: 32771, Container Port 1: 8888, Container Port 2: 9091
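For reference, that DSM mapping corresponds to the following docker run port flags (a sketch; the pairing of local to container ports is assumed from the order shown above, and the remaining flags are elided):

```shell
# Assumed pairing from the DSM settings above: host 32770 -> container 8888
# (the proxy port) and host 32771 -> container 9091 (the Transmission WebUI),
# so the WebUI would be expected at http://192.168.1.169:32771.
docker run -p 32770:8888 -p 32771:9091 ... haugene/transmission-openvpn
```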

I tried the following and it did NOT work:
http://192.168.1.169:32770
https://192.168.1.169:32770
http://192.168.1.169:32771
https://192.168.1.169:32771
http://192.168.1.169:8888
https://192.168.1.169:8888
http://192.168.1.169:9091
https://192.168.1.169:9091

The error message is always the same in every browser, even after clearing the cache: "This site can't be reached".

The previously discussed issue doesn't offer a solution: https://github.com/haugene/docker-transmission-openvpn/issues/117

Does anyone have a solution?

Ascotg commented 5 years ago

I've got the same problem. Does your log also say this:

transmission-remote: (http://localhost:9091/transmission/rpc/) Couldn't connect to server

My log is attached: transmission-openvpn.txt

ghost commented 5 years ago

Yes.

The port and default GUI of the package are not set up properly. The instructions do not work.

Just to be clear: this package is the ONLY package not working in Docker for me. I run a variety of packages in Docker and have NEVER had issues accessing a web GUI.

There must be something fundamentally wrong with how the ports are configured. Either that, or the web GUI is missing altogether.

That's disappointing. I wanted to send the developer a contribution, but it doesn't seem that this package is supported any longer.

> I've got the same problem. Is your log also saying this:
>
> transmission-remote: (http://localhost:9091/transmission/rpc/) Couldn't connect to server
>
> My log is attached: transmission-openvpn.txt

haugene commented 5 years ago

Yeah, very disappointing I guess. This package is very much supported, I would say; it depends on how you define it. There have been 8 releases this year, with the latest being 6 weeks ago - how do you draw your conclusions? There are many of these issues and questions, and they very rarely end up needing an update to the image; it's mostly about networking or Docker knowledge. This image has been developed and refined for 4+ years and is now quite stable.

I have to give these kinds of posts some time to mature. In over half of the cases people just need time to fiddle a bit more and figure it out for themselves. But ok, let's have a look at your issue. You need to provide logs and your run command (or which variables you're running with) for me to say anything about this. You're basically saying "it's not working, how do I make it work?".

Have you read and tried the suggestions from #354 or the related issues? When you say that this is the only (sorry, ONLY) package that is not working on Docker - I guess you don't have a VPN in all the others? I'm guessing this is, like most of the other related issues, a networking problem where the VPN correctly tunnels outgoing traffic - but gets greedy and grabs your local traffic too. We have the LOCAL_NETWORK variable to exclude a range of IPs, but if there are multiple layers of routing or NAT (which there can be in these virtual Docker environments, especially on a NAS), then the destination address on the packet is not the same in the container as it is once it's hit your network.

Have you tried the proxy solution described in the README? That creates a sibling container on the same network, and you can proxy traffic through it to see if stuff is running. Because it is also a container, it will be on the VPN container's local network by default - no LOCAL_NETWORK needed.

You can also try to access the UI from within the container first, to see if it is responding at all. Exec into the container and run curl localhost:9091, what do you get?
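As a concrete sketch of that check from the host (the container name below is a placeholder; use whatever `docker ps` shows):

```shell
# Probe Transmission's RPC/WebUI port from inside the container.
# "transmission-openvpn" is a placeholder container name, not from the thread.
docker exec -it transmission-openvpn curl -I http://localhost:9091
```

A healthy daemon answers with a 301 redirect to /transmission/web/; "Connection refused" means Transmission itself never came up.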

As for @Ascotg, from your logs it seems like everything is working. The transmission-remote error is not critical as it only relates to port forwarding. It should work, but it's not the reason why you don't see the UI. I think it's the same as above.

Can you try the curl command inside the container to see if you get a response? Then try the proxy solution? I see you get "Permission denied" on transmission.log as well. That can happen if you first created the container running as root and added the UID/GID afterwards. Then the files were created as root and you're now reading them as a regular user. You can try deleting the existing files or mounting a new /data dir to see how that goes.

Ascotg commented 5 years ago

Dear Haugene,

Thank you for the post. Unfortunately, it still isn't working.

(screenshot attached: capture)

I've linked them as suggested (though the proxy was pre-set for Synology on port 80; I tried 8080, but that gave an Nginx "Bad gateway" error). Yet whatever I tried, I couldn't reach the Web UI.

Log file: *6 connect() failed (111: Connection refused) while connecting to upstream, client: 172.17.0.1, server: , request: "GET /favicon.ico HTTP/1.1", upstream: "http://172.17.0.5:9091/favicon.ico", host: "192.168.0.20:9092", referrer: "http://192.168.0.20:9092/"

I've since removed the LOCAL_NETWORK=192.168.0.0/24 variable, and that error no longer occurs. Yet still no UI.

This might be a stupid question, but do I need to change the User ID and Group ID you gave in the Synology example?

Kind regards, Ascotg

haugene commented 5 years ago

Hmm. So something is not right. A bit surprised as your logs looked good. But curl'ing on my machine gives the following:

root@daf23332b1bc:/# curl -I localhost:9091
HTTP/1.1 301 Moved Permanently
Server: Transmission
Location: /transmission/web/
Date: Tue, 04 Dec 2018 20:25:31 GMT

root@daf23332b1bc:/# curl localhost:9091/transmission/web/
<!DOCTYPE html>
<html>
        <head>
                <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
                <meta http-equiv="X-UA-Compatible" content="IE=8,IE=9,IE=10"><!-- ticket #4555 -->

...
...

First request gives a 301 redirect to /transmission/web/ and there I get the HTML.

I don't know what else you can do. The UID/GID settings shouldn't matter too much at this point, but you could try omitting them. That will make Transmission run as root, and downloaded files will then also be owned by root - meaning you just need to chown them before moving them around, or do everything with sudo.

But to narrow down where your problem lies, it's good if you do a minimal setup. Normally I would ask you for the docker run command, but I guess you set this up in a NAS GUI somewhere?

As a run command I guess this is as basic as you get:

docker run --cap-add=NET_ADMIN --device=/dev/net/tun -d \
              -e OPENVPN_PROVIDER=PROVIDER \
              -e OPENVPN_USERNAME=user \
              -e OPENVPN_PASSWORD=pass \
              haugene/transmission-openvpn

Here you don't mount any folders, don't bind any ports, and the process is running as root. But exec'ing in and running curl localhost:9091 should still work. Can you try setting up something similar?

Ascotg commented 5 years ago

Wow, quick reply. Thanks.

I did an SSH to my Synology and ran the command.

We're making progress, as the curl command gives some positive info. (screenshot attached: capture)

Ascotg commented 5 years ago

Meanwhile I've adjusted it so that it does port forwarding and LOCAL_NETWORK is set correctly.

And yay, I can reach the UI now. However, I assume there'll be trouble with the fact that it's running as root, no?

haugene commented 5 years ago

Well. First you need to add the volume mount to /data so that you can access the files from the host system. And then you will see that all downloaded files will be owned by root - yes.

How big of a problem that is depends on what you do with the files from there. It's a bit of a hassle because ordinary users and scripts can then not move the files without escalating to sudo/root privileges.
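A minimal sketch of that cleanup on the host (the path is illustrative, not from this thread):

```shell
# Reclaim root-owned downloads for the current user so they can be moved
# without sudo. /volume1/downloads is an example Synology path.
sudo chown -R "$(id -un):$(id -gn)" /volume1/downloads
```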

So you should set the UID/GID either to your user, or to a separate user you've created for Transmission. Also set a group that grants access to the host users that need it. From here it's purely a Unix permissions question, and you can read up on that if you need to.

If you get issues again when adding UID/GID, it can be because there are files in your volume mounts that were created as root, and now the user is suddenly not root anymore. So try starting with a clean slate: mount a folder where there is no previous setup from earlier containers.

Glad you got it working. I guess you'll be able to try out a bit going forward. But post again if you hit a wall ;)

Ascotg commented 5 years ago

Aha, I've got it completely figured out now.

Here's my solution:

The problem lay with the PUID and PGID, as previously mentioned. The fix goes as follows:

  1. Log in to the Synology NAS using SSH.

  2. Once logged in, run the 'id' command and write down the UID and GID numbers.

  3. When executing the command to create the container, replace the UID and GID with the numbers found above.

Don't forget to map the port numbers and set the LOCAL_NETWORK parameter, and you're all set.
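The steps above can be sketched as commands (the NAS address and the numeric IDs are examples; PUID/PGID are the variable names used with this image, and the VPN provider/credential flags from the earlier minimal run command are omitted here):

```shell
ssh admin@192.168.1.169    # 1. log in to the NAS over SSH
id                         # 2. note the uid=... and gid=... values it prints
# 3. pass those numbers when creating the container (values are examples):
docker run --cap-add=NET_ADMIN --device=/dev/net/tun -d \
    -e PUID=1026 -e PGID=100 \
    -e LOCAL_NETWORK=192.168.0.0/24 \
    -p 9091:9091 \
    haugene/transmission-openvpn
```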

Thanks for the help Haugene and for the awesome software.

Kind regards,

I'll send you a virtual 'beer'

haugene commented 5 years ago

Great stuff! Thanks @Ascotg ;) I think I'll close this now. @omed3: Try the steps described above and see where it fails. Then re-open this issue if necessary.

ghost commented 5 years ago

Ave Caesar, IMPERATOR, morituri te salutant!.

Answers:

  1. The conclusion is drawn from the fact that the container doesn't work when it's installed. I am sorry to hear that there were 8 releases that don't work. I am sorry to hear that it took 4+ years to work on something that doesn't work.
  2. Yes, that container is the only one with VPN.
  3. No, the curl command brings up an error: "curl: (7) Failed to connect to localhost port 9091: Connection refused root@haugene-transmission-openvpn3:/#". If other users, as evidenced in this post, had a similar problem, I would have gathered that after 4+ years and 8 releases you would have made a detailed guide to make sure the issue doesn't happen. If the VPN was active, why would you expect this command to work before setting up the proxy?
  4. There are at least(!) three types of logs. (1) Log generated by haugene-transmission-openvpn" accessible from the container's "Log" tab on the top, (2) Log generated by Docker, accessible from the "Log" tab in the Docker's left menu, and (3) Log generated by Synology DSM OS, and available through Control Panel options (via multiple tabs, depending on the type of log).

You did not specify which log you want, nor how to get it.

Log (1) is below:

haugene-transmission-openvpn date,stream,content
2018-12-05 04:14:45,stdout,"Wed Dec 5 04:14:45 2018 library versions: OpenSSL 1.0.2g 1 Mar 2016, LZO 2.08"
2018-12-05 04:14:45,stdout,Wed Dec 5 04:14:45 2018 OpenVPN 2.4.6 x86_64-pc-linux-gnu [SSL (OpenSSL)] built on Apr 24 2018

2018-12-05 04:14:45,stdout,Wed Dec 5 04:14:45 2018 Note: option tun-ipv6 is ignored because modern operating systems do not need special IPv6 tun handling anymore.

2018-12-05 04:14:45,stdout,RTNETLINK answers: Operation not permitted

2018-12-05 04:14:45,stdout,adding route to local network 192.168.1.0/24 via 172.17.0.1 dev eth0

2018-12-05 04:14:45,stdout,Setting OPENVPN credentials...

2018-12-05 04:14:45,stdout,Starting OpenVPN using config at10.nordvpn.com.udp.ovpn

2018-12-05 04:14:44,stdout,Using OpenVPN provider: NORDVPN

  5. Yes, I read the README, section "Access the WebUI", subsection "How to fix this", well before posting here. It does not solve the problem. Error: "bash: transmission-container: No such file or directory root@haugene-transmission-openvpn3:/#"

  6. "In over half of the cases people just need time to fiddle a bit more and figure it out for themselves." Can you be more specific? What exactly should be "fiddled" with? I trust you thoroughly tested the container to know what could be fiddled with without inadvertently exposing the local IP?

Or do you mean "fiddle" to the point where someone inadvertently exposes their IP on the network, is suddenly no longer protected by the VPN, and doesn't even realize it? ...which is what happened to this person in your thread. That defeats the purpose of having a VPN, doesn't it?

  7. How do you know that >50% of users who install the container fiddle with it and make it work? Maybe 99.99% of users download it, "fiddle" with it for 10-15 minutes, then delete it from Docker, and you effectively have a <0.01% success rate!

If you were brought unconscious to the ER with internal bleeding, and a scalpel would give you a 0.01% chance to survive, would you call that a success? Even if that number were 50%, would you call it a success? I bet you would insist on 100%. You want that for free, and you get it, without even thinking about it, appreciating it, or being aware of how complex that is.

  8. The container was deployed in the simplest, most basic network layout and routing that I trust most users have (Internet - Router, with PC and NAS attached to the Router), with all the default ports that Synology and Docker use (nothing modified)... and it doesn't work.

It's possible to install an Oracle database on Docker, set up all ports, user accounts, and internal and external (out-of-home) access permissions, and access the db either via the native GUI or via a 3rd-party Oracle client, from within or outside the home network, in less than 30 minutes. Interestingly, the author did make a mistake in providing an incorrect GUI link, which was promptly corrected. Configuring access to the Oracle db running within Docker was more complex, because permissions needed to be set at multiple points - within Docker, within PhPAdmin, and within the router - but HEY! the author had the end user in mind and provided a guide with screenshots. That container is open source, developed and offered for free. Yes, I donated money to the author.

Yes, I know that the Oracle container doesn't have a VPN, but if the solution to the problem is so simple and obvious, why not integrate it directly into the container, so that when users install it they can activate the proxy? Why require users to "fiddle" (whatever that word means), after 4+ years? I mean, wow!

Bittorrent or qTorrent provide the ability to configure a proxy directly from the app GUI. They don't ask the user to work with the command line to set that up. Both tools are open source, and yes, they accept donations, and yes, I did donate money.

I was impressed that multiple VPN provider configurations are provided online, which saves the user the time of downloading config files etc., but I am deeply disappointed that the most fundamental aspect of the container - the aspect that allows the user to actually USE it - was not paid attention to after 4+ years of development and 8 releases of a stable container (assessed by whom?).

I bet that 99.99% of users download this container, "fiddle" with it, realize that this is a "risky" fiddle, delete it, and move on. I bet that if this container had been designed by a Swiss person, it would be like a Swiss knife... just easy to use.

So next time you visit (or are wheeled into) ER, think of that Swiss knife, or scalpel - you wouldn't know the difference, would you? :-)

haugene commented 5 years ago

Ok. Sorry to hear that this seems hopeless to you. If you still want to make this work, I'm willing to help, but then I think we should start again and change the tone a bit. I'm aware that I also contributed to a heated discussion, but I was caught with my guard down by how strongly you asserted that nothing was working and that it could not, under any circumstances, be a local issue with your runtime.

Anyways. I agree that there are pitfalls when configuring LOCAL_NETWORK, and it's possible to set 0.0.0.0/0 and basically make "the internet" your local network. Maybe we should add a check against that, to avoid the biggest mistakes. But it will always be possible for people to break this, I can't guarantee against that and people need to understand what they're doing.
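Such a startup check could be sketched like this (a sketch under assumptions: only the LOCAL_NETWORK variable name comes from this project; the function and its patterns are illustrative):

```shell
# Reject LOCAL_NETWORK values that would treat "the internet" as local:
# catch-all /0 ranges and anything outside the RFC 1918 private blocks.
validate_local_network() {
  case "$1" in
    */0)
      echo "refusing catch-all LOCAL_NETWORK: $1" >&2
      return 1 ;;
    10.*|192.168.*|172.1[6-9].*|172.2[0-9].*|172.3[01].*)
      return 0 ;;
    *)
      echo "LOCAL_NETWORK $1 is not a private IPv4 range" >&2
      return 1 ;;
  esac
}
```

With this, validate_local_network 192.168.0.0/24 succeeds while validate_local_network 0.0.0.0/0 fails, which would catch the biggest misconfiguration before OpenVPN ever starts.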

I would love to include a proxy inside the container to make it easier for the end user. But have you understood why I can't do that? It would have the same issues as the Transmission web server - and need the LOCAL_NETWORK variable anyway. The whole point is that the proxy is another container on the same network. And setting LOCAL_NETWORK will not always fix the issue either, as some NAS servers run the Docker daemon in a thin VM, and then you need to route through 2 NATs.

You might be right that only 0.01% actually make it work, but I have a feeling the numbers are better. My reference to people who figure it out is based on issues that are created and then closed by the author a week later, stating that they got it working. The rest might have given up.

This project is a relatively plug-and-play solution in most cases, and apart from that it hopefully serves as a good starting point. Good luck, whichever setup you're going for. But if you come across a better way to solve containerized VPN with a Web UI - let me know and I'll be happy to accept a PR or re-implement this image somehow.