pymedusa / Medusa

Automatic Video Library Manager for TV Shows. It watches for new episodes of your favorite shows, and when they are posted it does its magic.
https://pymedusa.com
GNU General Public License v3.0

Reverse proxy apache #6608

Closed · elpedriyo closed this issue 5 years ago

elpedriyo commented 5 years ago

Hello,

I am currently having some trouble configuring an Apache reverse proxy. At the moment I am using this config:

<VirtualHost 192.168.1.211:443>
    LoadModule proxy_module /usr/lib/apache2/modules/mod_proxy.so
    LoadModule proxy_http_module /usr/lib/apache2/modules/mod_proxy_http.so
    LoadModule proxy_wstunnel_module /usr/lib/apache2/modules/mod_proxy_wstunnel.so
    LoadModule ssl_module modules/mod_ssl.so

    ServerName
    SSLProxyEngine on
    SSLProxyVerify none
    SSLProxyCheckPeerCN off
    SSLProxyCheckPeerName off
    SSLProxyCheckPeerExpire off
    ProxyRequests Off
    SSLCertificateFile /etc/letsencrypt/live//fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/****/privkey.pem

    ProxyPass / http://192.168.1.207:8081/
    ProxyPassReverse / http://192.168.1.207:8081/

    ProxyPass /medusa/ws ws://192.168.1.207:8081/medusa/ws keepalive=On timeout=600 retry=1 acquire=3000
    ProxyPassReverse /medusa/ws ws://192.168.1.207:8081/medusa/ws

    ProxyPassReverseCookieDomain 127.0.0.1 %{HTTP:Host}

    RewriteEngine on
    RewriteCond %{HTTP:Upgrade} =websocket [NC]
    RewriteRule ^/ws/(.*) ws://192.168.1.207:5051/ws/$1 [P,L]
</VirtualHost>

But the Medusa web page always shows this and never changes:

[screenshot]

Any ideas?

Kind regards

Apalasys commented 5 years ago

Dude, I'm having exactly the same issue using Nginx as a reverse proxy. I don't know how many hours of my life I've lost trying to get this working. The reverse proxy works fine for my Transmission access. I feel like giving up to be honest.

medariox commented 5 years ago

Did you try setting it up as explained in the wiki? https://github.com/pymedusa/Medusa/wiki/Reverse-Proxy-setup

Nodens- commented 5 years ago

I am having similar problems using the default configuration, which had been working properly for ages. Every now and then Medusa gets stuck at the loading page, and occasionally I get a browser notification about a lost connection. I am using Apache 2.4.39 as a reverse proxy/gateway and I'm accessing it from the local LAN.

This may be entirely unrelated and just a coincidence, but it started happening after switching to Python 3.

Note that restarting Medusa (not Apache) fixes it when it gets stuck, and that there are no errors in either the Apache logs or application.log.

p0psicles commented 5 years ago

Can you share your init script?

Nodens- commented 5 years ago

[Unit]
Description=Medusa Daemon

[Service]
User=seedbox
Group=seedbox

Type=forking
GuessMainPID=no
ExecStart=/usr/bin/python3.7 /opt/medusa/start.py -q --daemon --nolaunch --datadir=/opt/medusa

[Install]
WantedBy=multi-user.target

p0psicles commented 5 years ago

Change Type=forking to Type=simple.
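
Something like this, based on the unit above (a sketch; note the --daemon flag has to be dropped too, since with Type=simple systemd expects the process to stay in the foreground):

[Unit]
Description=Medusa Daemon

[Service]
User=seedbox
Group=seedbox

Type=simple
ExecStart=/usr/bin/python3.7 /opt/medusa/start.py -q --nolaunch --datadir=/opt/medusa

[Install]
WantedBy=multi-user.target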

Nodens- commented 5 years ago

Is there any particular technical reason why simple should be used instead of forking? Systemd simple units have issues: mainly, systemd doesn't properly detect some types of failures to start the service and considers the service started when it hasn't.

As a side note, I had been trying other things in order to troubleshoot. That browser connection-failure popup notification led me to disable SSL in Medusa and switch the reverse proxy to unencrypted communication internally, while Apache still serves SSL externally. This seems to have fixed the issue, as I have not seen the problem since. I will wait a bit before switching to simple, to see whether this was actually the cause.

p0psicles commented 5 years ago

I had a lot of issues with the main Python process crashing when using forking. It also shows all threads as stopped on the server status page.

If you find a different way of using forking without these issues, please share!

p0psicles commented 5 years ago

My guess is it forks the process twice.

Nodens- commented 5 years ago

I did a little bit of digging. While I'm far from an expert in Python (I'm a C/C++ and asm engineer), I have a fairly good understanding of systemd and process daemonization.

Type=forking expects the process to daemonize itself by forking, but here's where it gets tricky. The standard practice for writing daemons is: fork(), setsid(), then fork() again. This is done because when setsid() is called to detach from the controlling terminal, the child process becomes a session leader, which means it can acquire a controlling terminal again. A second fork() is done so that is no longer possible. To clarify:

fork() -- so that the process calling setsid() is not a process group leader, which is a requirement for calling setsid()
setsid() -- so the forked child detaches from the terminal, with the side effect of becoming a session leader
fork() -- so the daemon process is no longer a session leader and cannot acquire a controlling terminal (something only session leaders can do)
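
In Python, this standard pattern boils down to something like the following (a minimal sketch of the generic technique, not Medusa's actual code):

import os
import sys

def daemonize():
    # First fork: guarantees the process calling setsid() is not a
    # process group leader, which is a requirement for setsid().
    if os.fork() > 0:
        sys.exit(0)  # original parent exits

    # Detach from the controlling terminal; as a side effect the
    # child becomes a session leader.
    os.setsid()

    # Second fork: the surviving process is no longer a session
    # leader, so it can never acquire a controlling terminal again.
    if os.fork() > 0:
        sys.exit(0)  # first child exits

    # Typical daemon housekeeping.
    os.chdir('/')
    os.umask(0)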

So it's not systemd that does the double forking, it's Medusa itself, and that's standard practice. I have checked the code in main.py (daemonize(self)) and verified this.

systemd's Type=forking should handle this just fine, as 90% of daemons are written exactly this way. I can only assume that this issue is Python (possibly Python 3) or Python 3 + systemd specific. I hope this explanation helps someone far more familiar with Python and Medusa's code than me to figure it out.

For the time being, the default should be set to Type=exec (without the --daemon switch, otherwise the same parameters as with the simple type) instead of simple, as that provides much better handling of startup errors. From the systemd documentation:

The exec type is similar to simple, but the service manager will consider the unit started immediately after the main service binary has been executed. The service manager will delay starting of follow-up units until that point. (Or in other words: simple proceeds with further jobs right after fork() returns, while exec will not proceed before both fork() and execve() in the service process succeeded.) Note that this means systemctl start command lines for exec services will report failure when the service's binary cannot be invoked successfully (for example because the selected User= doesn't exist, or the service binary is missing).
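
Concretely, the suggested change against the unit posted above would be just this (a sketch; Type=exec requires a recent systemd):

[Service]
Type=exec
ExecStart=/usr/bin/python3.7 /opt/medusa/start.py -q --nolaunch --datadir=/opt/medusa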
medariox commented 5 years ago

Type=exec was only introduced in systemd 240, so that's not really an option for us. Considering that Type=simple is the default option for Type, I honestly doubt that there is anything to be concerned about. Refer to the current unit for details: https://github.com/pymedusa/Medusa/blob/master/runscripts/init.systemd#L45-L60

Nodens- commented 5 years ago

Ah right. I keep forgetting I'm running cutting edge, while you must support slow-moving distros as well.

The problem with simple is that systemd will consider the unit started even if the binary cannot run (e.g. binary missing, user missing, etc.), and it will proceed with starting units that depend on this unit before it has actually finished initializing, or even if it's not running at all. The forking type is safe because the fork() call marks successful init.

The fact that simple is the default option doesn't really say anything. It is the default because forking requires an actual daemonized process, notify requires the application to be built with support for it (messaging via sd_notify()), and exec adds latency on start as it waits for success, plus, as you said, it is not available in earlier versions. Simple is just a catch-all default. You will find that extremely few unit files shipped with packages actually use simple; 99% use either forking or notify, because simple makes it almost impossible to build proper systemd unit dependencies and just blindly assumes the process has started, with everything that entails.

I will continue to chase this and find out why forking is not working properly, but perhaps implementing sd_notify() messaging is the best course of action: https://github.com/bb4242/sdnotify
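
For illustration, a minimal sketch of what that could look like with the sdnotify package linked above (assumes the unit is switched to Type=notify; this is not Medusa code):

import sdnotify

notifier = sdnotify.SystemdNotifier()

# ... real application startup here: load config, bind the web port, etc. ...

# systemd only considers the unit started once READY=1 arrives, so
# dependent units wait for actual initialization and startup failures
# are properly detected.
notifier.notify('READY=1')
notifier.notify('STATUS=Initialization complete')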

ripa1993 commented 5 years ago

I'm facing a similar issue which, in my opinion, is not related to the use of forking or simple units.

Using Chrome dev tools, I noticed that the web_root configuration value is not applied correctly. In my setup it is set to web_root = /medusa, but CSS and JS are loaded from the root, i.e. my.domain/css/... instead of my.domain/medusa/css/....

This was working fine until a couple of days ago.

curl -vvv -L 127.0.0.1:8081
* Rebuilt URL to: 127.0.0.1:8081/
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 8081 (#0)
> GET / HTTP/1.1
> Host: 127.0.0.1:8081
> User-Agent: curl/7.52.1
> Accept: */*
>
< HTTP/1.1 302 Found
< Content-Length: 0
< Vary: Accept-Encoding
< Server: TornadoServer/5.1.1
< Location: /home/
< Date: Tue, 04 Jun 2019 20:53:21 GMT
< Content-Type: text/html; charset=UTF-8
<
* Curl_http_done: called premature == 0
* Connection #0 to host 127.0.0.1 left intact
* Issue another request to this URL: 'http://127.0.0.1:8081/home/'
* Found bundle for host 127.0.0.1: 0x557c6f35a0 [can pipeline]
* Re-using existing connection! (#0) with host 127.0.0.1
* Connected to 127.0.0.1 (127.0.0.1) port 8081 (#0)
> GET /home/ HTTP/1.1
> Host: 127.0.0.1:8081
> User-Agent: curl/7.52.1
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Length: 45020
< Vary: Accept-Encoding
< Server: TornadoServer/5.1.1
< Etag: "659d7bd35764076b57d48ab4aa8794d8e03a0cec"
< Date: Tue, 04 Jun 2019 20:53:21 GMT
< Content-Type: text/html; charset=UTF-8

I would expect < Location: /home/ to be < Location: /medusa/home/.

medariox commented 5 years ago

I don't understand how something like that just stops working by itself. I don't see how that is possible, unless you have changed something in your setup, of course.

Nodens- commented 5 years ago

Are you sure that your reverse proxy configuration is correct and uses /medusa as the web root? E.g. in my setup I'm specifically setting it up to serve on the root of its own unique hostname.
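
For reference, a minimal sketch of that kind of setup (hypothetical hostname; directives borrowed from the config quoted at the top of the thread):

<VirtualHost *:443>
    ServerName medusa.example.com
    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/medusa.example.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/medusa.example.com/privkey.pem

    ProxyPass / http://192.168.1.207:8081/
    ProxyPassReverse / http://192.168.1.207:8081/
</VirtualHost>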

ripa1993 commented 5 years ago

The only thing that probably changed is that I might have upgraded the packages on my system. I have two Medusa setups on the same machine with the same reverse proxy logic (different web roots); I assume both of them stopped working at the same time.

If it would be useful, I can share my nginx/Medusa configs and system packages.

EDIT: solved my issue; somehow one of the two Medusa setups had reset its config (weird).