jlesage / docker-nginx-proxy-manager

Docker container for Nginx Proxy Manager

HTTP can be accessed on the Internet, but HTTPS cannot be accessed #291

Closed: lovecxe closed this issue 1 year ago

lovecxe commented 1 year ago

Hello, why is it that after installation, with the certificate issued correctly, HTTP can be accessed from the Internet but HTTPS cannot? Is there a key that needs to be set up somewhere?

Livefour2day commented 1 year ago

I have the same issue since the latest update (version 2.9.22). I now get a 502 error on all of my HTTPS sites; they all worked before this update.

ther3zz commented 1 year ago

Same here. I'm seeing the following error on the proxy that's sending traffic to NPM, which results in the 502:

SSL_do_handshake() failed (SSL: error:14094458:SSL routines:ssl3_read_bytes:tlsv1 unrecognized name:SSL alert number 112) while SSL handshaking to upstream

(flow is internet -> cloudflare -> VPS Proxy -> NPM on local host -> Docker on local host)

Reverting to v23.03.2 fixed the issue...

jlesage commented 1 year ago

Please provide more details about the problem: what is your proxy host config, where is the error coming from, etc.

ther3zz commented 1 year ago

The error I mentioned showed up on my remote proxy (internet -> cloudflare -> VPS Proxy -> NPM on local host -> Docker on local host)

Here's the proxy host config (keep in mind all of the host configs were getting this issue; this is just one of them, and they're all pretty much the same):

Please do let me know if you need anything else!

# ------------------------------------------------------------
# app.mydomain.com
# ------------------------------------------------------------

server {
  set $forward_scheme http;
  set $server         "172.19.0.4";
  set $port           5055;

  listen 8080;
  listen [::]:8080;

  listen 4443 ssl http2;
  listen [::]:4443 ssl http2;

  server_name app.mydomain.com;

  # Let's Encrypt SSL
  include conf.d/include/letsencrypt-acme-challenge.conf;
  include conf.d/include/ssl-ciphers.conf;
  ssl_certificate /etc/letsencrypt/live/npm-39/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/npm-39/privkey.pem;

  # Block Exploits
  include conf.d/include/block-exploits.conf;

  # HSTS (ngx_http_headers_module is required) (63072000 seconds = 2 years)
  add_header Strict-Transport-Security "max-age=63072000; preload" always;

  # Force SSL
  include conf.d/include/force-ssl.conf;

  access_log /config/log/proxy-host-4_access.log proxy;
  error_log /config/log/proxy-host-4_error.log warn;

  server_tokens off;

  location / {
    # HSTS (ngx_http_headers_module is required) (63072000 seconds = 2 years)
    add_header Strict-Transport-Security "max-age=63072000; preload" always;

    # Proxy!
    include conf.d/include/proxy.conf;
  }

  # Custom
  include /data/nginx/custom/server_proxy[.]conf;
}
lovecxe commented 1 year ago

Hello, do you mean I should modify my configuration to be like yours?

ther3zz commented 1 year ago

> Hello, do you mean I should modify my configuration to be like yours?

Instead of using the latest image from Docker Hub, use any previous version; that seems to fix it. For example, I went back one version to v23.03.2 and can access things now.

That being said, it looks like I can no longer log into the NPM admin portal. I receive a red "Bad Gateway" error under the password field when I click submit on the login page. I'm not really finding any logs for that, though (I didn't see anything in default-host_access.log or default-host_error.log).

EDIT: It seems the login form is POSTing to http://XXX.XXX.XXX.XXX/api/tokens, which throws the 502 and results in the red error mentioned above.

Livefour2day commented 1 year ago

I have the same config as ther3zz. None of my set-up proxy hosts work with my running containers; if I click the "Source" domain button in NPM on any of the proxy hosts, I get a 502 Bad Gateway error. I don't want to return to an older version due to the security issue that was identified, but unfortunately something has changed with this update.

eSeR1805 commented 1 year ago

Seeing the same on my end after an auto-update. Reverting to nginx-proxy-manager:v23.03.2 seems to be working fine. My setup in Unraid is: Cloudflare DNS ---(zero trust tunnel)---> cloudflared docker ---> NPM docker ---> service docker/VM.

Cloudflared was reporting the following:

2023-04-09T10:30:25Z ERR error="Unable to reach the origin service. The service may be down or it may not be responding to traffic from cloudflared: EOF" cfRay=7b521318eb83dd54-LHR ingressRule=0 originService=https://10.10.10.20:18443

The NPM logs were not showing anything abnormal. I can probably reproduce the issue and I'm willing to provide more info if needed (I'll just need to know what is of interest and how to obtain it).

Livefour2day commented 1 year ago

I have also reverted back to v23.03.2 to test whether everything works again, and it does. I just can't figure out what is wrong with the latest version. I don't want to stay on an older version, as there is a security risk that has been identified, but I am unsure which version fixed it.

ther3zz commented 1 year ago

> I have also reverted back to v23.03.2 to test whether everything works again, and it does. I just can't figure out what is wrong with the latest version. I don't want to stay on an older version, as there is a security risk that has been identified, but I am unsure which version fixed it.

Are you able to access the admin portal after reverting?

Livefour2day commented 1 year ago

> I have also reverted back to v23.03.2 to test whether everything works again, and it does. I just can't figure out what is wrong with the latest version. I don't want to stay on an older version, as there is a security risk that has been identified, but I am unsure which version fixed it.
>
> Are you able to access the admin portal after reverting?

Yes, I have full access and everything works as it did before the update.

ther3zz commented 1 year ago

> I have also reverted back to v23.03.2 to test whether everything works again, and it does. I just can't figure out what is wrong with the latest version. I don't want to stay on an older version, as there is a security risk that has been identified, but I am unsure which version fixed it.
>
> Are you able to access the admin portal after reverting?
>
> Yes, I have full access and everything works as it did before the update.

When I go back a version, I see the errors below in Docker over and over again. At least I can still access my sites for now, but once the certs expire I'm sure I'll get errors again. (It looks like certbot --version itself fails with the version conflict shown below, so the version substituted into the pip install command ends up empty; note the bare certbot-dns-cloudflare== in the last error.)

[app         ] [4/10/2023] [10:20:22 AM] [Global   ] › ℹ  info      Manual db configuration already exists, skipping config creation from environment variables

[app         ] [4/10/2023] [10:20:22 AM] [Migrate  ] › ℹ  info      Current database version: none

[app         ] [4/10/2023] [10:20:23 AM] [Global   ] › ✖  error     Command failed: pip install certbot-dns-cloudflare==$(certbot --version | grep -Eo '[0-9](\.[0-9]+)+') cloudflare

[app         ] An unexpected error occurred:

[app         ] pkg_resources.VersionConflict: (certbot 2.3.0 (/usr/lib/python3.10/site-packages), Requirement.parse('certbot>=2.5.0'))

[app         ] Ask for help or search for solutions at https://community.letsencrypt.org. See the logfile /tmp/certbot-log-ntn6mfrm/log or re-run Certbot with -v for more details.

[app         ] ERROR: Could not find a version that satisfies the requirement certbot-dns-cloudflare== (from versions: 0.14.0.dev0, 0.15.0, 0.16.0, 0.17.0, 0.18.0, 0.18.1, 0.18.2, 0.19.0, 0.20.0, 0.21.0, 0.21.1, 0.22.0, 0.22.1, 0.22.2, 0.23.0, 0.24.0, 0.25.0, 0.25.1, 0.26.0, 0.26.1, 0.27.0, 0.27.1, 0.28.0, 0.29.0, 0.29.1, 0.30.0, 0.30.1, 0.30.2, 0.31.0, 0.32.0, 0.33.0, 0.33.1, 0.34.0, 0.34.1, 0.34.2, 0.35.0, 0.35.1, 0.36.0, 0.37.0, 0.37.1, 0.37.2, 0.38.0, 0.39.0, 0.40.0, 0.40.1, 1.0.0, 1.1.0, 1.2.0, 1.3.0, 1.4.0, 1.5.0, 1.6.0, 1.7.0, 1.8.0, 1.9.0, 1.10.0, 1.10.1, 1.11.0, 1.12.0, 1.13.0, 1.14.0, 1.15.0, 1.16.0, 1.17.0, 1.18.0, 1.19.0, 1.20.0, 1.21.0, 1.22.0, 1.23.0, 1.24.0, 1.25.0, 1.26.0, 1.27.0, 1.28.0, 1.29.0, 1.30.0, 1.31.0, 1.32.0, 2.0.0, 2.1.0, 2.2.0, 2.3.0, 2.4.0, 2.5.0)

[app         ] ERROR: No matching distribution found for certbot-dns-cloudflare==

biggator commented 1 year ago

I experienced similar issues using a Cloudflare Argo tunnel. I was seeing "remote error: tls: unrecognized name" until I reverted to v23.03.2 from latest / v23.04.1.

That resolved the TLS issue, but not the dashboard login problem described below.

biggator commented 1 year ago

I don't know why @lovecxe closed this. I still can't log in to the dashboard. @ther3zz or @lovecxe, do you have a resolution to this problem?

ther3zz commented 1 year ago

> I don't know why @lovecxe closed this. I still can't log in to the dashboard. @ther3zz or @lovecxe, do you have a resolution to this problem?

Still not working on the latest version. I'm assuming that an update will be pushed later and that's why this was closed, but I'm not really sure.

ther3zz commented 1 year ago

@lovecxe, do you know when a fix might be pushed? I'm assuming you have one, since you marked this one as completed.

biggator commented 1 year ago

Ok, my issue is resolved. If your environment is similar to mine, this may help. I've resumed using latest / v23.04.1, which allows me to log in to the dashboard.

This issue discussion on the Nginx Proxy Manager repo suggests a solution; in particular the comments from TheBeeZee.

My environment is a Cloudflare tunnel configured with an ingress rule that uses a wildcard subdomain since NPM proxies all incoming web requests. I have several subdomains defined in the Cloudflare DNS that all route to this server.

My tunnel configuration file looked like this:

tunnel: <tunnel-id>
credentials-file: /home/nonroot/.cloudflared/<tunnel-id>.json

# forward all traffic to Reverse Proxy w/ SSL
ingress:
  - hostname: "*.mydomain.com"
    service: https://192.168.0.4:443
    originRequest:
      noTLSVerify: true
# final rule for all non-matching requests
  - service: http_status:404

Since the issue is that the tunnel isn't sending a host name with the request to the origin server (tls: unrecognized name), I followed TheBeeZee's simple suggestion to declare a default origin host name that will be used to validate the certificate for TLS. I added lines 3 and 4 below, which define a default originServerName for all requests unless overridden in the ingress rules.

tunnel: <tunnel-id>
credentials-file: /home/nonroot/.cloudflared/<tunnel-id>.json
originRequest:
  originServerName: example.mydomain.com

# forward all traffic to Reverse Proxy w/ SSL
ingress:
  - hostname: "*.mydomain.com"
    service: https://192.168.0.4:443
    originRequest:
      noTLSVerify: true
# final rule for all non-matching requests
  - service: http_status:404

You might also add each individual hostname as a separate ingress rule (instead of a wildcard) and then specify the originServerName in each ingress rule, as sketched below. That would mean you'd have to update the tunnel config file each time you add a new proxy host to Nginx Proxy Manager.
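
For illustration, here's roughly what that per-hostname variant could look like (the hostnames and origin address are placeholders, not taken from my real config):

tunnel: <tunnel-id>
credentials-file: /home/nonroot/.cloudflared/<tunnel-id>.json

ingress:
# each rule pins its own originServerName for TLS validation
  - hostname: "app.mydomain.com"
    service: https://192.168.0.4:443
    originRequest:
      originServerName: app.mydomain.com
      noTLSVerify: true
  - hostname: "other.mydomain.com"
    service: https://192.168.0.4:443
    originRequest:
      originServerName: other.mydomain.com
      noTLSVerify: true
# final rule for all non-matching requests
  - service: http_status:404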

@ther3zz, I hope this helps.

ther3zz commented 1 year ago

> Ok, my issue is resolved. If your environment is similar to mine, this may help. I've resumed using latest / v23.04.1, which allows me to log in to the dashboard. […]
>
> @ther3zz, I hope this helps.

@biggator Thanks for pointing me in the right direction! I'm actually not using Cloudflare's tunnel service, just their proxied DNS. I'll take a look at my whole config to ensure the origin server name is passed along at every step.

EDIT: It turns out I had to include proxy_ssl_server_name on; and proxy_ssl_name $host; in the proxy params on the proxy that processes traffic just before NPM.
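
For anyone hitting the same thing, here is a minimal sketch of what that could look like on the proxy sitting in front of NPM, assuming that proxy is also nginx (the upstream address and port are placeholders, not from the actual setup):

# Hypothetical location block on the proxy in front of NPM.
location / {
    proxy_pass https://203.0.113.10:443;  # placeholder: NPM's HTTPS endpoint

    # Send the originally requested hostname as the SNI value in the TLS
    # handshake with the upstream, so NPM can select the matching certificate
    # instead of failing with "tlsv1 unrecognized name".
    proxy_ssl_server_name on;
    proxy_ssl_name $host;
}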