home-assistant / addons

:heavy_plus_sign: Docker add-ons for Home Assistant
https://home-assistant.io/hassio/
Apache License 2.0

[Bug] NGINX Home Assistant SSL proxy 3.10.0 no longer shows real IPv6 #3727

Closed: davidrapan closed this issue 2 months ago

davidrapan commented 2 months ago

Describe the issue you are experiencing

After an update from 3.9.0 to 3.10.0, 'Login attempt failed: Login attempt or request with invalid authentication from:' shows the internal IPv4 address of the proxy (172.X.X.X) instead of the real IPv6 address.

I should mention that I had to reconfigure the Docker hassio network to also assign a private fd00::/64 prefix just to make passing the real IPv6 address through work in the first place. I also have a .conf for nginx with:

listen [::]:443 ssl http2 ipv6only=on;

to listen on IPv6.

But the upgrade to the latest version broke it. :-/
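For context, the real client address normally reaches Home Assistant through the proxy via forwarding headers in the server block. A rough sketch of the relevant part (the upstream name and the exact header set here are my assumption of how the add-on's template works, not the actual template):

```nginx
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2 ipv6only=on;  # my extra directive, so IPv6 clients connect directly

    location / {
        # Hypothetical upstream name; the real add-on resolves this itself.
        proxy_pass http://homeassistant:8123;

        # These headers are what let Home Assistant log the real client
        # address (IPv4 or IPv6) instead of the proxy's internal 172.x one:
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
    }
}
```

If the template stops passing `$remote_addr` through (or rewrites it with the proxy's own address), Home Assistant would log the internal Docker IP exactly as described above.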

What type of installation are you running?

Home Assistant OS

Which operating system are you running on?

Home Assistant Operating System

Which add-on are you reporting an issue with?

NGINX Home Assistant SSL proxy

What is the version of the add-on?

3.10.0

Steps to reproduce the issue

  1. Try to log in using a wrong password.
  2. In a logged-in session, check the notifications.

System Health information

System Information

version core-2024.8.1
installation_type Home Assistant OS
dev false
hassio true
docker true
user root
virtualenv false
python_version 3.12.4
os_name Linux
os_version 6.6.44-haos
arch aarch64
timezone Europe/Prague
config_dir /config
Home Assistant Community Store

GitHub API | ok
-- | --
GitHub Content | ok
GitHub Web | ok
GitHub API Calls Remaining | 5000
Installed Version | 1.34.0
Stage | running
Available Repositories | 1392
Downloaded Repositories | 22
HACS Data | ok

Home Assistant Cloud

logged_in | false
-- | --
can_reach_cert_server | ok
can_reach_cloud_auth | ok
can_reach_cloud | ok

Home Assistant Supervisor

host_os | Home Assistant OS 13.0
-- | --
update_channel | stable
supervisor_version | supervisor-2024.08.0
agent_version | 1.6.0
docker_version | 26.1.4
disk_total | 34.7 GB
disk_used | 24.5 GB
healthy | true
supported | true
host_connectivity | true
supervisor_connectivity | true
ntp_synchronized | true
virtualization | kvm
board | generic-aarch64
supervisor_api | ok
version_api | ok
installed_addons | Sunsynk/Deye Inverter Add-on (multi) (0.6.5), UniFi Network Application (3.2.0), File editor (5.8.0), chrony (3.0.1), MariaDB (2.7.1), Advanced SSH & Web Terminal (18.0.0), phpMyAdmin (0.9.1), Studio Code Server (5.15.0), WireGuard (0.10.2), Let's Encrypt (5.1.0), NGINX Home Assistant SSL proxy (3.10.0), ACME.sh (1.0.0), Selenium Grid Server (1.0.1)

Dashboards

dashboards | 4
-- | --
resources | 12
views | 21
mode | storage

Recorder

oldest_recorder_run | August 5, 2024 at 7:26 PM
-- | --
current_recorder_run | August 14, 2024 at 3:30 PM
estimated_db_size | 4529.64 MiB
database_engine | mysql
database_version | 10.11.6

Solcast PV Forecast

can_reach_server | ok
-- | --
used_requests | null
rooftop_site_count | 1

Anything in the Supervisor logs that might be useful for us?

No response

Anything in the add-on logs that might be useful for us?

No response

Additional information

No response

agners commented 2 months ago

And also have .conf for nginx with:

listen [::]:443 ssl http2 ipv6only=on;

Hm, so was this the only change to the Nginx config? From what I can tell, the extra config at /share/nginx_proxy/*.conf should continue to work; in a quick test, the template change did not break that part at least :thinking:

/cc @miguelrjim

davidrapan commented 2 months ago

Hm, so was this the only change to Nginx config?

Yes, just this one line:

~ # cat /share/nginx_proxy_default.conf
listen [::]:443 ssl http2 ipv6only=on;

From what I can tell, the extra config at /share/nginx_proxy/*.conf should continue to work

It's the content of this one:

default: nginx_proxy_default*.conf
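Presumably the add-on merges such snippets with an nginx `include` and a glob matching that default pattern; a minimal sketch of the idea (not the add-on's actual template):

```nginx
server {
    listen 443 ssl http2;
    # User-supplied directives from /share are pulled into the server block;
    # the default pattern matches nginx_proxy_default*.conf:
    include /share/nginx_proxy_default*.conf;
}
```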

miguelrjim commented 2 months ago

@davidrapan @agners fixed this in 3.10.1 (PR), do let me know if you're still having that problem

davidrapan commented 2 months ago

@miguelrjim, @agners, yes, thanks! The new version solves the issue! (screenshot: login_attemp_failed)