GUI / nginx-upstream-dynamic-servers

An nginx module to resolve domain names inside upstreams and keep them up to date.
MIT License

Can't disable IPv6 (AAAA record) #12

Open gfrankliu opened 8 years ago

gfrankliu commented 8 years ago

I set ipv6=off on the resolver as documented here: http://nginx.org/en/docs/http/ngx_http_core_module.html#resolver but it seems the module still requests both A (IPv4) and AAAA (IPv6) records from the DNS resolver.

gfrankliu commented 8 years ago

I compiled with --with-ipv6

./configure --add-module=nginx-upstream-dynamic-servers --with-ipv6

and here is part of the config

    resolver 8.8.8.8 ipv6=off;
    upstream pool1 {
      server www.yahoo.com resolve weight=10;
    }

I started tcpdump watching port 53 before starting nginx. I can see that nginx first uses the name servers from the local /etc/resolv.conf to query both the A and the AAAA record of www.yahoo.com. I assume both the IPv4 and IPv6 results are stored in nginx for upstream access. After that, nginx actually starts using 8.8.8.8 (as defined in "resolver") for further DNS queries every time the TTL expires. For all of those later queries, ipv6=off is honored and nginx only asks for the A record of www.yahoo.com. I am not sure whether the initial AAAA result is still stored and used by nginx. Can this module be configured to use only the "resolver", and not /etc/resolv.conf, at startup?

wandenberg commented 8 years ago

@gfrankliu are you sure it is still using the IPv6 record after you set ipv6 to off? Testing here, when I set ipv6 to off it only uses the IPv4 entries:

2016/03/22 00:22:51 [debug] 6394#0: upstream-dynamic-servers: DNS changes for 'www.yahoo.com' detected - reinitializing upstream configuration
2016/03/22 00:22:51 [debug] 6394#0: upstream-dynamic-servers: 'www.yahoo.com' was resolved to '98.139.183.24:80'
2016/03/22 00:22:51 [debug] 6394#0: upstream-dynamic-servers: 'www.yahoo.com' was resolved to '98.139.180.149:80'

when I changed it to on

2016/03/22 00:23:21 [debug] 6489#0: upstream-dynamic-servers: DNS changes for 'www.yahoo.com' detected - reinitializing upstream configuration
2016/03/22 00:23:21 [debug] 6489#0: upstream-dynamic-servers: 'www.yahoo.com' was resolved to '98.139.183.24:80'
2016/03/22 00:23:21 [debug] 6489#0: upstream-dynamic-servers: 'www.yahoo.com' was resolved to '98.139.180.149:80'
2016/03/22 00:23:21 [debug] 6489#0: upstream-dynamic-servers: 'www.yahoo.com' was resolved to '[2001:4998:58:c02::a9]:80'

Please test it again without using any other module, including nginx_upstream_check_module. Maybe it is another module that cached the "wrong" record, not necessarily nginx-upstream-dynamic-servers.

gfrankliu commented 8 years ago

See my ./configure command above; I didn't enable any other modules. In your test, can you start tcpdump in one window (sudo tcpdump -A -i any port 53) and then start nginx in another window? Watch the tcpdump window and you will see that the very first few DNS requests go to the DNS server defined in /etc/resolv.conf, NOT the one defined by "resolver". You will also see that those queries ask for both the A and the AAAA record of www.yahoo.com.

Later DNS queries seem to function properly, as you observed (they use the name servers from "resolver" and only query the A record). The commands below show roughly how to reproduce this.
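
For reference, a minimal way to reproduce and cross-check this; the nginx config path is just a placeholder, and the dig lines are only an extra sanity check of what the "resolver" itself returns:

    # window 1: watch all DNS traffic
    sudo tcpdump -A -i any port 53

    # window 2: start nginx (config path is a placeholder) and note which
    # name server receives the very first A/AAAA queries
    sudo nginx -c /etc/nginx/nginx.conf

    # cross-check what 8.8.8.8 (the configured "resolver") answers
    dig A www.yahoo.com @8.8.8.8 +short
    dig AAAA www.yahoo.com @8.8.8.8 +short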

gfrankliu commented 8 years ago

If I have a special DNS server for nginx, which I define using "resolver", the normal name servers from /etc/resolv.conf may not even be able to resolve the upstream name. What will nginx do in that case? Is there a way to stop nginx from using /etc/resolv.conf for this module? A sketch of the setup I mean follows below.
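
To make the scenario concrete, roughly this kind of configuration (10.0.0.53 and the host name are placeholders, not my real setup):

    # DNS server that only nginx should use; /etc/resolv.conf points
    # elsewhere and cannot resolve this name
    resolver 10.0.0.53 valid=30s ipv6=off;

    upstream pool1 {
      server internal.example.net resolve weight=10;
    }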

wandenberg commented 8 years ago

Hi @gfrankliu. If I haven't misunderstood your question: the first DNS query will fail, and the server will be marked as down. As soon as a query is done against the "resolver" server instead of the /etc/resolv.conf servers, the server will be marked as "up", provided the "resolver" returned a valid IP. Is it a big issue to have this very first query use the AAAA record? To avoid it, we would have to reimplement some nginx core functions, or mark the server as down until the first DNS query through the resolver returns a value.

gfrankliu commented 8 years ago

During my test, I saw that once the first DNS query got the AAAA record and added it to the upstream pool, along with the A records, it was never removed by subsequent DNS queries. I guess the later DNS queries only returned A records, so only those got updated and the AAAA address was left alone.

wandenberg commented 8 years ago

@gfrankliu as far as I can see the workflow is: at startup the server name is resolved through the nginx core (ngx_parse_url), which uses the system resolvers from /etc/resolv.conf and asks for both A and AAAA records; after that, the module re-resolves the name through the configured "resolver", which honors ipv6=off and replaces the previous entries.

To avoid using that first AAAA entry we would have to write custom DNS query functions, or mark the server as down until the "resolver" returns an answer for it.

gfrankliu commented 8 years ago

The first DNS query probably happened because we used the "server" directive, which triggered nginx's default DNS lookup behavior. Maybe we should use a new directive, like what jdomain does.

In my test, I saw that the second DNS query only updated the IPv4 addresses of the upstream configuration and left the IPv6 address alone, maybe because the second query didn't return any IPv6 addresses since we disabled IPv6.

wandenberg commented 8 years ago

@gfrankliu we don't use the default "server" directive; the name is the same but the implementation is not. The first AAAA query is caused by the ngx_parse_url function call. The second DNS query does not "update" the entries, it replaces them. Because of that I asked whether that first query is really a big issue, since the IPv6 address it returned will not actually be used.
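
For reference, the parse step in question looks roughly like the sketch below. Only ngx_url_t and ngx_parse_url are real nginx API here; the surrounding function and its name are made up, just to illustrate how the initial system-resolver (A/AAAA) lookup could be skipped via the no_resolve flag:

    #include <ngx_config.h>
    #include <ngx_core.h>

    /* Illustrative sketch only, not a patch against this module. */
    static char *
    parse_server_without_initial_lookup(ngx_conf_t *cf, ngx_str_t *name)
    {
        ngx_url_t  u;

        ngx_memzero(&u, sizeof(ngx_url_t));

        u.url = *name;            /* e.g. "www.yahoo.com" */
        u.default_port = 80;
        u.no_resolve = 1;         /* skip the system-resolver (resolv.conf)
                                     A/AAAA lookup at config-parse time */

        if (ngx_parse_url(cf->pool, &u) != NGX_OK) {
            if (u.err) {
                ngx_conf_log_error(NGX_LOG_EMERG, cf, 0,
                                   "%s in upstream \"%V\"", u.err, &u.url);
            }
            return NGX_CONF_ERROR;
        }

        /* With no_resolve set, u.naddrs can be 0 here; the server entry
           would have to start as "down" until the configured "resolver"
           returns its first answer. */

        return NGX_CONF_OK;
    }

That is essentially the trade-off mentioned above: skipping the initial lookup means the upstream has no address at all until the runtime resolver answers.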

gfrankliu commented 8 years ago

See my last comment in https://github.com/GUI/nginx-upstream-dynamic-servers/issues/13 , it seems the AAAA response from the first DNS query got stuck in the upstream list even though the second DNS query didn't return an AAAA record. This causes nginx worker crashes even after the second DNS query.

wandenberg commented 8 years ago

@gfrankliu please, do not mix the issues. The actual crash in #13 is a problem with another 3rd-party module (see my comment there). Are you having any issue if you compile nginx with only this module? Is the IPv6 address from the first query still receiving requests after the module updates the server list?
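
To rule the other modules out, an isolated build based on the configure line earlier in the thread would be roughly this (source paths are placeholders; --with-debug is only needed to get the [debug] log lines shown above):

    ./configure --add-module=../nginx-upstream-dynamic-servers --with-ipv6 --with-debug
    make && sudo make install

    # reproduce with the same resolver/upstream config and a debug-level error log
    sudo nginx -g 'error_log /tmp/nginx-debug.log debug;'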

Vladislavik commented 2 years ago

The problem still exists; how can it be fixed? ipv6=off did not help, nginx still requests the IPv6 (AAAA) address.