NLnetLabs / unbound

Unbound is a validating, recursive, and caching DNS resolver.
https://nlnetlabs.nl/unbound
BSD 3-Clause "New" or "Revised" License

RPZ download failures do not generate errors in the log #1153

Open rptb1 opened 1 month ago

rptb1 commented 1 month ago

**Describe the bug**
RPZ downloads from URLs at github.io silently fail. Nothing is logged. Please note that the bug I am reporting is that nothing is logged. (I'd also like to know how to fix the download problem, but that is secondary.)

**To reproduce**
Steps to reproduce the behavior:

  1. Create an RPZ section that attempts to download from e.g. https://scripttiger.github.io/alts/rpz/blacklist.txt
  2. Launch unbound with logging, e.g. unbound -d -d
  3. Try to find any diagnostic information about the failed download.
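
For reference, step 1 can be reproduced with a minimal configuration along these lines (a sketch; the zonefile path is illustrative):

```
server:
    verbosity: 1

rpz:
    name: scripttiger-unified
    url: https://scripttiger.github.io/alts/rpz/blacklist.txt
    zonefile: /var/lib/unbound/rpz-scripttiger-unified.txt
```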

**Expected behavior**
If something fails, there should at least be a message in the log saying that it has failed. Even better, the message should explain why it failed. Better still, it could point to information about how to resolve the problem.

I can get some clues by raising the verbosity very high (see below), but the bug I am reporting is that an error should be raised at the default verbosity, so that people can debug their configurations.

I suggest logging a message at the default verbosity with some clue about why the configuration failed, such as "Unable to download RPZ zone foo-bar: connection timeout".
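
As a purely illustrative sketch (not unbound's actual API or wording), a message like the one suggested could be assembled from the zone name and failure reason:

```python
def rpz_download_error(zone: str, reason: str) -> str:
    """Format the kind of default-level error message suggested above.

    Both this function and the exact wording are hypothetical; they only
    illustrate what a useful default-level log line might contain.
    """
    return f"error: unable to download RPZ zone {zone}: {reason}"

# Example with the zone and failure seen in this report:
print(rpz_download_error("scripttiger-unified",
                         "connection timeout to scripttiger.github.io"))
```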

**System:**

```
Configure line: --build=x86_64-linux-gnu --prefix=/usr --includedir=${prefix}/include --mandir=${prefix}/share/man --infodir=${prefix}/share/info --sysconfdir=/etc --localstatedir=/var --disable-option-checking --disable-silent-rules --libdir=${prefix}/lib/x86_64-linux-gnu --libexecdir=${prefix}/lib/x86_64-linux-gnu --disable-maintainer-mode --disable-dependency-tracking --disable-rpath --with-pidfile=/run/unbound.pid --with-rootkey-file=/var/lib/unbound/root.key --with-libevent --with-libnghttp2 --with-pythonmodule --enable-subnet --enable-dnstap --enable-systemd --with-chroot-dir= --with-dnstap-socket-path=/run/dnstap.sock --libdir=/usr/lib
Linked libs: libevent 2.1.12-stable (it uses epoll), OpenSSL 3.0.2 15 Mar 2022
Linked modules: dns64 python subnetcache respip validator iterator

BSD licensed, see LICENSE in source package for details.
Report bugs to unbound-bugs@nlnetlabs.nl or https://github.com/NLnetLabs/unbound/issues
```


**Additional information**
Here are some details about the underlying problem, but please remember that I'm reporting the lack of logging.

With `unbound -d -d -v -v -v -v` I get messages like this repeatedly:

```
[1728810274] unbound[129500:0] debug: auth zone hagezi-light. transfer next HTTP fetch from 185.199.109.133 started
[1728810274] unbound[129500:0] debug: comm point listen_for_rw 14 0
[1728810274] unbound[129500:0] debug: SSL connection ip4 185.199.109.133 port 443 (len 16)
[1728810274] unbound[129500:0] debug: comm point listen_for_rw 14 1
[1728810279] unbound[129500:0] debug: xfr stopped, connection timeout to scripttiger.github.io
[1728810279] unbound[129500:0] debug: comm_point_close of 15: event_del
[1728810279] unbound[129500:0] debug: close fd 15
```


Is the timeout too short?

But note that running `curl -v https://scripttiger.github.io/alts/rpz/blacklist.txt > /dev/null` completes without error or any hint of a problem in its output.

Also, I don't think this is a temporary failure, as I've had the RPZ configured for a couple of months and it has never downloaded.

I have these two clauses in my config:

```
rpz:
    name: scripttiger-unified
    url: https://scripttiger.github.io/alts/rpz/blacklist.txt
    zonefile: /var/lib/unbound/rpz-scripttiger-unified.txt
    rpz-log: yes
    rpz-log-name: rpz-scripttiger-unified

rpz:
    name: urlhaus-abuse
    url: https://urlhaus.abuse.ch/downloads/rpz
    zonefile: /var/lib/unbound/rpz-urlhaus-abuse.txt
    rpz-log: yes
    rpz-log-name: rpz-urlhaus-abuse
```



The urlhaus-abuse zone downloads without problems, but the scripttiger-unified zone never downloads. It seems to be a problem with github.io in particular; none of my other lists are downloaded from there.

Thanks!