james-stevens / handshake-bridge

Bridging Handshake & ICANN TLDs
MIT License

hsd AXFR support #1

Open buffrr opened 3 years ago

buffrr commented 3 years ago

Hey! You mentioned that AXFR could be useful in hsd. Since hsd now supports plugins, I created an experimental one based on RFC 5936. I had to include a small patch to allow plugins to get rinfo.

You can try it here: https://github.com/buffrr/hsd-axfr

Example

dig @127.0.0.1 -p 5349 . axfr > root.zone

Here's the output of tail root.zone:

ns1.mikonos.        21600   IN  A   44.231.6.183
iamfernando_aaaaagpakdj. 21600  IN  NS  ns1.iamfernando_aaaaagpakdj.
ns1.iamfernando_aaaaagpakdj. 21600 IN   A   44.231.6.183
.           86400   IN  SOA . . 2021031912 1800 900 604800 86400
.           0   ANY SIG 0 253 0 0 20210319184009 20210319064009 10806 . eZqG4MeQYwRbzYqGMqkdrnBI2ja5MF7Dtuux2UYtgldtiM0QqpQdWgV8 vdz+GfFmVaI3SYowtlSgEKMdfxr4EQ==
;; Query time: 35432 msec
;; SERVER: 127.0.0.1#5349(127.0.0.1)
;; WHEN: Fri Mar 19 05:40:09 MST 2021
;; XFR size: 1194797 records (messages 797, bytes 31613709)

I have only tried this with dig, so let me know if it misses something from the RFC (I know there are other fancy features, but this only does a zone dump for now).

james-stevens commented 3 years ago

That's really cool - well done, very fast!

Pretty sure if it works with dig, it will work with anything (see below).

I will try it next week. There are obviously a number of things it also needs in order to be actually useful:

  1. Remove the SIG0 and include the --no-sig0 cmd line flag, so a std server can poll the SOA Serial over UDP. bind will not accept any answers where SIG0 is present - so for now I either use my fork (which is getting kinda old), or apply a patch I have every time, which is a total pain.
  2. Fix the SOA Serial to be the unix-time of the last block that was included in the database creation - this way different servers will give the same SOA Serial when they are serving the same information (see the sketch after this list). This means two hsd can be run for fault-tolerance, and bind will collect the zone from either. Right now, if one was failing to update you wouldn't know, because the SOA Serial is just the current time.
  3. Merge in the ICANN data.
  4. Sign the merged data. For AXFR, you would have to sign it "correctly", as you can't create black/white lies for every single name that doesn't exist. Of course, it can be signed externally, but as hsd has validated the PoW, it really should be the one to sign it. Having it so no data can leave hsd unless it has been signed is, I think, an important point of principle.
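
For (2), a minimal sketch of the idea in Python with dnspython (the SOA field values are taken from the dig output above; the block timestamp is a hypothetical input):

    import dns.rdata, dns.rdataclass, dns.rdatatype

    # Hypothetical: unix time of the last block that made it into the stored DB
    last_block_time = 1616128809

    # Same block -> same serial, whichever hsd instance built the DB
    soa = dns.rdata.from_text(
        dns.rdataclass.IN, dns.rdatatype.SOA,
        f". . {last_block_time} 1800 900 604800 86400")
    print(soa.serial)  # 1616128809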

So, right now (1) I patch in manually, (2) I just put up with the current SOA Serial, which is bad because it means I can't detect failure to update, and (3) & (4) I do externally with a combination of bind (maintaining a copy of the ICANN root) & a python script.

I assume you do the AXFR from the stored/dumped database, not the live data. The original dumpzone I copied used the live data, but the new dumpzone patch (I think Matt made it) uses the stored/dumped database, which is better and solves some race conditions without needing locks.

buffrr commented 3 years ago

Remove the SIG0 and include the --no-sig0 cmd line flag, so a std server can poll the SOA Serial over UDP. bind will not accept any answers where SIG0 is present - so for now I either use my fork (which is getting kinda old), or apply a patch I have every time, which is a total pain.

I think there will be a --no-sig0 option in hsd soon. Matt also mentioned that we might use a standard algorithm for SIG0 so that it works with other software (in addition to --no-sig0)

Fix the SOA Serial to be the unix-time of the last block that was included in the database creation

I believe Matt was talking about this as well. Yeah, I don't see why not.

Merge in the ICANN data.

I will have to check if we have all the ICANN data available in hsd. We need to get an up to date version somehow (it seems easier to just get a copy from IANA)

Sign the merged data. For AXFR, you would have to sign it "correctly", as you can't create black/white lies for every single name that doesn't exist. Of course, it can be signed externally, but as hsd has validated the PoW, it really should be the one to sign it. Having it so no data can leave hsd unless it has been signed is, I think, an important point of principle.

Sure, it's nice to do this on "principle", but if SIG0 supports common algorithms, that should work much faster! It's also mentioned in the AXFR RFC, because it appears to be standard practice to use SIG0 or TSIG.

This is from RFC 5936:

The client MAY include one transaction integrity and authentication resource record, currently a choice of TSIG [RFC2845] or SIG(0)

Hmm, but I'm not sure if that's the same thing, because here they mention the client.

You can sign it once you get the root zone. For now at least, if SIG0 works with AXFR, I think this is the most practical solution.

I assume you do the AXFR from the stored/dumped database, not the live data.

I updated it to use the snapshot instead of live data just like the new PR.

IXFR is pretty neat as well with these incremental updates, so we'll see if we can support that at some point.

james-stevens commented 3 years ago

I will have to check if we have all the ICANN data available in hsd

You can maintain a copy by AXFR/IXFR - REFRESH & RETRY in the SOA tell you the preferred update intervals; just poll the SOA Serial over UDP, then XFR if it's changed.
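
A minimal sketch of that poll-then-transfer loop in Python with dnspython (server address and port are just the ones from the dig example above; adjust to wherever the zone is served):

    import dns.message, dns.query, dns.rdatatype, dns.zone

    SERVER, PORT = "127.0.0.1", 5349

    def soa_serial(server, port):
        # Poll the SOA over UDP and return its serial
        q = dns.message.make_query(".", dns.rdatatype.SOA)
        r = dns.query.udp(q, server, port=port, timeout=5)
        for rrset in r.answer:
            if rrset.rdtype == dns.rdatatype.SOA:
                return rrset[0].serial
        return None

    last_serial = None
    serial = soa_serial(SERVER, PORT)
    if serial is not None and serial != last_serial:
        # Serial changed -> pull the whole zone
        zone = dns.zone.from_xfr(dns.query.xfr(SERVER, ".", port=PORT))
        last_serial = serial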

Alternatively, you could just pull in the latest ICANN data when you do the hsd DB update. In my python, it re-merges the ROOT if either the ICANN or Handshake SOA Serial has changed, but doing it only when Handshake changes would almost certainly be fine, as the ICANN data changes so rarely - so long as Handshake continues to change at about the same rate.

If the update rate on handshake ever dropped a lot, say to once a day or less, then you'd probably need to do the same as me & re-merge if either zone updates.

You also need to take account of conflicts - there are three right now. My script can prioritise either Handshake or ICANN - I currently have it set to prioritise ICANN (see the sketch after the log excerpt below).

Mar 21 11:41:00 hsd user.notice merge_root: = Deleting duplicate xn--4dbrk0ce. in Handshake
Mar 21 11:41:00 hsd user.notice merge_root: = Deleting duplicate xn--cckwcxetd. in Handshake
Mar 21 11:41:00 hsd user.notice merge_root: = Deleting duplicate xn--jlq480n2rg. in Handshake
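
A minimal sketch of that priority-based de-duplication in Python with dnspython (file names are hypothetical stand-ins for the two zone dumps):

    import dns.name, dns.zone

    icann = dns.zone.from_file("icann-root.zone", origin=".", relativize=False)
    hns = dns.zone.from_file("hns-root.zone", origin=".", relativize=False,
                             check_origin=False)  # the hsd dump may lack apex NS

    prefer_icann = True
    for name in list(hns.nodes):
        if name == dns.name.root:
            continue  # the apex (SOA etc.) is handled separately
        if name in icann.nodes:
            loser, label = (hns, "Handshake") if prefer_icann else (icann, "ICANN")
            print(f"= Deleting duplicate {name} in {label}")
            loser.delete_node(name)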

Would be nice to DNSSEC validate the ICANN data as it comes in too. You can get the ICANN ROOT by XFR from these IPs ... (I believe they all support IXFR)

                        192.228.79.201; # b.root-servers.net
                        192.33.4.12; # c.root-servers.net
                        192.5.5.241; # f.root-servers.net
                        192.0.47.132; # xfr.cjr.dns.icann.org
                        2001:500:84::b; # b.root-servers.net
                        2001:500:2f::f; # f.root-servers.net
                        2001:7fd::1; # k.root-servers.net
                        2620:0:2830:202::132; # xfr.cjr.dns.icann.org
                        2620:0:2d0:202::132; # xfr.lax.dns.icann.org

What I do is get bind to maintain a local copy, as it has all the AXFR/IXFR polling code already, then just grab a copy from bind when I want one. I don't believe in reinventing wheels that have been reinvented far too many times already.

What would be nice is if hsd were a black box that produces the merged, signed zone from data it has cryptographically validated.

If hsd can produce a signed merged ROOT zone by XFR, then all the resolver stuff can be done in a standard copy of bind - hence if I was writing the code, that's what I would have done. The only DNS it would need to answer is an SOA Poll & an AXFR - that's it.

I have a small(ish) "C" program that can do only this from RFC text zone files. It answers SOA Polls & AXFR requests & nothing else, as the rest can be done in bind. Then it just re-reads the SOA Serial from the file when the mod-time changes.

... but if SIG0 supports common algorithms ...

Nobody uses sig0 any more. They use DNSSEC, cos it's better & does more.

DNS Cookies also add client-server session integrity. I'm not sure why some (or all) of the cookie isn't some kind of actual sig, but there is this ...

   You could consider the
   Client Cookie to be a weak client signature over the Server IP
   Address that the client checks in replies, and you could extend this
   signature to cover the request ID, for example, or any other
   information that is returned unchanged in the reply.

I don't think it's unreasonable to expect hsd to work with all existing std DNS s/w. If you think that is unreasonable, then we shall never agree. This continued insistence on retaining SIG0 (& insisting on breaking compatibility with std DNS s/w) is a big reason why I'm just happier to do my own thing and not get involved.

DNS underpins EVERYTHING that happens on the Internet & the ROOT zone underpins the DNS - so for me, making the ROOT zone maximally compatible is not a question. It's simply mandatory.

IXFR is pretty neat

Yes

I feel the format is a bit kludgy, but yes, functionally, it's really neat - and bind can convert an AXFR into an IXFR by finding the differences - ixfr-from-differences yes;

It's pretty much as good as dynamic updates, in both speed & bandwidth, except that, using the SOA Serial, you can catch up from any point in the past, from any master or slave of the same data sequence - another reason the SOA Serial in hsd should be correct (linked to the data being served) & not just (effectively) a random number.

james-stevens commented 3 years ago

AXFR rfc because it appears to be a standard practice to use SIG0 or TSIG

For AXFR, everybody uses TSIG - but only where they need to restrict who has permission to XFR the zone

Most ppl are quite sensitive about this, but for ROOT zone servers, it's irrelevant.

james-stevens commented 3 years ago

So here's the best I could come up with

hsd acts as a black-box that validates both the PoW/Handshake data & the Verisign DNSSEC data, merges the two zones into one, signs it and makes it available by AXFR, inc SOA Polling over UDP - where the handshake SOA Serial reflects the version of the handshake data, (say) using the unix timestamp from the last block that made it into the stored DB as the SOA Serial - most big zones use unix time as the SOA Serial these days.

Keys to sign the zone are generated once & stored (although can be copied from a backup/different system / removed & regened), but don't need to be accessible. Signing uses ECDSA384. Keys can be copied between hsd instances for the purposes of backup / failover / load-balancing.

"Resolver Service Providers" provide a public resolver service based on getting a ROOT zone from hsd that's been signed using keys generated & stored by their system. This is easy to containerize, so anybody could run such a service, if only on a Pi for themselves.

If they want to provide a public service, they can publish their DS into the HNS crypto-ledger somehow, maybe by buying a TLD & giving it a DS, but no NS, so ppl have secure access to their ROOT DS.

Users can then choose which RSPs they are going to query / trust (just as most users choose which website/app they will trust to buy & hold their crypto-currency, instead of holding it on their own PC). They can either just query as clients (like any std public resolver service), or XFR the ROOT zone from the provider & hard-code the DS into their resolver - this is what my handshake-resolver does, copying the signed ROOT from me.

If The DANE Authentication Chain Extension for TLS makes it to RFC (as far as I can see it hasn't yet - but I've seen nothing since late 2020, so maybe it's been abandoned in favour of something else?), then all the client has to do is ignore the ROOT & TLD DNSSEC data provided & query their own resolver for it (as they would have done without this extension), and most of the acceleration provided by this extension would still be beneficial.

All RSP should allow AXFR, so the zone data they are relying on can be checked & validated by anybody - websites showing how good they are would be trivial to construct - e.g how soon they update, how accurate their data is, query performance, etc.

This is what I constructed using the python script in this project.

Clearly it is not as good as it possibly could be (as above), but it works, is reliable and supports full RFC DNSSEC in all handshake & ICANN domains & sub-domains to any level, inc support for DNS cookies for cache requests & client requests.

It uses industry std s/w, compiled to binary with many yrs & instances of live service & reasonably fast response times.

For these reasons, to me, that makes it a technically superior solution to what is offered by / proposed for hsd's resolver.

I've offered this before, but nobody liked it more than sticking with SIG0, which ensures hsd will not interoperate with any other DNS s/w, so I gave up.

buffrr commented 3 years ago

hsd acts as a black-box that validates both the PoW/Handshake data & the Verisign DNSSEC data, merges the two zones into one, signs it and makes it available by AXFR, inc SOA Polling over UDP - where the handshake SOA Serial reflects the version of the handshake data, (say) using the unix timestamp from the last block that made it into the stored DB as the SOA Serial - most big zones use unix time as the SOA Serial these days.

If hsd can produce a signed merged ROOT zone by XFR, then all the resolver stuff can be done in a standard copy of bind - hence if I was writing the code, that's what I would have done. The only DNS it would need to answer is an SOA Poll & an AXFR - that's it.

If I was trying to scale hsd though, I would try to get the data as fast as I can from it, so I would be able to do my own processing. I think the AXFR + merged zone could be done in a plugin to see how well it works. The zone is pretty small currently (~47M), so it wouldn't make a big difference if the signing was done in javascript, but I expect dnssec-signzone to be faster at this (and that tool is also battle-tested).

I think you could also make a plugin that does AXFR and merges the zone by running your python script. Have you tried signing zones that are 10GB or more (I think dnssec-signzone also needs to do sorting; not sure if it does it in memory)? There are very few websites on handshake right now; I expect the root zone to become much larger, so maybe something to keep in mind.

Clearly it is not as good as it possibly could be (as above), but it works, is reliable and supports full RFC DNSSEC in all handshake & ICANN domains & sub-domains to any level, inc support for DNS cookies for cache requests & client requests.

Yeah, that's also what I'm thinking - this is still an option. I personally prefer something like this to relying on hsd directly to sign the zone and export it, but I know it's more convenient. Maybe with a plugin that does the merged zone it can be even easier.

I've offered this before, but nobody liked it more than sticking with SIG0, which ensures hsd will not interoperate with any other DNS s/w, so I gave up.

It's okay to have some disagreements! SIG0 is a valid option, but I don't think it is useful if I can run the hsd root server with any std DNS software I want (which I can with online signing). When I'm using the hsd ROOT server though, I'm not thinking of serving it at the scale of 1.1.1.1, nor should anyone. It should be reasonably light and work with existing DNSSEC software under moderate traffic (it can still serve thousands of devices very easily), and I can also run it on my laptop as a light client without needing to be a full node.

If the merged zone also works well enough and some people prefer it, we can also have that as an option, especially since plugins now open up lots of possibilities. For light clients and hsd's built-in root server, I still think online signing is more suitable (again, not trying to be the next 1.1.1.1 here with the hsd root server). So at least for me, I don't really disagree with this.

You mentioned using PowerDNS as a backend as well. I think with AXFR/IXFR it may be possible to keep an SQL backend up to date and use any of the signing options that PowerDNS provides.

I feel the format is a bit kludgy, but yes, functionally, it's really neat - and bind can convert an AXFR into an IXFR by finding the differences - ixfr-from-differences yes;

I thought IXFR would be pretty easy to implement without doing a diff, but it seems like hsd's database is mainly optimized for looking up names. So even the order of data in the zone dump is not preserved (I didn't dig too much into this, but that's what I've heard so far). It doesn't seem like there is an easy way to only get the updates after x number of blocks, or to only get changes after a specific time. So ixfr-from-differences or something similar may be the only option right now :( I still want to look into this more if I ever get some free time, but I've been very busy these days.

buffrr commented 3 years ago

If The DANE Authentication Chain Extension for TLS makes it to RFC (as far as I can see it hasn't yet - but I've seen nothing since late 2020, so maybe its been abandoned in favour of something else?), then all the client has to do is ignore the ROOT & TLD DNSSEC data provided & query their own resolver for it (as they would have done without this extension), and most of the acceleration provided by this extension would still be beneficial.

Well, there has been no progress on this so far. Actually, it is not that great, because it doesn't have denial of existence. So an attacker could strip the whole DNSSEC chain from the TLS extension and we wouldn't notice (it would look like any other certificate that doesn't have this extension), so downgrade attacks to one of the CAs would still be possible. Making a TLSA lookup is better, but yeah, if it gets browser support it is great for accelerating adoption.

james-stevens commented 3 years ago

i would try to get the data as fast as i can

dnssec-signzone is not that quick, but speed isn't really a huge issue - the stored db is updated every 3 hrs, so you've got plenty of time. TTLs are commonly a day; dot-COM uses 2 days, so there's that as well. Not sure where your 21600 comes from, but you might want to consider increasing it - your SOA MIN is 86400, and for a referral-only TLD it's not unreasonable to keep these the same.

To really scale it, you need to do dynamic signing - re-signing only those records that need re-signing, on the fly as the data changes come out of the crypto-ledger. You can do dynamic signing with bind, using it as a signing slave, but the point of signing in hsd is because hsd is the one that has validated the data, so it should be the one that signs it. It also makes implementation super simple & all the DNSSEC code hsd needs is already there.

I think dnssec-signzone also needs to do sorting not sure if it does it in memory

yes & yes - but for scaling I wouldn't use it.

To do dynamic signing in bind, if hsd provided a merged zone, you'd need three views:

  1. Loads the zone from a text file, or by XFR from hsd
  2. Does the ixfr-from-differences
  3. Loads from (2) and signs based on the differences (also maintains IXFR support)

It's not impossible, but my current bind is already using 10x the memory of hsd in order to keep all the different copies in memory, and this scheme adds one more copy.

SIG0 is a valid option

  1. Not if you can't switch it off
  2. Existing clients & servers don't support it
  3. I don't see what it adds if the ppl who use it are the ones running hsd on their local lan anyway. If they are behind a NAT, then externally spoofing replies to their clients would be essentially impossible, and DNS Cookies would prevent that anyway.

    & PSK is simply not suitable for public services. People don't use it for a reason.

I expect the root zone to become much larger

dot-COM & dot-EU (~10M) both use NSEC3+OptOut with dynamic signing. Dot-EU actually uses bind for this. COM has a proprietary solution based on Oracle, because they have loads-of-money.

The question isn't so much how big it will get, but how many TLDs will be signed. If they want to use DANE they will need to be signed, but a lot of domains may just be held as an investment, or whatever - as is the case for dot-COM - the vast majority of their domain names are not in use.

Using NSEC3+OptOut only produces an NSEC3 record for zones that have a DS, so the number of NSEC3 & RRSIG records can be significantly reduced. BUT NSEC3 is higher load than NSEC, so if most sub-domains are signed, it's not worth using.

You mentioned using PowerDNS as backend as well

If you paired it with PowerDNS, there are two options I would consider:

  1. Use the PowerDNS rest/api and do live updates - the problem with this is how to keep PowerDNS & hsd data sync'ed, especially after a restart of either. I don't know enough about hsd to answer this, but maybe something like a TXT record in the PowerDNS data set to keep track of where you are up to in the crypto-ledger (see the sketch after this list). When you use the rest/api (& in general), PowerDNS manages the SOA Serial for you, so I don't think you can use this to track synchronization.
  2. PowerDNS uses plug-ins for the back-end data store. It currently has about 10 to 12, of which MySQL & RFC Text files are two I know of. If you could write hsd as a backend DB for PowerDNS that might be a neat solution, but it may expect you to be able to answer enquiries you simply can't. I see pipe & remote are present, but they may not be DB plugins, cos I know for sure geoip isn't!
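
A minimal sketch of option (1) in Python, assuming a local PowerDNS authoritative server with its HTTP API enabled (the API key, root zone id encoding and the TXT sync-marker name are all hypothetical - check them against your PowerDNS version):

    import requests

    API = "http://127.0.0.1:8081/api/v1/servers/localhost"
    HEADERS = {"X-API-Key": "changeme"}   # hypothetical API key
    ZONE_ID = "."                         # check how your version encodes the root zone id

    def replace_rrset(name, rtype, ttl, contents):
        # PATCH a single RRset into PowerDNS (changetype REPLACE)
        rrset = {"name": name, "type": rtype, "ttl": ttl,
                 "changetype": "REPLACE",
                 "records": [{"content": c, "disabled": False} for c in contents]}
        r = requests.patch(f"{API}/zones/{ZONE_ID}",
                           headers=HEADERS, json={"rrsets": [rrset]})
        r.raise_for_status()

    # Push an updated delegation, then record how far along the chain we are
    replace_rrset("humbly.", "NS", 21600, ["ns1.nameserver.io.", "ns2.nameserver.io."])
    replace_rrset("_hsd-sync.", "TXT", 60, ['"block-time=1616128809"'])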

So even the order of data in the zone dump is not preserved

That's my understanding - there is a comment in the code to this effect, but you can still have a secondary in-memory database that provides a sorted order & holds the DNSSEC records - they do not need to be stored, as they are derived data.

IMHO implementing IXFR would be challenging, so personally I'd leave it for ppl to do externally. PowerDNS does not support it, but they have an external utility that can do it; it's been in beta for a few years though, so they don't recommend you use it and suggest using bind instead.

Making a TLSA lookup is better

Sure, but the problem this was trying to solve is that the DNSSEC validation, all the way up to & including the ROOT DS, really needs to be done in-browser to give comparable levels of security to CAs. Collecting all those DNS records will take a while, so it will cause a connection delay, possibly quite a long one.

james-stevens commented 3 years ago

I've just been playing with the latest bind on Alpine (9.16.6) - it's a massive improvement over what came before.

You basically give it the key files & an updating text zone file and it can dynamically sign & do ixfr-from-differences, all on the fly. It doesn't even need to XFR the zone in any more; you just need to run rndc reload . after the flat file changes.

It used to be that ixfr-from-differences only worked for slaves - i.e. where the zone was XFR'd in - but not for text zone files.

I've not found any way to have control over the SOA Serial in the resulting signed zone, so if you have two signing servers it's not clear how you'd keep their SOA Serials in sync (so it makes sense to slaves who are talking to both), except by clearing away all the data & having them both start from the same fresh merged file at the same time. As both instances would be using the same algorithm, I'd hope their SOA Serials would stay in step!

Unless you feel very strongly about the PoW & having hsd sign the data, there's really no point. I know there are people who have very strong feelings about PoW.

It would still be nice if hsd could

  1. automatically dump the database every time it re-creates the database store
  2. merge in the ICANN data before writing the text file
  3. have the option to run a script afterwards - e.g. rndc reload .

That way there wouldn't be a need for any scripts to glue it together, but it's actually no big deal to just poll for a new dumped file & do (2) & (3) in shell.

buffrr commented 3 years ago

Unless you feel very strongly about the PoW & having hsd sign the data, there's really no point. I know there are people who have very strong feelings about PoW.

This is what I've been saying - there is really no point for hsd to do this. It will only put load on those raspberry pis, and anyone that wants something more serious will go with bind anyway.

It would still be nice if hsd could

  1. automatically dump the database every time it re-creates the database store

It would be nice if a full database dump could be avoided, but so far there is no straightforward way to do this. I'm thinking that if plugins can hook into every update inserted into hsd's database, they could also keep an SQL database (or any other db file) up to date as well. In that case, even IXFR becomes easier to implement.
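
A minimal sketch of the kind of tables such a plugin could maintain (schema and names are hypothetical - the point is a per-serial change log from which IXFR deltas could later be generated):

    import sqlite3

    db = sqlite3.connect("hsd-root.db")
    db.executescript("""
    CREATE TABLE IF NOT EXISTS rrset (
        name  TEXT NOT NULL,
        rtype TEXT NOT NULL,
        ttl   INTEGER NOT NULL,
        rdata TEXT NOT NULL,
        PRIMARY KEY (name, rtype, rdata)
    );
    -- one row per add/delete, keyed by the SOA serial it belongs to
    CREATE TABLE IF NOT EXISTS change_log (
        serial INTEGER NOT NULL,   -- e.g. unix time of the block
        op     TEXT NOT NULL,      -- 'add' or 'del'
        name   TEXT NOT NULL,
        rtype  TEXT NOT NULL,
        ttl    INTEGER NOT NULL,
        rdata  TEXT NOT NULL
    );
    """)

    def record_change(serial, op, name, rtype, ttl, rdata):
        # Called from the (hypothetical) plugin hook for every tree update
        db.execute("INSERT INTO change_log VALUES (?, ?, ?, ?, ?, ?)",
                   (serial, op, name, rtype, ttl, rdata))
        if op == "add":
            db.execute("INSERT OR REPLACE INTO rrset VALUES (?, ?, ?, ?)",
                       (name, rtype, ttl, rdata))
        else:
            db.execute("DELETE FROM rrset WHERE name=? AND rtype=? AND rdata=?",
                       (name, rtype, rdata))
        db.commit()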

james-stevens commented 3 years ago

It would be nice if a full database dump can be avoided

The problem with doing anything incrementally is ensuring you maintain sync - which is harder than you think. But live DNS updates would be really nice. The most logical way to support this would be with Dynamic DNS Updates as all DNS servers should support this.
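
A minimal sketch of what pushing a change as a Dynamic DNS Update (RFC 2136) could look like in Python with dnspython (the target server is hypothetical and would need to be configured to accept updates for the root zone; the records are just the ones quoted later in this thread):

    import dns.query, dns.update

    # Build an UPDATE message for the root zone
    update = dns.update.Update(".")
    update.replace("humbly.", 21600, "NS", "ns1.nameserver.io.")
    update.add("humbly.", 21600, "NS", "ns2.nameserver.io.")

    # Send it to the downstream server
    response = dns.query.tcp(update, "127.0.0.1", port=53, timeout=5)
    print(response.rcode())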

IXFR solves the sync issue by having a fall-back to AXFR - if you ask for an update from a serial number that is too old, it gives you an AXFR instead. You can also force a retransfer (AXFR) on the slave (using rndc) and it resets sync. Or you remove all the slaved zone files and killall -9 named, depending on how moody you feel.

--

I think I've got the bind dynamic signing working now, but getting it working with NSEC3+OptOut is not easy - you can't enable it until the zone has been signed for the first time which (for the full zone) I couldn't get to complete in over 30 mins.

So I started with a nearly empty zone, let that finish being signed (a few seconds), enabled NSEC3+OptOut, then added all the other records back. That then completed reasonably quickly and seems to take updates reasonably fast.

There appears to be an nsec3param DNSSEC policy config option in the pipeline, but it doesn't look like it's been implemented yet. My named & named-checkconf don't like it.

Tomorrow I'll update this project to work with bind's dynamic signing - I tried to get it working today, but just caused a lot of pain on the slaves, so I need to get it completely ready on a dev system first!

james-stevens commented 3 years ago

Got the bind dynamic signing working & switched my handshake-resolver container & bridge.jrcs.net over to using it.

I run it off-site in my office data centre, then upload it to the online data centre over VPN.

Mar 27 03:10:43 hasroot local0.info named-handshake-bridge[2594]: client @0x55be56956d08 192.168.5.180#54035 (.): transfer of './IN': IXFR ended: 1 messages, 178 records, 4263 bytes, 0.001 secs (4263000 bytes/sec)
Mar 27 04:14:24 hasroot local0.info named-handshake-bridge[2594]: client @0x55be572aa408 192.168.5.180#39281 (.): transfer of './IN': IXFR ended: 1 messages, 90 records, 2417 bytes, 0.001 secs (2417000 bytes/sec)
Mar 27 05:18:28 hasroot local0.info named-handshake-bridge[2594]: client @0x55be56e5a008 192.168.5.180#38029 (.): transfer of './IN': IXFR ended: 1 messages, 108 records, 2828 bytes, 0.001 secs (2828000 bytes/sec)
Mar 27 05:24:07 hasroot local0.info named-handshake-bridge[2594]: client @0x55be56956d08 192.168.5.180#41631 (.): transfer of './IN': IXFR ended: 1 messages, 10 records, 709 bytes, 0.001 secs (709000 bytes/sec)
Mar 27 06:22:28 hasroot local0.info named-handshake-bridge[2594]: client @0x55be567d7fa8 192.168.5.180#36681 (.): transfer of './IN': IXFR ended: 1 messages, 192 records, 4995 bytes, 0.001 secs (4995000 bytes/sec)
Mar 27 07:26:45 hasroot local0.info named-handshake-bridge[2594]: client @0x55be569546c8 192.168.5.180#49687 (.): transfer of './IN': IXFR ended: 1 messages, 134 records, 3442 bytes, 0.001 secs (3442000 bytes/sec)
Mar 27 08:30:09 hasroot local0.info named-handshake-bridge[2594]: client @0x55be57259888 192.168.5.180#54337 (.): transfer of './IN': IXFR ended: 1 messages, 98 records, 2629 bytes, 0.001 secs (2629000 bytes/sec)
Mar 27 09:33:36 hasroot local0.info named-handshake-bridge[2594]: client @0x55be572aa408 192.168.5.180#48171 (.): transfer of './IN': IXFR ended: 1 messages, 180 records, 4408 bytes, 0.001 secs (4408000 bytes/sec)
Mar 27 10:06:54 hasroot local0.info named-handshake-bridge[2594]: client @0x55be569546c8 192.168.5.180#40633 (.): transfer of './IN': IXFR ended: 1 messages, 10 records, 717 bytes, 0.001 secs (717000 bytes/sec)
Mar 27 11:11:44 hasroot local0.info named-handshake-bridge[2594]: client @0x55be572a7dc8 192.168.5.180#55393 (.): transfer of './IN': IXFR ended: 1 messages, 87 records, 2407 bytes, 0.001 secs (2407000 bytes/sec)
Mar 27 12:15:00 hasroot local0.info named-handshake-bridge[2594]: client @0x55be567d8fc8 192.168.5.180#39179 (.): transfer of './IN': IXFR ended: 1 messages, 91 records, 2517 bytes, 0.001 secs (2517000 bytes/sec)
Mar 27 13:19:32 hasroot local0.info named-handshake-bridge[2594]: client @0x55be572ab128 192.168.5.180#45979 (.): transfer of './IN': IXFR ended: 1 messages, 90 records, 2466 bytes, 0.001 secs (2466000 bytes/sec)
Mar 27 14:23:01 hasroot local0.info named-handshake-bridge[2594]: client @0x55be57259888 192.168.5.180#50149 (.): transfer of './IN': IXFR ended: 1 messages, 10 records, 717 bytes, 0.001 secs (717000 bytes/sec)
Mar 27 15:27:08 hasroot local0.info named-handshake-bridge[2594]: client @0x55be572ab128 192.168.5.180#58081 (.): transfer of './IN': IXFR ended: 1 messages, 12 records, 759 bytes, 0.001 secs (759000 bytes/sec)
james-stevens commented 3 years ago

The benefit to me of having AXFR instead of dumpzone is that dumpzone requires that hsd & my stuff are running in the same namespace.

Whereas, with AXFR, hsd can be run anywhere & all I need to know is its IP Address. It would just make it a lot easier for me to make a container of just my python merger & bind signer.

Microservices are the future.

buffrr commented 3 years ago

Ah nice! I would like to give your setup a try. Yep, being able to dump the zone over the network is very convenient. It's easy to just dig @ hsd and get the root zone. It'd be nice if hsd had support for AXFR without plugins. Anyways, the rinfo.patch has been merged into hsd, so the AXFR plugin doesn't need any patches to work.

Doing some analysis with grep on the root zone, so far only 173 sites have a DS record. I'm interested in ed25519 and it seems that most resolvers now support it - https://ed25519.nl (out-of-date list) - but it obviously should be avoided if compatibility with legacy software is important. I also tallied the combinations of algorithm and digest type used in the root zone's DS records.
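
A minimal sketch of how that tally could be reproduced in Python with dnspython (assuming root.zone is a zone dump in standard text format):

    import collections
    import dns.rdataclass, dns.rdatatype, dns.zone

    zone = dns.zone.from_file("root.zone", origin=".", relativize=False,
                              check_origin=False)

    tlds_with_ds = set()
    digests = collections.Counter()
    for name, node in zone.nodes.items():
        ds = node.get_rdataset(dns.rdataclass.IN, dns.rdatatype.DS)
        if ds is None:
            continue
        tlds_with_ds.add(name)
        for rr in ds:
            digests[(rr.algorithm, rr.digest_type)] += 1

    print(len(tlds_with_ds), "names with a DS record")
    for (algorithm, digest_type), count in digests.most_common():
        print(f"DNSKEY alg {algorithm}, digest type {digest_type}: {count} DS records")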

It's interesting to see that 3 sites have a DS record with SHA-1 as an option. I mean, the SHA-256 RFC has been a standard since 2006 and it's mandatory for validating resolvers to support it! Two of these sites also have a DS record with SHA-384 as an option, so that's neat.

james-stevens commented 3 years ago

It'd be nice if hsd has support for AXFR without plugins

For me, it's only useful if it can provide the entire ROOT zone (merging in the ICANN names). I might try and make something using the light client - if you don't beat me to it!

most resolvers now support it

Yes. The question is more - who has/hasn't upgraded! For example, RedHat/CentOS v7 has only quite recently upgraded from bind 9.9 to bind 9.11 - which is actually a really unusual move for them.

There is a tool called fpdns (fingerprint DNS), which can guess the version & package of a DNS server from what it can & can't support & the style of its responses, but (when I last looked) it'd not been maintained for some considerable time - although it looks like somebody has been on it quite recently.

It's interesting to see that 3 sites have a DS record with SHA-1 as an option

There's no harm in putting in SHA1, if you also have SHA256 &/or SHA384. The RFC says use the best that matches, so this provides full backwards compatibility without compromising security. I guess the only disadvantage is that handshake has a packet limit of 512 bytes.

By default, PowerDNS will give you all three, so my guess is that's probably where they come from.

Does handshake support DNS-style compression in the DNS packet? It might be nice to have:

humbly.                 21600   IN      NS      ns1.nameserver.io.
humbly.                 21600   IN      NS      ns2.nameserver.io.
humbly.                 21600   IN      NS      ns3.nameserver.io.
humbly.                 21600   IN      NS      ns4.nameserver.io.

Would save quite a few bytes on something like this.

When DNS first came out, UDP was limited to 512 bytes, which is why the "compression" was implemented. It's also where the myth of a limit of 13 NS per domain comes from - it's just the most you can fit in 512 bytes. It's why all the ICANN ROOT servers have a single-letter hostname - it maximises the DNS compression.

Two of these sites also have a DS record with SHA-384

violation doesn't work. Both its NS give REFUSED. humbly is from me. It belongs to Mike Carson and is on the registry platform I run for him. I told Mike that SHA256 was the only one that mattered, but to put in all three if he could.

We also have api base c consultancy defi economy explorenow gin hacktoberfest hb hire holding influencer island js mke nom paybtc plz startup teck tni toby txt viewnow www xn--9krq6q zen - all these are ECDSA256 signed, Mike (or whoever) just hasn't got round to adding the DS yet :)

I use ECDSA256 for improved compatibility, but they're hosted on a PowerDNS backend, so the mechanics of changing algorithm are easy, although getting it right in DNSSEC is a bit of a game. I don't think there is a single DNS server that supports switching algorithm the way the RFC describes it. But I had to do it for ServiceNow - so I am a witness that it can be done, if a little tricky!!

But for now general opinion is that ECDSA256 is strong enough that the only real reason to do a key rollover is if your keys have been compromised.

buffrr commented 3 years ago

For me, it's only useful if it can provide the entire ROOT zone (merging in the ICANN names).

curl -s https://www.internic.net/domain/root.zone | grep -vwE '(NSEC|RRSIG)' > root.zone && dig @127.0.0.1 -p 5349 . axfr >> root.zone

That doesn't remove duplicates though :)

I might try and make something using the light client - if you don't beat me to it!

I don't think the light client can do that because it doesn't have the whole root zone

There is a tool called fpdns (fingerprint DNS),

Hmm, I can see how it might be able to do that. The repo doesn't have a readme or any details, but it's an interesting idea.

There's no harm in putting in SHA1

It increases the response size, but not by much. Including SHA1 doesn't add any benefit either, because if someone is using a super old DNSSEC resolver from ~2006 they are likely using other vulnerable software, so they have bigger things to worry about (I doubt they care about DNSSEC or will ever use Handshake). I mean, even COM. only has SHA-256, and they care a lot about compatibility.

We also have api base c consultancy defi economy explorenow gin hacktoberfest hb hire holding influencer island js mke nom paybtc plz startup teck tni toby txt viewnow www xn--9krq6q zen - all these are ECDSA256 signed, Mike (or whoever) just hasn't got round to adding the DS yet :)

Nice names!

Does handshake support DNS style compression in the DNS packet - might be nice to have

Domain name compression is enabled by default

It's why all the ICANN ROOT servers have a single letter hostname

They should've picked something shorter than *.root-servers.net lol but this probably compresses well.

I use ECDSA256 for improved compatibility

Yeah, ed25519 still needs a bit more time. I don't mind using it, especially for Handshake, because it relies on DANE anyway, so users need to install something (even if browsers support DANE, it will be with modern DNSSEC).

james-stevens commented 3 years ago

That doesn't remove duplicates though

Right - it also needs to remove DNSKEY.

The benefit of having it already merged is that it then doesn't need any glue script(s). If I need a glue-script, I may as well use the python I already have. What would be nice is to feed the merged zone directly into bind over IP using AXFR.

I don't think the light client can do that because it doesn't have the whole root zone

K - bugger.

They should've picked something shorter than *.root-servers.net lol but this probably compresses well

Maybe, but I don't think it would have made enough difference to fit in a 14th NS, & the "compression" means it only gets mentioned once - so no gain. Plus, the 512-byte limit is history now anyway - although some buggy routers don't like fragmented UDP over IPv6, so keeping it below ~1250 can have benefits, & some early DSL routers didn't support UDP of more than 512 bytes.

Nice names!

Thanks - not all ours. Between myself & Mike, we're offering registry as a service. We have a few registrars, inc 101domains, & a fully RFC-standard EPP/XML interface - if you're interested, ask Mike (he's usually in the telegram group, I think), or Namebase should be opening up an automated sign-on service soon.

In the last month registrations have really started to increase. Pretty sweet way to make a passive income.

buffrr commented 3 years ago

The benefit of having it already merged is that it then doesn't need any glue script(s). If I need a glue-script, I may as well use the python I already have.

I see - I might get a chance to do this in the plugin at some point.

It could use AXFR to get the ICANN zone from one of those addresses you mentioned (should be configurable) and maybe verify the result with ICANN's KSK.

    192.228.79.201; # b.root-servers.net
    192.33.4.12; # c.root-servers.net
    192.5.5.241; # f.root-servers.net
    192.0.47.132; # xfr.cjr.dns.icann.org
    2001:500:84::b; # b.root-servers.net
    2001:500:2f::f; # f.root-servers.net
    2001:7fd::1; # k.root-servers.net
    2620:0:2830:202::132; # xfr.cjr.dns.icann.org
    2620:0:2d0:202::132; # xfr.lax.dns.icann.org

Probably easier if it gets the zone from https://www.internic.net/domain/root.zone since it's kept up to date and served over HTTPS (kinda consistent with the threat model of ICANN sites), but verifying the result with ICANN's KSK is a much nicer solution.
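
A minimal sketch of that KSK check in Python with dnspython (the hard-coded DS is the published root trust anchor; this only verifies the DNSKEY RRset itself - validating every RRset in the transferred zone would build on the same calls):

    import dns.dnssec, dns.message, dns.name, dns.query
    import dns.rdata, dns.rdataclass, dns.rdatatype

    ROOT = dns.name.root
    SERVER = "192.33.4.12"   # c.root-servers.net, from the list above

    # Fetch the root DNSKEY RRset plus its RRSIG
    q = dns.message.make_query(ROOT, dns.rdatatype.DNSKEY, want_dnssec=True)
    resp = dns.query.tcp(q, SERVER, timeout=10)
    dnskeys = resp.find_rrset(resp.answer, ROOT, dns.rdataclass.IN, dns.rdatatype.DNSKEY)
    sigs = resp.find_rrset(resp.answer, ROOT, dns.rdataclass.IN,
                           dns.rdatatype.RRSIG, dns.rdatatype.DNSKEY)

    # The published root trust anchor (KSK-2017 DS record)
    trust_anchor = dns.rdata.from_text(
        dns.rdataclass.IN, dns.rdatatype.DS,
        "20326 8 2 E06D44B80B8F1D39A95C0B0D7C65D08458E880409BBC683457104237C7F8EC8D")

    # One of the served DNSKEYs must hash to the trust anchor ...
    assert any(dns.dnssec.make_ds(ROOT, key, "SHA256") == trust_anchor for key in dnskeys)

    # ... and the DNSKEY RRset must be signed by it (raises ValidationFailure if not)
    dns.dnssec.validate(dnskeys, sigs, {ROOT: dnskeys})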

This is unlikely to be anytime soon but i will get to it eventually.

james-stevens commented 3 years ago

What I do is slave it from those IPs to a local instance of bind, the script will poll the SOA & AXFR from the local copy.

bind will then take care of any failures & do IXFR, so I know I can always get a fast local copy with no concerns over timeouts, temp failures etc

A slave does not validate, so I do that after getting it

buffrr commented 3 years ago

Alright, so I had some time to update the plugin. It now loads the ICANN root zone via AXFR, verifies DNSSEC with ICANN's KSK (with some additional checks, like ensuring the NSEC chain is complete, types in the bitmap exist, etc.), and merges the zones.

This process is pretty fast, so it is currently done on the fly: it grabs a fresh copy and merges the result with the Handshake RRs as you make an AXFR request. But in case the ICANN zone changes and Handshake doesn't, I probably need to cache a copy of ICANN's zone in memory to make sure answers are consistent for the serial number given by hsd.

It will retry requests in case of errors, and you can specify multiple AXFR servers - it will try the next one if one fails. In case of name collisions, the plugin prefers Handshake names, since that's the default in hsd, but you can use --axfr-prefer-icann.

Pretty cool - I tried it with Knot DNS and it was able to load the zone from hsd (merged with ICANN names) and serve it.

Note: this is still experimental - I need to add some tests, etc. - but so far it seems to work well (I tried it with Knot DNS and dig).

buffrr commented 3 years ago

I would like to work on Dynamic DNS updates (I may see if it's possible as a plugin first) or IXFR/NOTIFY, since those would make it work out of the box. But both IXFR and Dynamic updates require a way to get notified of changes to the urkel tree. I also thought of avoiding the tree entirely and just parsing the blocks to see what changed, but that requires handling re-organizations, etc. It's probably easier to observe changes as they are being committed to the tree instead.

I'm finally done with exams :) so I should have some free time. I need to work on letsdane and some other projects, but I will look into this as well.