Mesh @ DWeb Camp 2019 📡
https://dweb-camp-2019.github.io/meshnet/

How to DNS? #12

Closed benhylau closed 4 years ago

benhylau commented 5 years ago

I would be excited if people try out radical DNS solutions like dnssb, but perhaps we need some centralized DNS on site? I don't know... what do people think?

RangerMauve commented 5 years ago

P2P apps usually make use of some sort of DHT for service discovery, paired with unique identifiers.

For Dat, you generate a public key and can use that to open your archive as a website in Beaker.

Similarly, IPFS uses public keys for IPNS lookups, and hashes for general file lookup.

What sort of services do you envision would need DNS?

amark commented 5 years ago

@RangerMauve I'm pretty sure both IPFS & DAT need DNS to resolve (even for their DHT), I think it is only SSB & GUN that can do local discovery.

Can anybody correct me if I'm wrong? I remember this specifically coming up with the Russian "switching off the internet" situation, and that it would still affect both IPFS & DAT (and obviously all blockchain-based systems, like Bitcoin, Ethereum, etc.).

RangerMauve commented 5 years ago

Dat needs DNS for the DHT and the DNS trackers, but it can work without it on a local network by using MDNS for peer discovery. I'm not sure if IPFS works without the internet, though.

If MDNS isn't viable on the mesh network it'll be more of a challenge. 😅
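
A quick way to sanity-check whether mDNS/DNS-SD traffic actually crosses the mesh (just a sketch, assuming avahi-utils is installed on two Linux nodes):

# on node A: advertise a dummy service over mDNS
avahi-publish-service test-service _http._tcp 8080 &

# on node B: see whether the advertisement shows up
avahi-browse --all --terse --resolve

If node B sees the service, MDNS-based peer discovery has a chance; if not, we're back to DHTs or manual peer lists.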

How does gun do peer discovery on the local network BTW?

benhylau commented 5 years ago

IPFS works without Internet, we have run it on an offline LAN. IPNS needs NTP to work, otherwise it doesn't know how to update and sync properly. On an offline LAN, you also need to manually add bootstrap peers, since the default ones are centrally hosted and no longer reachable without Internet.
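
For reference, swapping out the default bootstrap list looks roughly like this (a sketch only; the multiaddr and PeerID are placeholders for whatever node you actually run on the LAN, which you can get from ipfs id on that box):

# drop the default (Internet-hosted) bootstrap nodes
ipfs bootstrap rm --all

# add a bootstrap peer that is reachable on the local network (placeholder address and PeerID)
ipfs bootstrap add /ip4/10.0.0.5/tcp/4001/p2p/<PeerID>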

For blockchain apps, I imagine they could start a new toy chain at Camp and have apps that talk to that? The blockchain has this dependency on the Internet, and it shows once data becomes local, doesn't it :)

amark commented 5 years ago

Okay good, thank you for the correction. I was fed fake news on HN then!!!

@RangerMauve the GUN community had UDP multicast working. Somebody else was playing with Bluetooth discovery, IDK if they got it working or not.

gobengo commented 5 years ago

"Centralized DNS" would probably be really helpful to to make sure as many people as possible can try things on the mesh here.

Like if I bump into someone at the kombucha tent or whatever, and I'm telling them about some service I'm running, I want to be able to tell them quickly how to access that service from their phone (e.g. kombucha.dwebcamp). We should be able to communicate service addresses via word of mouth.

benhylau commented 5 years ago

I was initially thinking to have an IP everyone knows when they come (e.g. 10.0.0.1), and host an editable static website there that lists services with descriptions and manually associated IP addresses, as an alternative to (or in addition to) real DNS.

@gobengo I wonder if People's Open would be interested in running a DNS server?

gobengo commented 5 years ago

@benhylau @eenblam @paidforby might be capable of that. I'm out of town until about July 10 and also just don't have a lot of network operation experience to know how to make that happen.

If there were a DNS server operating onsite, is there a way we can make it easy to use automagically for non-mesh-nerds who just use their phones to connect to an SSID we tell them about? Like ideally we just tell everyone to connect to the 'dweb-camp-mesh' SSID and then they can type DNS names into their web browsers to reach services. I'm hoping they don't have to manually enter DNS servers, certs, 'network profiles', etc. Is that possible?

How does a WiFi router advertise the DNS server everyone should use?

amark commented 5 years ago

Yeah, I think the Captive Portal or Directory Listing page is the best idea. DNS seems unclean; a p2p system should not be dependent upon DNS.

paidforby commented 5 years ago

AFAIK, there is still no reliable, well-supported, decentralized DNS service. MDNS might be an option depending on the mesh routing protocol being used (IIRC, babel over IPv4 doesn't play nice with MDNS).

No matter what, if you want to resolve domain names to IP addresses, the simplest solution is to run a DNS server somewhere on the network. Since this should be a mesh network, anyone can start a DNS server on their node. Then that person would just need to tell others the mesh IP of their DNS server. That IP could be entered into either clients or routers as a preferred DNS. Even on the "real" internet, you, or more often your router, need to know the IP address of a DNS server; sometimes this is provided by your ISP, but any decent network operator (pun intended) knows a few off hand, e.g. 8.8.8.8 (Google DNS), 4.2.2.1 (Level 3 DNS), or 208.67.220.220 (OpenDNS). To answer @gobengo 's question, typically your wifi router hands clients the address of its preferred DNS server, and clients then query that server to turn domain names into IP addresses.
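
For what it's worth, on a Linux client pointing at such a server can be as simple as one line in /etc/resolv.conf (10.44.200.2 being a made-up mesh IP):

# /etc/resolv.conf
nameserver 10.44.200.2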

I realize this is dweb camp and people want to decentralize all the internets, but a DNS server on a decentralized network isn't a bad thing. Given a mesh structure, everyone has equal opportunity to start their own server and peers can choose the server they trust the most. The issue on the "real" internet is that the structure/scale of the network creates gatekeepers and prevents entry of the competitors/collaborators that could shift the vision of the internet as this singular monolith that can be "turned off." It would be interesting to see what happens if a few DNS servers popped up at dweb camp. Maybe people will come with ideas of how to mesh the DNS servers. Or build ones that are open to be edited by anyone through an SSB-esque logstream. I'm just throwing ideas out there.

I personally won't be in attendance to run a DNS server, but am happy to provide advice. Not sure what peoplesopen people are up to these days. But some of y'all may want to take a look at BIND, this looks like a good guide, https://www.ostechnix.com/install-and-configure-dns-server-ubuntu-16-04-lts/
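
For the curious, declaring a local zone in BIND is only a few lines in named.conf.local; the zone name and file path here are just placeholders, not anything we've agreed on:

zone "dweb.camp" {
    type master;
    file "/etc/bind/zones/db.dweb.camp";
};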

I advise against a captive portal, they are a time sink and have a negative connotation. A directory listing would be cool, since a DNS server is an auto-magic directory listing. The listing could be hosted on the same node as the DNS server, so if you know the directory listing IP, you know the DNS server IP.

mitra42 commented 5 years ago

So let's use this as an example of what is wrong with Decentralized networking as we practice it. Suggesting that users should have to enter the DNS server by hand into their system is just one example of the kind of practice that minimizes the use of Decentralized / Mesh etc. networks. Of course it should be in the router's config and served up by DHCP.

Yes - by all means look at different ways to mesh the DNS servers and so on, that's great experimentation, but at the end of the experimentation, the user - who just wants to use the network, or focus their attention on something other than DNS - should have a working DNS provided by DHCP.
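
As a sketch of what "provided by DHCP" looks like in practice: if the gateway node runs dnsmasq, it can hand the local DNS server to every client that joins the SSID (10.0.0.1 is just the example address from earlier in the thread):

# dnsmasq.conf on the gateway node
# lease addresses to clients that join the SSID
dhcp-range=10.0.0.50,10.0.0.250,12h
# DHCP option 6: tell every client to use 10.0.0.1 as its DNS server
dhcp-option=6,10.0.0.1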

benhylau commented 5 years ago

It looks like we are converging to something like the following.

I wonder what the publish flow would be for local services to get onto a local DNS server:

  1. Publish to global DNS and let it sync (assuming local DNS server is syncing)
  2. The more fun way is to have local DNS servers serve just local domains that we can just make up and map as we like: yellow.tent -> 10.10.12.34 🤔 can this publish flow be done easily with standard DNS software?
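
On the "standard DNS software" question: with dnsmasq, made-up local names are literally one line each (BIND records for the same thing are shown further down the thread); yellow.tent is just the example from above:

# dnsmasq: answer yellow.tent (and any subdomain of it) with this mesh IP
address=/yellow.tent/10.10.12.34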

We are 90% sure that we'll have the Farm wired up to 100+ Mbps Internet by end of June, so @jonah-archive and I have discussed the idea of unplugging the backhaul for certain hours of each day. So we can advertise a default DNS server via DHCP as @mitra42 suggested, one that is on the Internet (e.g. 8.8.8.8 (Google DNS), 4.2.2.1 (Level 3 DNS), or 208.67.220.220 (OpenDNS)) as @paidforby suggested.

When we unplug the Internet backhaul, all the people on the default DNS would no longer be able to reach Internet services (of course), but even if someone published their local IP via method 1, it won't resolve properly until people manually switch to a local DNS provider.

I think having a scheduled Internet outage is a fun experiment, and structuring our DNS this way allows both for convenience and for exposing our dependency on centralized DNS on the Internet.

Thoughts? 👍 👎

amark commented 5 years ago

@benhylau hey, how can I add GUN to that list? Was there some form/submission that I didn't know about? Default port is 8765.

benhylau commented 5 years ago

Oh it's just an example, but I have added Matrix and GUN live schedule to the example. Let me know if this looks ok. I would also recommend everyone reverse proxy the web interface to port 80, so people don't need to enter a specific port number when they browse to the website. Here is an example with nginx.
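
A minimal sketch of what such an nginx reverse proxy could look like, using GUN's default port 8765 from above (gun.dweb.camp is a made-up local name):

server {
    listen 80;
    server_name gun.dweb.camp;  # placeholder local name

    location / {
        proxy_pass http://127.0.0.1:8765;
        # pass websocket upgrades through, since many p2p web UIs rely on them
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}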

In the actual one, people will arrive at Camp, plug in their devices, find out their IP address, and add that to a "pad" or some static website at the Hacking Room that gets hosted on 10.0.0.1:80.
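
Hosting that pad/static page could be as simple as serving a directory from the 10.0.0.1 box; just an illustration, any web server works (the directory path is made up):

# needs root for port 80, or run on 8080 behind a reverse proxy (Python 3.7+)
python3 -m http.server 80 --bind 10.0.0.1 --directory /srv/dwebcamp-directory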

paidforby commented 5 years ago

@benhylau 👍 Good summary. I'm definitely in favor of an internet outage. Just in general. 😝

> When we unplug the Internet backhaul, all the people on the default DNS would no longer be able to reach Internet services (of course), but even if someone published their local IP via method 1, it won't resolve properly until people manually switch to a local DNS provider.

I can think of two ways of solving this "turning off the internet" scenario. (1) Set "real" internet DNS (e.g. 8.8.8.8) as your primary DNS and set a local DNS (e.g. 10.44.200.2) as your secondary. Most routers/clients support a primary and secondary DNS server, so if they can't find a route to the primary, they fall back to the secondary. (2) Set up DNS caching or forwarding on a local DNS server. That way routers/clients can always have that local DNS set as their primary, and when they ask for a domain on the internet, their request is forwarded by that DNS server.

With either solution, when the internet is turned off, peers will no longer be able to resolve outside domains, but will be able to resolve the domains listed in a local DNS server's records. Both solutions should require no manual modifications by peers.
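
For option 2, the forwarding part in BIND is a small addition to named.conf.options (8.8.8.8 and 208.67.220.220 are just the public resolvers mentioned above):

options {
    // forward anything we aren't authoritative for to a public resolver
    forwarders {
        8.8.8.8;
        208.67.220.220;
    };
    // try the forwarders first, fall back to normal recursion if they don't answer
    forward first;
};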

> I wonder what the publish flow would be for local services to get onto a local DNS server:

I'm curious about this also. Not sure about existing solutions for updating/syncing DNS records. However, if you are using BIND, you just need to update the forward and reverse zone config files with a single line in each. For example, if you want yellowtent.dweb.camp to route to 10.10.12.34, in the forward zone,

yellowtent  IN  A  10.10.12.34

in the reverse zone,

10.10.12.34  IN  PTR  yellowtent.dweb.camp.

It could be fairly easy to automate updating these configs and provide a webpage of some sort for requesting/adding records. Alternatively, you could perform social automation, i.e. teach a few people how to update it, give them SSH access to the DNS server, and then peers who want a routeable domain just have to physically (or virtually) ask one of those folks to update it.

darkdrgn2k commented 5 years ago

I don't know if this will be helpful, but here is the short script I use to update TOMESH hyperborea IP address entries in my BIND9 DNS server from node-list JSON files on GitHub.

I would strongly recommend not editing the db files directly while BIND9 is running; you're really better off using nsupdate.

It's written as console PHP code but can be easily ported to other languages.

The basic idea is to create a file:

server 127.0.0.1
zone meshwithme.online
update add h.E-Mesh-85.meshwithme.online 15 AAAA fcc0:a4af:4174:4bc6:478:2087:ec63:85
send
quit

then run it through nsupdate -y 'meshwithme.online:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx==' -v /tmp/dns.log

Some extra details (i.e. how to create the public/private keys, configure BIND9, etc.) are available here.
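
If you're generating the shared secret yourself, newer BIND versions ship a helper for it; this is only a sketch, and the key name just mirrors what the nsupdate command above expects:

# generate a TSIG key for dynamic updates and append it to the BIND config
tsig-keygen -a hmac-sha256 meshwithme.online >> /etc/bind/named.conf.local
# with a non-default algorithm, nsupdate -y wants it spelled out:
#   nsupdate -y hmac-sha256:meshwithme.online:<secret> -v /tmp/dns.log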

My cron job

<?php
// Fetch the node lists from GitHub and merge them into one array
$allnodes = [];
$res = file_get_contents('https://raw.githubusercontent.com/tomeshnet/node-list/master/nodeList.json');
$allnodes[] = json_decode($res);
$res = file_get_contents('https://raw.githubusercontent.com/darkdrgn2k/node-list/master/nodeList-nomap.json');
$allnodes[] = json_decode($res);
$res = file_get_contents('https://raw.githubusercontent.com/tomeshnet/node-list/master/nodeList-nomap.json');
$allnodes[] = json_decode($res);

$addhosts = "";

// Queue an "update add" line for every node that has both a name and an IPv6 address
foreach ($allnodes as $nodes) {
    foreach ($nodes as $node) {
        if (isset($node->IPV6Address) && isset($node->name)) {
            // strip anything that isn't a valid hostname character
            $name = preg_replace("/[^a-zA-Z0-9\-_]+/", "", $node->name);
            add($name, $node->IPV6Address);
        }
    }
}

// Queue an "update delete" line for every existing h.* record (except ssb) in the zone file
ob_start();
passthru("cat /etc/bind/zones/meshwithme.online.db | grep -F \"h.\" | grep -v ssb | awk '{print \"update delete \"\$1\".meshwithme.online\"}'");
$del = ob_get_clean();

// Assemble the nsupdate batch: delete the old records, then add the fresh ones
$cmd  = "server 127.0.0.1\nzone meshwithme.online\n";
$cmd .= $del;
$cmd .= $addhosts;
$cmd .= "send\nquit\n";
file_put_contents("/tmp/dns.log", $cmd);
exec("nsupdate -y 'meshwithme.online:xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx==' -v /tmp/dns.log");

function add($hostname, $ip) {
    global $addhosts;
    $hostname = "h.$hostname.meshwithme.online";
    // 15-second TTL AAAA record pointing the mesh hostname at the node's IPv6 address
    $addhosts .= "update add " . $hostname . " 15 AAAA $ip\n";
}

darkdrgn2k commented 5 years ago

Possible alternative - I never played with this myself, but some people in tomesh have been poking at PeerDNS as a distributed DNS system:

https://github.com/tomeshnet/prototype-cjdns-pi/tree/peerdns/scripts/PeerDNS

benhylau commented 4 years ago

Maybe next year :)