Closed by adamierymenko 5 years ago
I would suggest WebSocket over HTTPS to communicate with the root servers. In my real-life experience it stays stable even when internal proxies and IDS appliances are added into the mix.
Slack uses WebSockets (with fallback to long-polling COMET over HTTPS, similar to the XMPP originally used by WhatsApp). Facebook Chat and WhatsApp now use MQTT, which is more bandwidth- and battery-friendly.
TCP perhaps, but we're never going to pull in all of SSL+HTTPS+WebSockets. That would add another 1-3 MB of code (larger than the ZeroTier core itself) and make porting to small devices impossible (unless it were completely optional).
There's nothing special about HTTP or web sockets -- it's just TCP as far as edge devices are concerned, especially if it's wrapped in SSL. In the past we've experimented with TCP encapsulation that mimics the appearance of SSL and found that it worked in a wide variety of scenarios, so that might be the route we take if we re-introduce a TCP mode into the core.
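For illustration, here is a minimal sketch of what such SSL-mimicking framing might look like. The header bytes are just the standard TLS application-data record framing; this is a hypothetical example, not ZeroTier's actual wire format.

```cpp
// Hypothetical sketch only: wrap an already-encrypted ZeroTier packet in a
// header shaped like a TLS 1.2 application-data record, so naive DPI on
// port 443 sees something HTTPS-shaped. Not ZeroTier's real framing.
#include <cstdint>
#include <vector>

std::vector<uint8_t> wrapAsFauxTlsRecord(const uint8_t *payload, uint16_t len)
{
    std::vector<uint8_t> rec;
    rec.reserve(5 + (size_t)len);
    rec.push_back(0x17);                  // content type: application data
    rec.push_back(0x03);                  // protocol version major
    rec.push_back(0x03);                  // protocol version minor (TLS 1.2)
    rec.push_back((uint8_t)(len >> 8));   // payload length, big-endian
    rec.push_back((uint8_t)(len & 0xff));
    rec.insert(rec.end(), payload, payload + len);
    return rec;
}
```

The receiver would just read five header bytes, pull the length, and read that many payload bytes off the TCP stream; the payload itself is still end-to-end encrypted by ZeroTier.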
I do think we might do that, since as we work down the long tail of badly behaved edge devices and restrictive networks we do encounter cases where UDP is not reliable. If we do, it will be for root servers and possibly also for regular nodes that are not behind NAT, or that have uPnP/NAT-PMP and can therefore map TCP ports.
The only scenario that presents a problem for pseudo-SSL is networks that issue faux root certificates and MITM all SSL traffic via things like BlueCoat. But those are (a) rare, and (b) likely to be private high-security networks where you'd get fired for using something like ZT without permission anyway. If someone on a network like that wants to use this, they're going to have to get permission from the IT department. In that case the IT department could allow UDP/9993.
For all other networks besides BlueCoat-ed ones, edge devices just see TCP. In that case TCP over 443 will be indistinguishable from HTTPS over 443 and will be handled the same way.
Where I live, many places actually block ports 80 and 443 and impose HTTP proxies via autodiscovery. ZeroTier should be able to use HTTP proxies, but if code size is a problem, use MQTT, which is meant for embedded devices and microprocessors with very limited resources. If raw TCP were so great, the world's biggest companies would use it; in reality they use MQTT where TCP is desirable, for better reliability.
MQTT doesn't use HTTP (though like any TCP protocol it can run over web sockets), and doesn't add anything to TCP's reliability. It's used a lot because it has queuing and message distribution features that are useful in IoT-type applications like smart thermostats, etc., but ZeroTier doesn't need anything MQTT offers. For ZT it would be no different from plain TCP.
Also since MQTT runs over plain TCP, if plain TCP is not allowed then it won't work.
The more I think about it, the true lowest-common-denominator fallback -- and I mean for the oldest or most restrictive environments -- would be plain old-fashioned HTTP with proxy discovery. No web sockets, no extensions, nothing, just what's worked since the late 1990s. That will work even on networks with ancient gear that hasn't been updated in 10 years. (These exist.)
Right now my thought is to allow devices to have HTTP paths but to keep the actual implementation of HTTP out of the ZeroTier core (the stuff in node/), for the same reason that other network I/O is kept out of the core. HTTP support would also be optional and could be left out if desired. If it were left out, that transport just wouldn't work and regular UDP would be required.
Then use whatever standard HTTP facilities are supported by the OS to implement it in OneService, etc., and use it for actual data transport to/from roots alongside UDP. That way, if UDP is unreliable or fails, roots will still always be able to reach endpoints.
Proxies could intercept it and it wouldn't matter. They'd just see funky POSTs and GETs carrying encrypted binary data of type application/octet-stream.
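As a rough sketch of what such a plain-HTTP transport could look like (the hostname and path below are made-up placeholders, not real ZeroTier endpoints):

```cpp
// Hypothetical sketch: each outbound ZeroTier packet becomes an HTTP/1.1 POST
// with an opaque binary body; the response body could carry any packets
// queued for this node. "root.example.com" and "/zt" would be placeholders.
#include <cstdint>
#include <string>
#include <vector>

std::string buildHttpPost(const std::string &host,
                          const std::string &path,
                          const std::vector<uint8_t> &packet)
{
    std::string req;
    req += "POST " + path + " HTTP/1.1\r\n";
    req += "Host: " + host + "\r\n";
    req += "Content-Type: application/octet-stream\r\n";
    req += "Content-Length: " + std::to_string(packet.size()) + "\r\n";
    req += "Connection: keep-alive\r\n\r\n";
    req.append(reinterpret_cast<const char *>(packet.data()), packet.size());
    return req;
}
```

An intercepting proxy can log or relay this freely; the body is already end-to-end encrypted, so inspection gains nothing and breaks nothing.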
Where do you live BTW? Here we generally only see setups like what you describe in corporate IT environments, and in those you'd need IT permission to run ZeroTier at all (and probably to install it).
We do want to engineer this to work everywhere as much as we can, but we're not going to add a lot of complexity or overhead. There's a cost/benefit analysis that we always think about.
Retitled appropriately. Will probably get to this relatively soon since it's not that hard.
@adamierymenko
This would be useful for the ZT use case at klouds; it will make our networks more robust. Also, I just want to say: the whole project is a clear example of network heroism. Thanks!
I live in Indonesia, a country with oppressive censorship that blocks Netflix, Reddit, Vimeo, etc.
+1
Reddit, eh?
Weeps for the internet
Seriously, this has got to stop, but how do we show the governments doing it how destructive it is? I feel for you. I left China because of the GFW...
Is this still missing from ZT? I could use it as well. I have a pair of networks with very unreliable UDP, but TCP is pretty reliable, so I don't even need to use a relay. I just need ZT to use TCP for the tunnel instead of UDP.
Also bumping for this feature. Failover to a slower but more likely to succeed protocol at least allows for remote diagnostics. Right now I just get "online"/"offline" loops.
We are also very interested in this feature. And by the way, if someone could explain to us how to fall back to TCP/443 on our own moons, that would be very nice, because the official ZT TCP fallback root servers are just too slow. The problem is that I couldn't find any pointer in the documentation, and we are unable to make our moons listen on TCP/443.
Also expecting this feature. In my network environments UDP is unstable, which leads to ping losses and transmission losses, but TCP is very robust. I also have moon machines; configurable direct TCP connections to moons would be a useful tool to increase ZeroTier's robustness.
ZeroTierOne is the best network tool I have ever seen, thanks for your amazing work.
Some reference material for implementing low-latency TCP for scenarios where outbound UDP is blocked: http://ithare.com/almost-zero-additional-latency-udp-over-tcp/
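A commonly cited piece of that puzzle is simply disabling Nagle's algorithm so small, datagram-style writes go out immediately. A minimal POSIX sketch, with error handling omitted; this is an illustration, not ZeroTier code:

```cpp
// Make a TCP socket behave a bit more like a datagram pipe: send small
// writes immediately instead of coalescing them. Sketch only; setsockopt()
// return values are ignored here for brevity.
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

void makeTcpLowLatency(int fd)
{
    int one = 1;
    setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));  // disable Nagle
#ifdef TCP_QUICKACK
    setsockopt(fd, IPPROTO_TCP, TCP_QUICKACK, &one, sizeof(one)); // Linux: prompt ACKs
#endif
}
```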
Something I would very much appreciate, and that would let me replace OpenVPN:
I'm stuck with OpenVPN because I can make it listen on TCP/443, which generally isn't blocked unless there's a paywall you want to go through.
Basically, to make it viable it would be awesome if we were able to use a "centralised" configuration when P2P isn't an option, like everyone being able to proxy through a "ZeroTier server" that's listening on U3 and T443 😄
I just wanted to chime in to say that, in my experience, the claim that you "only see [very restrictive] setups like what you describe in corporate IT environments" is no longer true. It may have been more true when it was written, but the technology to do very intrusive 'deep packet inspection' continues to drop in cost, and will eventually make it out to every SOHO and small-biz crapass router in the world. And where a checkbox exists, someone will check it, because if the option's there, it's gotta be more secure, right? Just because a network is doing something doesn't mean that behavior is well-engineered or even intentional, to say nothing of whether it's good practice that should be respected.
Recently I have seen more airports, coffee shops, etc. where ZeroTier just doesn't work. Either they are blocking UDP completely, and something about the TCP fallback isn't working, or they are somehow detecting that the TCP/443 fallback doesn't "look" like a regular HTTPS stream and blocking it too. It's getting to the point where the only places ZT does work are very "clean" home internet connections. Everywhere else it seems to fail more often than it succeeds, and I think this is largely because routers are becoming ridiculously restrictive in the name of faux security.
IMO this is not network behavior that we should respect. It should be routed around like the damage that it is. Most modern web apps (FB Messenger, WhatsApp) have extremely aggressive HTTP-esque fallbacks for this reason: users aren't going to negotiate with the sysop of the shitty coffeeshop WiFi, they're just going to assume the app is broken if it doesn't work when "the Internet" is accessible. So modern networked apps need to work on any connection where you can pull up a web page on HTTPS (at least, while running on TCP/443 without any squirrelly SSL-stripping stuff; I can see the argument for some apps refusing to run in an SSL-stripped / MITMed environment for ecosystem security reasons).
I don't like TCP over UDP for a variety of technical reasons -- UDP is unquestionably the better transport from an engineering perspective, if you're whiteboarding it out. But in the real world, TCP is the more robust transport when you need to sledgehammer your way through crummy network equipment.
I'm guessing you mean you're not a fan of TCP over TCP.
It should be noted, though, that TCP over TCP isn't that bad on good connections either.
What we should be asking for is for the ZeroTier fallback on port 443 to be encapsulated in HTTP/2 or TLS. Yes, it's double encryption, but that's the best way to get past firewalls with packet-level inspection capabilities.
Yeah, "TCP over TCP" feels sorta wrong, but I'll happily take "feels wrong" over "doesn't work" any day. I'd just like to be able to use ZT in more places, so whatever gets the packets through!
Dupe and yes it's on the table/backlog
Some NATs have maximum mapping quotas. What this means is that once you use a certain number of UDP source:dest mappings, they start purging old ones either in FIFO order or sometimes more or less in random order.
I've found two of these in my local "environment", both at coffee shops, and I'll try to find out what they are. They also tend to be either symmetric or port-triggered NATs. Full cone NATs would have no need to do this, since they only need one mapping per internal source port no matter how many destinations it talks to.
Sometimes this is a "feature", but more often it's due to tiny cheapo routers with very little state memory. Why you wouldn't just do simple full cone on an itty bitty box with no RAM is beyond me, but someone probably told someone it wasn't secure. Security cargo cultism strikes again. :facepunch:
I've found three behaviors in the field: FIFO, LIFO, and random. LIFO is the least problematic since it basically means you can't make new direct links but existing ones keep working. FIFO and random play havoc by causing old links including to the roots and relays to be invalidated.
The stupid solution: ping the roots more often. Yuck.
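To make that concrete, "ping the roots more often" just means shrinking the keepalive interval to stay under whatever purge behavior the NAT exhibits. A toy sketch with invented numbers, not ZeroTier's actual timers:

```cpp
// Toy illustration only: pick a keepalive interval short enough to refresh
// the NAT mapping to the roots before a quota-limited box purges it.
#include <algorithm>
#include <cstdint>

uint64_t pickKeepaliveMs(uint64_t observedMappingLifetimeMs)
{
    const uint64_t normalMs = 60000; // fine behind a well-behaved full cone NAT
    const uint64_t floorMs  = 5000;  // don't spam keepalives faster than this
    // Refresh at roughly half the observed mapping lifetime, within bounds.
    return std::max(floorMs, std::min(normalMs, observedMappingLifetimeMs / 2));
}
```

The cost is obvious: every idle device behind a quota-limited NAT now generates several times the background chatter, which is exactly why this is the "stupid" solution.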
I'm starting to wonder if this needs a more robust solution: bring TCP back into the core.
Right now the core doesn't do TCP. TCP fallback is handled by the code in service/ and goes via the little TCP->UDP proxy that we run at various spots around the world. It works for fallback, but I wonder if TCP has other value. I wonder if we should primarily speak TCP to the roots.
For that reason we can assume that TCP is marginally more likely to be reliable from endpoints than UDP, at least at the end of the long tail in the realm of crap NAT boxes, weird configurations, and badly implemented carrier grade NAT.
Here's the proposal:
In practice this would only be used for roots, designated relays, and possibly network controllers. In other words: infrastructure.
TCP is intrinsically slower for network virtualization due to the double-ack problem, but keep in mind that we only talk frequently to these types of hosts if we can't make direct connections. If we are on a non-braindead network we will make direct UDP links and use those and everything will be rosy.