I'd like to clarify our priorities here.
WebRTC provides us with hole punching, most of the time allowing a GUI to reach a MarketMaker server even if it's behind a NAT or a mild firewall. (And the optional TURN servers can route the data for us if the hole punching fails.)
Is this why we want WebRTC? To simplify MarketMaker deployment behind NATs and mild firewalls?
WebRTC also provides us with a way to switch communication from one carrier to another. That is, we might be running the GUI on a mobile phone and enter a Wi-Fi zone. Then we might leave that zone and switch to 3G. Then we might enter another Wi-Fi zone. WebRTC was designed to carry high-demand video communication across such network changes.
Is this why we want WebRTC? To improve the rate at which we recover from switching the carrier?
WebRTC has a cost in that it's much more complex a protocol than, for example, REST. Are we committed to pay that cost? Or is it more like let's give it a go?
WebRTC is not a full peer-to-peer layer, it doesn't have anything like peer discovery or DHT. We must have a way for GUI and MarketMaker to discover each other and communicate.
You see, WebRTC was designed to improve the quality of real-time streams, like the high-demand audio and video streams. Yes, there are data channels, but the peers still must have their own way of communicating in order to negotiate and re-negotiate the connection. Peers can't use a data channel until they have communicated on their own and used ICE to negotiate a WebRTC connection. If working from behind a NAT is our goal, then the GUI can't simply talk to the MarketMaker server to do that negotiation. Any thoughts on implementing that ICE communication?
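For illustration, here is a minimal browser-side sketch (TypeScript) of why that is: the data channel can't open until the ICE offer/answer has travelled over some channel of our own. `sendViaSignaling` and `waitForAnswer` are hypothetical stand-ins for that out-of-band channel, whatever we end up choosing.

```ts
// Hypothetical out-of-band signaling channel (HTTP, QR code, DHT, ...).
declare function sendViaSignaling(msg: string): void;      // assumption
declare function waitForAnswer(): Promise<string>;         // assumption

async function openDataChannel(): Promise<RTCDataChannel> {
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
  });
  const channel = pc.createDataChannel("mm2");

  // Create the offer and start gathering our endpoints (ICE candidates).
  await pc.setLocalDescription(await pc.createOffer());

  // This step is outside of WebRTC: the offer must reach the other peer
  // somehow, and its answer must come back the same way. (Trickled ICE
  // candidates are omitted here for brevity.)
  sendViaSignaling(JSON.stringify(pc.localDescription));
  await pc.setRemoteDescription(JSON.parse(await waitForAnswer()));

  // Only after that out-of-band negotiation can the channel open.
  return new Promise((resolve) => (channel.onopen = () => resolve(channel)));
}
```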
Hi @ArtemGr, thanks for your comment. The issue was created at the HyperDEX team's request. It's more of a "research it later" item, for when the basic functionality of MM2.0 is implemented. Our main task is to duplicate the existing functionality using Rust and make it easy to maintain and to add features like this one (of course, only if the feature is required and we'll benefit from it).
@artemii235 , appreciate your input.
I think we might benefit from clarifying our goals before the rewrite, as it might affect how we approach the rewrite and what design decisions we take on the way (cf. https://github.com/jl777/SuperNET/pull/926).
That is, we can research the technology later, as you say, but my question is not so much about the technology but about the goals it represents. The earlier we start to work on the goals the better, as it gives us more time to find a better solution. On the other hand, if the goals are not defined then we risk wandering aimlessly and missing good solutions.
Also, I have a working WebRTC implementation in one of my private research projects, so I have already done a bit of research on WebRTC. In my book we may have already passed the WebRTC research stage and can discuss how it fits the MarketMaker rewrite.
Hi guys, I think it is a great idea to clarify the goals for MM2 first, in some trackable and linkable way.
By the way, I am the new guy who took on the task of starting to reimplement MM2 in Rust. I've spent about 2 effective weeks on research so far. I considered WebRTC data channels and ZeroTier. The latter is already added as an nng transport and supports hole punching. I'm not yet sure about their carrier-switching functionality -- reasonably it should be there -- but I anticipate that adding it to ZeroTier is much less work than using WebRTC data channels through the Rust FFI.
From my point of view:
Is this why we want WebRTC? To simplify MarketMaker deployment behind NATs and mild firewalls?
- Yes. When the trading process is initiated, the selling node should have a direct network endpoint: the buyer connects to it to exchange atomic swap data. It would be nice if the selling node could be behind a NAT. Please note that the selling node is running in command-line mode, not in an Electrum/browser environment.
Is this why we want WebRTC? To improve the rate at which we recover from switching the carrier?
- Yes, we want to have a mobile version of the trading GUI, and this requirement is very important. As far as I know, there is a 100% swap failure rate when the connection between the trading nodes is lost.
WebRTC has a cost in that it's much more complex a protocol than, for example, REST. Are we committed to pay that cost? Or is it more like let's give it a go?
- REST (JSON-RPC) would still be required, as the GUI runs the marketmaker application separately and we would need to keep backward compatibility at least for some time. However, REST is required to interact with the locally running application. For P2P interaction (especially while an atomic swap is running) we will need a stable connection, and it's OK to use a more complex protocol in that case.
I am the new guy who took on the task to start reimplementing MM2 in Rust
Wow! @latsa, nice to meet you! :D
and ZeroTier. The latter is already added as an nng transport and supports hole punching
At first glance this looks much better than WebRTC!
@artemii235 , much appreciated!
The one and only reason I recommended WebRTC was for web browser compatibility. Having the entire P2P network communicating over WebRTC means native web clients are possible.
If WebRTC is going to be a major disadvantage over other P2P libraries then it's fine to drop it. I just know a lot of people have asked about a HyperDEX web app and WebRTC is the only way to make this possible in a true P2P way without relying on a server<->client model or having centralised "bridge" nodes.
and WebRTC is the only way to make this possible in a true P2P way without relying on a server<->client.
I wish it would, but it's the other way around. WebRTC doesn't help you with decentralization in the slightest.
The way it typically works is that you have two peers, peer A and peer B. With the help of a STUN server, peer A gathers its endpoints (IP addresses and ports). Likewise, peer B gathers its endpoints (IP addresses and ports). Having gathered the endpoints, you have to somehow exchange them, i.e. you have to show peer B the endpoints of peer A and vice versa. Only then can the peer-to-peer connection be initiated. But how you pass the endpoints from peer A to peer B and back is not a concern of the WebRTC spec. Typically you'd use a normal central server to implement that endpoint exchange.
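Here's a small sketch of that gathering step (TypeScript, browser WebRTC API). The STUN server only tells a peer its own public endpoints; delivering them to the other peer is left entirely to us.

```ts
// Gather this peer's ICE candidates via STUN. What happens to them next
// (the exchange with the other peer) is outside the WebRTC spec.
async function gatherEndpoints(): Promise<string[]> {
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
  });
  pc.createDataChannel("probe"); // needed so that ICE gathering starts

  const endpoints: string[] = [];
  const done = new Promise<void>((resolve) => {
    pc.onicecandidate = (ev) =>
      ev.candidate ? endpoints.push(ev.candidate.candidate) : resolve();
  });

  await pc.setLocalDescription(await pc.createOffer());
  await done;
  // Host, server-reflexive (STUN) and relayed (TURN) addresses end up
  // here; shipping them to the other peer is our own problem.
  return endpoints;
}
```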
The one and only reason I recommended WebRTC was for web browser compatibility. Having the entire P2P network communicating over WebRTC means native web clients are possible.
Good to know. I wonder if one of the existing Komodo peer to peer networks could be reused to implement the discovery and endpoint exchange required for the WebRTC to work.
Ahh, it's been quite a while since I've played with WebRTC; I totally forgot about the STUN bootstrapping.
The point still stands though, as far as I'm aware, WebRTC is the only API available to the browser to allow it to make direct P2P communication. Maybe we could bypass the STUN server and build in our own DHT with some hardcoded bootstrap nodes for discovery? Then all communication could be done via WebRTC.
I wonder if one of the existing Komodo peer to peer networks could be reused to implement the discovery and endpoint exchange required for the WebRTC to work.
That would also be a great solution.
Just to clarify, native web support is not a priority; getting mm stable with a rich API is.
If we need to use non-browser-compatible libs to achieve that, then that's OK.
We can always implement a web client as a thin client that connects to a user's trusted server, or alternatively tunnels directly through to the network via some WebSocket -> marketmaker bridge servers.
WebRTC support was just something I mentioned in Slack that would probably be required for a native web client were we to ever build one.
Thanks for expanding on this, @lukechilds !
The way I see it, we'd like to have a native web client some day, and knowing this we can skip a couple of unnecessary future rewrites by starting with WebRTC early.
We don't have to implement full-fledged decentralized discovery at first. Instead, the client might use the normal XMLHttpRequest (or the existing nanomsg) to exchange the endpoint information with the MM server. The rest of the communication can gradually be ported to WebRTC data channels.
Once everything except the endpoint exchange works over the WebRTC data channels, we can:
a) When the MarketMaker is behind NAT and not directly accessible from the GUI, let the users perform the endpoint exchange manually, copy-pasting the list of endpoints from the MarketMaker to the GUI and back by whatever means they have, including QR codes.
b) Work on an automatic decentralized discovery network.
c) And/or implement adapters that would perform the endpoint exchange over existing decentralized networks (email being one of them =).
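As a rough illustration of the first step, the GUI could reuse its ordinary HTTP channel to the MarketMaker as the signaling path. This uses fetch for brevity instead of raw XMLHttpRequest, and the "webrtc_signal" method name and payload shape are assumptions, not an existing MM API.

```ts
// Hypothetical HTTP-based endpoint exchange: the GUI posts its WebRTC
// offer to the MarketMaker over the existing HTTP channel and receives
// the MarketMaker's answer back; afterwards the rest of the traffic can
// move to data channels.
async function exchangeWithMarketMaker(
  mmUrl: string,
  localOffer: RTCSessionDescriptionInit,
): Promise<RTCSessionDescriptionInit> {
  const resp = await fetch(mmUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ method: "webrtc_signal", offer: localOffer }), // assumed method name
  });
  const { answer } = await resp.json();
  return answer;
}
```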
Yeah this seems like a good plan forwards.
FYI, in HyperDEX we already have a local thin client working that communicates with a remote marketmaker node. It's a bit rough around the edges, but it works.
We've discussed this in more depth with Artem, and I'd like to air some of our thoughts.
There seem to be two vectors where the MM networking stack can be improved: 1) communication between GUI and MM, and 2) communication between MM and MM.
It is important to distinguish between these because there seems to be no "silver bullet" technology, we have to accept some trade-offs, and different priorities apply to vectors (1) and (2).
For (1), GUI-MM, the priority, as far as we know it, is currently the support of parallel queries. E.g. for a typical HyperDEX deployment I think we're happy to talk with MM via the HTTP stack, the only problem is that it's currently impossible to, say, have a slow running "portfolio" query and simultaneously get quick answers to other queries.
For (2), MM-MM, the priority seems to be NAT traversal. The nature of the IPv4 address space makes it very likely that MM users will be behind NAT at one point or another, especially with HyperDEX being marketed as a download-install-and-run app rather than an internet server app. People will be running HyperDEX and the bundled MM on their normal consumer devices (notebooks, tablets) behind their normal Wi-Fi NATs. This means that some of the swaps will fail because the person in question cannot be reached from the other side. A huge portion of the future user base (where hopefully both the sellers and the buyers will be using HyperDEX and trading coins from their PCs) is thus affected and hampered in performing the basic functionality of the exchange.
Separation between the two vectors allows us to handle (1) with relative ease and broadens the range of solutions we can consider for (2).
@lukechilds , please check this out, are we on the same page with the GUI team?
For (1), GUI-MM, the priority, as far as we know it, is currently the support of parallel queries. E.g. for a typical HyperDEX deployment I think we're happy to talk with MM via the HTTP stack, the only problem is that it's currently impossible to, say, have a slow running "portfolio" query and simultaneously get quick answers to other queries.
Correct. HTTP is more than fast enough locally as long as MM can quickly respond to concurrent requests. Ideally, the portfolio command would not be slow either, thanks to improved caching.
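To illustrate the requirement, something like the following should just work from the GUI, with the quick query answered while the slow portfolio query is still in flight. The port and method names follow MM's JSON-over-HTTP style but are illustrative assumptions, not the exact API.

```ts
// Minimal JSON-over-HTTP helper; URL and method names are illustrative.
async function rpc(method: string, params: Record<string, unknown> = {}) {
  const resp = await fetch("http://127.0.0.1:7783", {
    method: "POST",
    body: JSON.stringify({ method, ...params }),
  });
  return resp.json();
}

async function demo() {
  // Both requests go out at once; the quick one should not have to wait
  // for the slow portfolio query to finish.
  const [portfolio, balance] = await Promise.all([
    rpc("portfolio"),
    rpc("balance", { coin: "KMD" }),
  ]);
  console.log(portfolio, balance);
}
```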
Thanks, @sindresorhus, we'll be steering the porting effort in that direction.
Incorporating the libtorrent DHT into MM2 covers the ICE part. That is, we can use the DHT to exchange the IPs and ports of the peers, then pass this information to WebRTC and establish a NAT-traversing connection between Alice and Bob with it.
Whether we'll go that way depends largely on the quality of the WebRTC libraries later on, when we'll be evaluating the direct communication paths. But it's interesting to note that with the DHT we're one step closer to using WebRTC between the MM2 nodes, and that with the ICE problem solved, WebRTC is once again an option.
Specifically, WebRTC data channels.
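A rough sketch of that flow, where `dhtPut` / `dhtGet` are hypothetical wrappers over whatever DHT interface MM2 ends up exposing; only the shape of the exchange matters here.

```ts
// Hypothetical DHT put/get, e.g. backed by the libtorrent DHT.
declare function dhtPut(key: string, value: string): Promise<void>; // assumption
declare function dhtGet(key: string): Promise<string>;              // assumption

// Both sides publish their session description (which carries the ICE
// endpoints) under a key derived from the swap, then read the other
// side's. Trickled candidates and retries are omitted for brevity.
async function connectViaDht(swapId: string, isAlice: boolean) {
  const pc = new RTCPeerConnection();
  const channel = isAlice ? pc.createDataChannel("swap") : undefined;
  // Bob receives the channel via pc.ondatachannel once connected.

  if (isAlice) {
    await pc.setLocalDescription(await pc.createOffer());
    await dhtPut(`${swapId}/alice`, JSON.stringify(pc.localDescription));
    await pc.setRemoteDescription(JSON.parse(await dhtGet(`${swapId}/bob`)));
  } else {
    await pc.setRemoteDescription(JSON.parse(await dhtGet(`${swapId}/alice`)));
    await pc.setLocalDescription(await pc.createAnswer());
    await dhtPut(`${swapId}/bob`, JSON.stringify(pc.localDescription));
  }
  return { pc, channel };
}
```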