woriss opened 2 months ago
At the moment it will not aggregate bandwidth, only provide address redundancy, but the current implementation was designed to make bandwidth aggregation easy to add later. It's one of the things I have planned for after I split the implementation into a daemon with an API that the CLI talks to, so you can dynamically reconfigure the interface (e.g., for roaming support).
As for multi-link configuration examples, all you need to do is add more addresses to `recv_addrs` and `peers[i].local_addrs`. You can also add new remote addresses, but it's not necessary.
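As a sketch, a two-link setup might look like the following. Only `recv_addrs` and `peers[i].local_addrs` (and the idea of optional extra remote addresses) come from the comment above; the TOML layout, the `remote_addrs` field name, and all addresses are assumptions, not Centipede's exact schema:

```toml
# Hypothetical config layout -- field names beyond recv_addrs and
# local_addrs are illustrative guesses, and the addresses are examples.
recv_addrs = ["192.0.2.10:5000", "198.51.100.7:5000"]  # one per local link

[[peers]]
# Local addresses to reach this peer from, one per link you want used.
local_addrs = ["192.0.2.10:5001", "198.51.100.7:5001"]
# Adding more remote addresses is possible but not necessary.
remote_addrs = ["203.0.113.4:5000"]
```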
If we could achieve bandwidth aggregation across multiple links while seamlessly handling NAT traversal, that would be fantastic. Glorytun might offer some insights in this regard. Additionally, Multipath QUIC is an emerging technology. I suggest considering data-stream balancing at the protocol level, leveraging MPTCP/Multipath QUIC, while Centipede focuses on tunneling.
https://multipath-quic.org/
https://github.com/angt/glorytun
I hadn't been thinking of it this way, but the current implementation actually already contains a generic transport protocol within Centipede. Right now, Centipede doesn't inspect the packets it tunnels at all; it just shuttles them between TUN devices on different machines using that generic transport protocol (although this will change very soon so that you can effectively have networks of more than two peers).
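The "shuttle without inspecting" idea can be sketched roughly like this. This is not Centipede's code: the function name, the use of Python, and the plain UDP socket standing in for the generic transport protocol are all assumptions for illustration.

```python
# Sketch (assumed names, not Centipede's actual implementation): copy raw
# packets between a local TUN file descriptor and a UDP socket to the peer,
# treating the tunneled packets as opaque bytes throughout.
import os
import select
import socket

def shuttle_once(tun_fd, sock, peer_addr, mtu=1500):
    """Move any ready packet one hop without parsing it:
    local TUN -> UDP peer, or UDP peer -> local TUN."""
    readable, _, _ = select.select([tun_fd, sock], [], [], 1.0)
    for r in readable:
        if r is sock:
            pkt, _ = sock.recvfrom(mtu)   # packet arriving from the peer
            os.write(tun_fd, pkt)         # hand it to the local TUN device
        else:
            pkt = os.read(tun_fd, mtu)    # packet from the local TUN device
            sock.sendto(pkt, peer_addr)   # forward to the peer, unmodified
```

A real daemon would run this in a loop and, for multipath, keep one socket per link instead of a single `sock`.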
I think that, as you say, it would probably be a good idea to cleanly separate the transport and tunneling parts of the code. I wouldn't be opposed to using an existing multipathing transport protocol, but I'm skeptical of using MPQUIC because (unless I'm misunderstanding) its stream-based nature would introduce unnecessary head-of-line blocking in tunneled connections.
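To make the head-of-line-blocking concern concrete, here's a toy timing model (not Centipede or QUIC code; the RTT, arrival times, and loss pattern are all assumed): if tunneled packets ride a single in-order stream, one retransmitted packet delays every packet queued behind it, whereas independent datagrams are usable the moment they arrive.

```python
# Toy model: packet 0 is "lost" and arrives only after an assumed
# retransmission delay of one RTT; packets 1..4 arrive promptly.
RTT = 100  # ms, assumed retransmission delay
arrivals = {0: 0 + RTT, 1: 1, 2: 2, 3: 3, 4: 4}  # packet -> arrival (ms)

# Independent datagrams: each packet is deliverable when it arrives.
datagram_delivery = dict(arrivals)

# Single in-order stream: a packet is deliverable only once every
# earlier packet has arrived, so packet 0's retransmit stalls the rest.
stream_delivery = {}
ready = 0
for pkt in sorted(arrivals):
    ready = max(ready, arrivals[pkt])
    stream_delivery[pkt] = ready

print(datagram_delivery)  # {0: 100, 1: 1, 2: 2, 3: 3, 4: 4}
print(stream_delivery)    # {0: 100, 1: 100, 2: 100, 3: 100, 4: 100}
```

(QUIC avoids this *across* streams, but packets multiplexed onto one stream still share its ordering, which is the scenario sketched here.)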
Does this project support aggregating bandwidth across multiple links, improving both stability and throughput? Are there examples of multi-link configurations available?