Logan007 closed this issue 3 years ago
For users, a relatively stable mainline version is needed; because progressive versions change all the time, users will not accept such incompatible changes.
As far as I know, the "master" branch seems to have been unmanaged for a long time. We might as well have "master" replace the current "dev" branch, let the current "dev" branch become a bold and innovative development branch, and set "master" as the default branch of the repository.
The "master" branch carries out conservative iterations, while the "dev" branch carries out bold functional innovations.
If we open a "dev" branch to develop new features on it, then over the course of development the "dev" branch will drift farther and farther from the "master" branch. Once the amount of changed code is large, merging will become very difficult.
Therefore, whenever the "master" branch receives lightweight new features or bug fixes, the "dev" branch must make the same changes. When major versions are released in the future, the "dev" branch will be fully merged into the "master" branch.
@fengdaolong I like that idea. Once we have reached 3.0, we could use dev for "cutting edge and disruptive ideas".
I would like the edge to have the capacity to run my own script (modifying something about the connection) at the very last stage when we start edge.
run my own script (modify something about the connection) at the very last stage
Do you want to achieve something similar to what we discussed here earlier?
At what specific point do you want to call it? Before dropping elevated privileges? Before serving the virtual network device?
The basic idea would be easy to implement (a call to system() plus another command line parameter and error handling). But, to be honest, I am not sure of the purpose. @tlsalex, please provide some more details of your idea to convince me!
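The "call to system() plus another command line parameter" idea could be sketched like this. This is purely illustrative: run_hook() and its wiring are assumptions, not existing n2n code.

```c
#include <stdio.h>
#include <stdlib.h>

/* hypothetical sketch only: run a user-supplied hook script at a fixed
 * point of edge startup. run_hook() and the option wiring are
 * assumptions, not existing n2n code. */
static int run_hook(const char *script, const char *dev_name) {
    char cmd[512];

    if((script == NULL) || (script[0] == '\0'))
        return 0;                        /* no hook configured, nothing to do */

    snprintf(cmd, sizeof(cmd), "%s %s", script, dev_name);
    int rc = system(cmd);                /* simple blocking call, as suggested */
    if(rc != 0)
        fprintf(stderr, "hook '%s' returned %d\n", cmd, rc);
    return rc;
}
```

The error handling here is the minimal part Logan mentions; a real patch would also have to decide whether this runs before or after dropping elevated privileges.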
Ah, please ignore my first comment; I hope n2n gets more advanced features.
Let's say we use port 80 (not blocked by most firewalls) on the supernode. I hope the supernode can listen on tcp/80 and udp/80 at the same time. If udp/80 is not reachable because of a strict network environment, the edge can still reach tcp/80 and use the supernode to relay traffic. That is: use UDP as the preferred method to build the P2P connection, and use TCP as a fallback to relay traffic.
udp2raw or udp2tcp can serve as fallbacks from UDP to TCP, but I hope n2n does that by itself.
The idea is from OneTier .
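Listening on tcp/PORT and udp/PORT at the same time is possible because the two protocols have separate port namespaces. A minimal sketch of what the supernode side could do (illustrative only, not existing n2n code):

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>

/* hypothetical sketch, not existing n2n code: a supernode could open both
 * udp/PORT and tcp/PORT. UDP and TCP have separate port namespaces, so
 * binding both sockets to the same port number is perfectly legal. */
static int open_both(uint16_t port, int *udp_fd, int *tcp_fd) {
    struct sockaddr_in a;

    memset(&a, 0, sizeof(a));
    a.sin_family = AF_INET;
    a.sin_addr.s_addr = htonl(INADDR_ANY);
    a.sin_port = htons(port);

    *udp_fd = socket(AF_INET, SOCK_DGRAM, 0);
    *tcp_fd = socket(AF_INET, SOCK_STREAM, 0);
    if((*udp_fd < 0) || (*tcp_fd < 0)) return -1;

    if(bind(*udp_fd, (struct sockaddr*)&a, sizeof(a)) < 0) return -1;
    if(bind(*tcp_fd, (struct sockaddr*)&a, sizeof(a)) < 0) return -1;
    return listen(*tcp_fd, 8);   /* accept fallback TCP connections here */
}
```

The hard part is not the sockets but the connection handling behind them, as discussed further down in this thread.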
@tlsalex As both of us probably remember very well, this feature has often been discussed. I would gladly add it to the list! Unfortunately, so far, no one has shown up and volunteered to implement it.
I will only add features to the list if I am confident that at least one of the people around here is capable of implementing them. Maybe a bare metal network expert could do it…? Let me know!
@Logan007 Please add it to the list, hope some network experts will join this project to implement it, I have time to wait for that.
I have time to wait for that
Keep in mind that this is the list specific to the planned 3.0 release, which shall not be delayed due to this feature's implementation. Therefore, I have added it as an optional item.
Two suggestions for the 3.0 release:
(WINDOWS) Update documentation for the -d <device name/adapter id> command line option. On Linux you currently use it to rename the adapter, as the doc suggests, but on Windows you use it to choose a specific adapter by its device ID or name. You have this functionality in the choose_adapter_callback() function in win32/wintap.c. Documentation of the -d option should reflect this.

Why is choosing a specific adapter useful? Because there might be multiple TAP adapters installed in a system, and using a specific adapter allows another program to run netsh and nvspbind commands to control the right adapter.
Specifically, in my case I want to disable the Microsoft IPv6 stack and VirtualBox NDIS6 Bridged Networking Driver bindings to reduce the possibility of complications (the VirtualBox binding is notorious for causing problems; see a Google search for "virtualbox ndis6 openvpn").

I also set the network interface metric to 1 to make the VPN the first route for 255.255.255.255 IPv4 broadcasts. My use case is LAN retro gaming over the Internet, and Windows routing is made such that a 255.255.255.255 broadcast is only sent to the interface with the lowest metric in the routing table, unlike other OSes like Linux, which send the broadcast to all available adapters (as it should have been done in Windows too, but isn't).
Implementation of UPnP / NAT-PMP support (#147). This will greatly increase the chances of a successful (and stable) P2P connection in a typical home environment, as one or both of the technologies are usually available and enabled by default in most home routers. This should be behind a build flag though and probably separated in a different file, to keep n2n as lightweight as possible and to not add unnecessary clutter for people who do not need this functionality.
The application of this is rather trivial using the readily available (and lightweight) miniupnpc and libnatpmp libraries. I have done this myself for a project of mine and could implement it and do a PR, but I'm currently too deprived of time. If no one else wants to do it, I'll probably do it later, but not in time for the 3.0 release.
edit: A suggestion for this repository would be to enable the new GitHub Discussions feature, to keep the Issues tab from being cluttered with questions and non-technical issues and to have a separate tab for them.
@anzz1, thank you for your suggestions.
You are so right, documentation should point out platform specifics. That will definitely be an action item on our list.
upnp has been brought up over and over again for a long time. So far, no one has been able to implement it. In case you want to volunteer, that is great news – I already can hear people cheer!
Take your time, 3.0 will probably not happen too soon. For that release, apart from documentation, we mainly focus on protocol-changing and compatibility-breaking items – we want those to be finished by the 3.0 release to keep all the future 3.x versions compatible. upnp would make a wonderful addition that definitely would not break the underlying protocol (packet format, ...) as it only affects the way an edge interacts with its local environment. So, that could also easily go into 3.2.
We probably will discuss enabling the discussions feature... let's see how it turns out. It's still beta, isn't it?
Greetings!
1) Just wanted to say that yes I would really love for the metric to be set to one by default when it comes to using windows edge machines, especially for those like me who are indeed into retro LAN gaming.
2) Having the wonderful DHCP server that is on the dev branch included with the stable 3.0 release. This would be a godsend when creating supernodes and wanting to automate the process of handing out IP addresses.
@ndo360 thank you for your feedback.
I am glad to hear that the auto IP address feature is helpful to you. Thanks to @fengdaolong who implemented it! There are no plans to remove the auto IP address feature from dev, so it most probably will make it into 3.0. Just be aware that it only works 100 % reliably with not more than one supernode due to the current nature of the feature. This does not mean it could not work at all with more than one supernode – it could work with more federated supernodes as long as there are no collisions of the hashes of the edges' descriptions (-I-provided, or the local hostname otherwise) which are used as the base to assign an IP address.
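The description-hash scheme can be sketched like this. The hash function (FNV-1a) and the 10.128.0.0/16 range are illustrative assumptions; n2n's actual hash and network differ. The sketch also shows why it works without coordination, and why two colliding descriptions break it:

```c
#include <stdint.h>

/* hypothetical sketch of hash-based auto IP assignment: derive the host
 * part from the edge's description string. FNV-1a and the 10.128.0.0/16
 * network below are illustrative only; n2n's actual scheme differs. */
static uint32_t fnv1a(const char *s) {
    uint32_t h = 2166136261u;
    while(*s) {
        h ^= (uint8_t)*s++;
        h *= 16777619u;
    }
    return h;
}

static uint32_t auto_ip(const char *desc) {
    /* 1 .. 65534 skips the /16's network and broadcast addresses */
    uint32_t host = fnv1a(desc) % 0xFFFEu + 1;
    return (10u << 24) | (128u << 16) | host;
}
```

Because every supernode computes the same hash from the same description, no state needs to be shared; but two edges whose descriptions hash to the same host value would silently receive the same IP address, which is the collision limitation mentioned above.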
The metric idea as also suggested by @anzz1 sounds interesting. I will put it in the optional category because we would need to find someone who is able to code it (Windows specific), probably with a new command-line parameter. Any volunteers?
metric for win32 @ PR #606
You do not want to set the metric to 1 by default as it messes with the default broadcast route, but you do want an option for it. metric=0, which means automatic metric selection, should be the default.
Love the phenomenal work done so far! I wanted to verify if 3.0 will release with windows binaries, as well as the DHCP server implementation that I noticed on the current dev build.
Keep up the good work! 😄
if 3.0 will release with windows binaries
I am not sure about that. But from my own recent experience of my very first own Windows compilation run, just a few days ago, I am able to report that it is not too hard to get a working binary on Windows. Have you seen our recently updated building documentation?
Apart from that, @lucktu might decide to publish binaries.
as well as the DHCP server implementation
Please see my answer above on the Auto IP Address Feature; I do not call it DHCP as our feature only assigns IP addresses whereas DHCP can do much more.
Thanks for the clarification and response! I'll keep my eyes peeled in anticipation for the 3.0 release. 😄
Recently, when I used n2n to transfer files between various locations, I accidentally discovered that some ISPs limit the upload speed of a single connection. When going through the supernode, the total speed of multiple simultaneous file transfers is only 300 KB/s. I tried the same without n2n; the same is true for a single sftp file transfer. But when I use sftp directly without n2n, the total speed of downloading multiple files at the same time can reach 70 Mbps! I am sure that a direct connection is established between the n2n supernodes. So could n2n provide multi-threaded connections in future versions to increase the transmission speed?
I accidentally discovered that some ISPs limit the upload speed of a single link.
@pokebox, thank you very much for sharing your interesting observation!
I tried the same without n2n; the same is true for a single sftp file transfer.
Do I get it correctly that without n2n, single sftp transfer is slow as well?
So can n2n provide multi-threaded connections in future versions to increase the transmission speed?
Multi-threading might be a bit hard to implement; this has been discussed at some earlier point. The current n2n data structures, which get accessed a lot, would need to be guarded by mutexes – for each and every access. Also, I think that the edges' desktop CPUs are not n2n's bottleneck.
But we could think about multi-link support; not necessarily requiring multi-threading, just serving the different links round-robin. That might counter this issue much better. But I just am not able to envision the technical details yet, e.g. how to handle different ports for each partnering peer while making sure they belong together and do not indicate a NAT-induced port change...
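The round-robin idea without threads could look roughly like this. The structure and names are illustrative only, not existing n2n code, and this deliberately ignores the hard part Logan mentions (telling apart an extra link from a NAT-induced port change):

```c
#include <stddef.h>

/* hypothetical sketch of single-threaded multi-link serving: keep several
 * sockets per peer and rotate over them for outbound packets. Names and
 * layout are illustrative only, not existing n2n code. */
typedef struct {
    int fds[4];   /* up to four parallel links to one peer */
    int n;        /* number of usable links                */
    int next;     /* index of the next link to transmit on */
} multilink_t;

static int multilink_pick(multilink_t *m) {
    int fd = m->fds[m->next];
    m->next = (m->next + 1) % m->n;   /* round-robin over the links */
    return fd;
}
```

Since the rotation happens per outbound packet in the existing event loop, no mutexes are needed – which is exactly the appeal of multi-link over multi-threading here.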
It is a very interesting idea whose realization brings its own challenges due to which I do not think that anyone will be able to bring it to life in time for 3.0. Maybe an optional extension of some later 3.x version?
As I would like to encourage everybody to think about how we can add this feature, @pokebox, may I ask you to please open a new plain issue with your observation and proposal, just the same text as in your post?
Do I get it correctly that without n2n, single sftp transfer is slow as well?
Yes, single sftp is also very slow without using n2n.
But we could think about multi-link support; not necessarily requiring multi-threading
Multiple links may also be a good idea.
Regarding this item:
prepare inter-supernode communication
Does this mean the supernode can listen on multiple ports at the same time?
prepare inter-supernode communication – does this mean the supernode can listen on multiple ports at the same time?
Unfortunately not. It just means we want supernodes to be able to communicate among each other, maybe using their own message format, some kind of "federation talk", e.g. to exchange information related to certain features. As there is no application yet for such a format, we might well postpone it until we see such a use-case.
Listening on multiple ports is still an open feature request. As I have learned in the course of the TCP implementation, there are a lot of things to consider for proper socket handling. My best guess is that further additions to socket handling (more ports, multi-connection, ...) would require a completely new socket-and-connection abstraction layer in n2n to keep it somewhat clean and maintainable.
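Such an abstraction layer could, for instance, be a small vtable per transport, so the rest of the code never touches raw file descriptors. This is a sketch under that assumption (names are illustrative, and the toy in-memory transport stands in for real UDP/TCP backends):

```c
#include <stddef.h>
#include <string.h>
#include <sys/types.h>

/* hypothetical sketch of a socket-and-connection abstraction layer: one
 * vtable per transport (UDP, TCP, ...). Names are illustrative only. */
typedef struct n2n_transport {
    const char *name;
    ssize_t (*send)(struct n2n_transport *t, const void *buf, size_t len);
    ssize_t (*recv)(struct n2n_transport *t, void *buf, size_t len);
} n2n_transport_t;

/* toy in-memory transport, standing in for a real UDP or TCP backend */
static char   loop_buf[2048];
static size_t loop_len;

static ssize_t loop_send(n2n_transport_t *t, const void *buf, size_t len) {
    (void)t;
    if(len > sizeof(loop_buf)) return -1;
    memcpy(loop_buf, buf, len);
    loop_len = len;
    return (ssize_t)len;
}

static ssize_t loop_recv(n2n_transport_t *t, void *buf, size_t len) {
    (void)t;
    if(len < loop_len) return -1;
    memcpy(buf, loop_buf, loop_len);
    return (ssize_t)loop_len;
}

n2n_transport_t loopback_transport = { "loopback", loop_send, loop_recv };
```

With an interface like this, "more ports" or "TCP fallback" would mean adding transports rather than threading protocol knowledge through the whole code base.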
I have moved two items to the Optional section:
The inter-supernode communication basically is not required yet as we do not have supernode-side features requiring such a communication yet. This can be done later if neccessary just by adding a new message type then.
The planned supernode management port command to reload communities is not required either now. Of course, it would be nice to have as reloading communities without having to restart the whole supernode might relieve daily work. But how often will the community file change? I will keep it optional as this might become interesting again if we ever implement advanced edge authentication with supernode-side edge auth data changing more often.
By this, the intended to-dos basically boil down to documentation-related tasks inside and outside the code. Why do those items always remain last? :wink:
How about AEAD ciphers like chacha20-ietf-poly1305 and aes-128-gcm? 😉 chacha20-ietf-poly1305 from libsodium is faster and safer on many embedded devices.
Hello @JimLee1996, thank you for your feedback which is very welcome here! Please let me take the opportunity to make some comments.
| libsodium
We once decided to not rely on an external library by default to keep the weight gain low, especially on embedded devices and / or routers etc. – openSSL support can optionally be enabled though. We certainly are open to contributions allowing for optional support of other additional libraries such as libsodium, "plug-in" style through compile-time option, just as with openSSL support.
| faster
I think that the plain C versions of the four implemented ciphers run reasonably fast on all platforms, the code contains optimizations for some platforms (ARM NEON, Intel SSE, Intel AES), some other might follow (ARM AES). Overall, I consider it a good compromise for mixed environments and settings. Platform-specific contributions in this field are highly appreciated.
| safer
My guess is that you refer to Poly1305 and GCM for message authentication. Actually, we do authenticate the packets in header encryption mode (-H at the edge) but use a different encrypted message hash; you will find some detailed explanations in this section. This scheme has been retro-fitted into the existing structures without adding to packet size, which might explain why it is the way it is. Honestly, I do not see how it is unsafe. Please let us know about any loopholes you see; I'd be very happy to discuss. However, authentication might change in 4.0, for which possibly going full public-key opens up a whole variety of new options again.
Concerning the ciphers, I think we are well off in having two stream ciphers (ChaCha20, SPECK-CTR) and two block ciphers (Twofish-CBC/CTS, AES-CBC/CTS) at hand to choose from. I would not recommend changing AES-CBC/CTS to AES-GCM because I consider a strong block cipher a very valuable option.
What do you think?
Thank you for the detailed explanation, @Logan007 !
I do agree with the decision not to rely on an external library. It is lightweight and easy to build, especially for embedded devices. And I see the header encryption part; actually, it is safe for now.
Thanks for your great contributions again, and look forward to version 4.0!
It's great to have active development of N2N again. We use N2N to link our on-premises cluster to our cloud cluster nodes, and require the cloud nodes to communicate both with each other, and nodes in the on-prem cluster.
A common limitation we have found is that the cloud nodes communicate with each other not via their cloud instance local IPs (which would be superfast, and cheap) but by the public IPs they register with the on-premises edges/supernodes (which is superslow, and costly, as it involves them sending the packet over the public internet to our on-premises edge, which routes the packet back over the public internet to the other cloud instance).
To give an example: when Cloud Edge 1 sends a packet to Cloud Edge 2, it ends up being forwarded via the On-Premises Edge.
I don't know if there is a way around this problem in N2N 2.9.0 which we are testing. We have tried having the cloud edges register only with a local cloud supernode, which in turn federates with the on-premises supernodes; but this did not seem to change the routing behaviour.
What I suspect may be needed is for edges to have the capability to register multiple local IPs with each supernode; then when Cloud Edge 1 sends a packet to Cloud Edge 2, it can choose an IP for Cloud Edge 2 on the same network where possible.
Please let me know if I've explained this well and/or you can see a solution to this challenge.
Thank you for getting to this. Actually, I realized that this happens in one of my home networks as well, when the edge's IP address switches back and forth between its local IP address and its global IP address (NATed but with some fixed port) every two minutes or so.
The edge nodes usually bind to ANY interface. The local edges get detected by multicast. This leads to the behavior observed.
As internal structures do not support multiple sockets per edge yet, the best preliminary solution would be to have the edge bind to a specific local interface (which needs to be implemented though) and to deploy one supernode per local cloud and federate those, as you already did.
For now, implementing the interface binding would be easier than supporting multiple sockets per peer, which would require some additional logic to correctly address the other peers, i.e. choosing the right socket by context. I think it could be done quite quickly if time permits (src/edge_utils.c:217 needs to be fed with some parameter from the command line). What do you think?
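The interface binding itself is a small change in principle. A sketch of what a -b <local IP> style option could boil down to (the option name and function are assumptions for illustration, not existing n2n code):

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* hypothetical sketch of a -b <local IP> option: bind the edge's UDP
 * socket to one specific local address so that all n2n traffic leaves
 * through the corresponding interface. Not existing n2n code. */
static int open_bound_socket(const char *local_ip, uint16_t port) {
    struct sockaddr_in a;
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    if(fd < 0) return -1;
    memset(&a, 0, sizeof(a));
    a.sin_family = AF_INET;
    a.sin_port = htons(port);
    if(inet_pton(AF_INET, local_ip, &a.sin_addr) != 1
       || bind(fd, (struct sockaddr*)&a, sizeof(a)) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}
```

The real work is plumbing the address from the command line down to where the socket is opened, and handling re-binding on re-connect.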
However, as first thoughts on a 4.0 design emerge, I am confident that this issue will naturally be addressed by that design then. But it will take more than a year from now I guess.
@Logan007 Thanks for the swift reply. I can see the challenges you describe in implementing multiple sockets per edge. In the meantime, binding to a specific interface sounds like it could be a useful workaround, if it ensured that the Cloud Edges communicate with each other only over the specified interface, while still allowing full networking with the edges in our on-premises cluster. If you were able to draft that patch, I'd be happy to review it and test it out on Google Cloud.
(While I'm writing, I feel I'd better mention a bug I'd like to report that may be connected to the current P2P/multicast discovery logic and may relate to edge binding to any interface. What I experienced was the edge process sending UDP packets from the configured -p 11114 port over the edge interface itself, which caused a flood. This appeared to happen after our on-prem edge received an Rx REGISTER from the n2n private IP assigned to another edge's edge interface. I assume that edge should never send edge traffic over the edge interface itself, but that is what I appear to have seen. There may be better solutions to this – adding -S1 to the edge command line appears to work around the issue – but binding edge to a specific interface would also appear to be one.)
if it ensured that the Cloud Edges communicates with each other only over the specified interface
I hope so, not tested yet though. :wink:
But I am positive that, along with -l <local IP of local supernode>, there is a good chance to get it working.
In addition, you would need to compile the edges with RTT selection mode to make them definitely connect to the local supernode. Only if that one fails will they register to a remote one. Hope the clouds are far enough away from each other to distinguish the supernode ping times. With RTT compiled in, you can watch them in the load column of the management port output.
So, I will consider some -b <some local IP address> option sometime soon.
If you were able to draft that patch, I'd be happy to review it and test it out on Google Cloud.
I will pick you up on that! :wink:
while still allowing full networking with the edges in our on-premises cluster
Note that the edges most probably will not be able to communicate directly with edges at other locations then, because the local supernode only sees local addresses to propagate. This far-com will presumably happen through the federated supernodes then.
I assume that edge should never send edge traffic over the edge interface itself, but that is what I appear to have seen.
That's a very good point! Thank you for reporting! :+1:
If I get an opportunity, I will dive deeper into that. I am not aware of the sequence of opening the TAP and connecting to the supernode, including the binding. As re-connects can occur, including binding to the then already existing TAP, we might need a more general way to ignore incoming n2n packets at the TAP.
@struanb, please test #753 and let us know what you find.
Thank you @Logan007 that's incredible, I will!
P.S. Just a thought regarding the other idea of the edge sending packets by the lowest-cost/fastest route to other edges: I am not sure this requires multiple sockets as such; just maintain a list of available IPs for each edge and address each packet with the best destination IP for the destination edge; then, if I'm not mistaken, I think the kernel will route the packet automatically out of the appropriate interface.
I think the kernel will route the packet automatically out the appropriate interface.
Yes, it will.
just maintaining a list of available IPs for each edge
Internal structures just do not support it yet. Each peer has only one (remote) socket / IP address assigned. Keeping a list of sockets would require a lot more maintenance (a so-far-unseen socket: is it a new socket of a restarted edge, just a NAT-induced port change, or an additional, different one? should the supernode purge long-unseen sockets? how to always find the right address to use?) and ...
addressing each packet with the best destination IP for the destination edge
... brings up some other questions when it comes to forwarding socket information of other edges via PEER_INFO. Of course, a supernode could check if an edge has an IP address which is part of one of the supernode's local networks. But what if a local peer happened to have registered with another supernode which just forwards the request? It would deliver the global IP address then, because the supernode in charge is not aware that the edge in question is local to the requesting edge... The RTT selection criterion is mandatory then – diminishing to some extent the benefit of the intended load balancing.
Although your idea is great and makes absolute sense, I do not see these things in 3.0 due to the complexity involved. Who knows if someone is going to implement it for 3.2 or 3.4?
Your scenario is very interesting and I will definitely keep it in mind designing the next versions. May I ask if the local addresses to be preferred are part of some adapter's assigned network address including the sub-network? For example, do all your nodes at the same place share local addresses of the same sub-network, and can this full sub-network be found in the ip addr output? This would become an important indicator when evaluating which IP address to prefer.
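The "important indicator" check is a plain prefix comparison. A minimal sketch of deciding whether two IPv4 addresses share a sub-network (illustrative helper, not existing n2n code):

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdint.h>

/* hypothetical sketch: does ip_b fall into the same /prefix_len
 * sub-network as ip_a? This is the check a node could run against its
 * own adapters' networks when choosing which peer address to prefer. */
static int same_subnet(const char *ip_a, const char *ip_b, int prefix_len) {
    struct in_addr a, b;

    if(inet_pton(AF_INET, ip_a, &a) != 1) return 0;
    if(inet_pton(AF_INET, ip_b, &b) != 1) return 0;
    /* build the netmask in network byte order; /0 matches everything */
    uint32_t mask = prefix_len ? htonl(0xFFFFFFFFu << (32 - prefix_len)) : 0;
    return (a.s_addr & mask) == (b.s_addr & mask);
}
```

As noted in the thread, the catch is that two unrelated sites may both use e.g. 192.168.1.0/24, so a positive match is a hint, not a proof of locality.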
I will!
I really am looking forward to hearing back from you! Especially, I would be interested if and how nodes from different sites can still communicate directly p2p or through federated supernode forward only.
Are we ready for a 3.0 release?
There is another very important function that needs to be completed: edge's point-to-point direct connection within the local area network has not been very easy to use. It is currently implemented through multicast, but the result is generally not satisfactory; I think the -L feature of meyerd/n2n should be used as a reference.
Edge's local area network point-to-point direct connection function has not been very easy to use.
We actually do not have that... :wink:
It is currently implemented through multicast ...
which still requires a supernode to be provided (-l)
... but it is generally not satisfactory
as it makes it widely visible?
I think the -L function of meyerd/n2n should be used for reference
What exactly does -L do? As far as I get it, it creates a local encrypted point-to-point link without a supernode.
The current LAN direct connection feature is realized using multicast. Multicast is limited to discovering multiple hosts under the same switch; it cannot be used across switches. When there are VLANs and complex NATs in the LAN, hosts cannot be directly connected at all and can only be forwarded through the supernode.
The general idea is that each edge reports the real LAN IP of its host to the supernode, which then tells the other edges. When edges send data to each other, they first try to connect to the other's LAN IP.
When there are VLANs and complex NATs in the LAN, hosts cannot be directly connected at all and can only be forwarded through the supernode.
So, -L would indicate the edge's own local address to propagate?
The general idea is that each edge reports the real IP of its host's LAN to sn and then tells the other edge.
This would also solve @struanb 's issue, I guess.
I have been pondering a similar idea for 4.0 and basically see the following issues and possible work-arounds:
- Privacy: the hostname (if -I is not provided) might already be considered border-line. So, the local socket should not be advertised by default, maybe only iff -L is provided.
- Collisions: two different local networks might use the same 192.168.1.0/24 address range. Edges at both networks would try to connect to assumed peers which in reality are located at the other network. Could be solved by some logic though ...
- Implementation: a quick-and-dirty solution could add an n2n_sock_t typed local_sock field for each peer, and edges could advertise it with each REGISTER_SUPER. The PEER_INFO could propagate it to the other edges. Edges would try to register through both sockets and stick to the local sock for outbound packets if traffic has already been successfully received through it.

For the general approach with lists of sockets allowing several local sockets (WiFi, LAN, ...) and even far-com, I am not quite convinced to see it in 3.0 yet. The quick-and-dirty solution limited to one local sock might make it into 3.0. Any volunteers?
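The quick-and-dirty variant with a single extra local_sock could be sketched as follows. The types here are simplified stand-ins (the real n2n_sock_t in the n2n headers carries more fields), and peer_dest() is a hypothetical helper:

```c
#include <stdint.h>
#include <string.h>

/* hypothetical sketch of the quick-and-dirty variant: one optional local
 * socket per peer. n2n_sock_t is simplified here; the real type in the
 * n2n headers carries more fields. */
typedef struct {
    uint8_t  family;     /* AF_INET only in this sketch */
    uint16_t port;
    uint8_t  addr[4];    /* IPv4 address bytes */
} n2n_sock_t;

typedef struct {
    uint8_t    mac[6];
    n2n_sock_t sock;        /* public socket as seen by the supernode   */
    n2n_sock_t local_sock;  /* LAN socket advertised via -L, may be empty */
    uint8_t    has_local;   /* set once a -L socket has been announced  */
} peer_info_t;

/* pick the destination socket: prefer the local one if it was announced */
static const n2n_sock_t *peer_dest(const peer_info_t *p) {
    return p->has_local ? &p->local_sock : &p->sock;
}
```

This is exactly the "one extra field instead of a list" trade-off discussed above: trivial to route on, but it cannot represent WiFi plus LAN plus far-com at the same time.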
It would be better if the existing main socket could be reused, but we need to try it; using additional sockets is not recommended.
Maybe this is QUERY_PEER and PEER_INFO's true destiny...?
@Logan007 Personally I feel like upnp should be finished before 3.0 is released. 😘
Well, so far, it has not been done yet...
As it would not affect existing message format, it can easily be added later. Also, we still need features to be implemented for 3.2 and beyond! :wink:
UPnP is of course very important, but it needs to wait for friends who have time to implement it. My point of view is consistent with @Logan007's: before the 3.0 release, the focus is to complete the content related to the protocol messages and try not to affect compatibility in subsequent versions.
Moreover, the current version has a lot of strange problems like this. I suggest that users with development capabilities carefully review the code and debug it repeatedly, because the people who use it are more likely to find problems. For Logan007 alone, time and energy are too limited.
Yesterday's communication about strengthening the point-to-point direct connection function in the local network is very important. If there are no other volunteers, let me try it, but I have very little time, so it will not necessarily be fast. @Logan007, please add it to the to-do list.
I seem to have fat-fingered the sweat smile emoji... my bad 😅 @Logan007
But that is indeed fair what you all have said. I guess the main reason I felt that UPNP should be implemented was that I felt it strengthened the ability to get clients into the network without the annoyance of port forwarding. (Or teaching them how to do so 😅)
Once the 3.0 build is bug free... for the most part... I'll support the move to make a full 3.0 release. 😃
Given that UPNP is well known as a security hole, and also given that it would not change the wire protocol, I would vote not to wait for that before releasing.
On the -L switch front: it allows you to specify a single extra IP address, and the edge then registers two addresses with the supernode – the usual public address and this extra address – so all the issues with handling multiple sockets per edge need to be addressed.
To help mitigate the perceived need for this feature: multicast /should/ work across multiple switches (I have not tested this with the current crop of crappy home switches, but it is a key component of IPv6... so...), and I would suggest that someone who is knowingly dealing with a complex NAT or multiple-VLAN environment would also be the kind of person who could run a local supernode to optimise their routes.
Thanks for discussing the -L question from differently angled views. And, directly bursting from the bottom of my heart: you guys rock! :orange_heart:
Although I am not so sure if we really really need this feature, I think it can prove useful for some networks. But especially as the 4.0 track definitely will have to handle several sockets' information, this might turn out as a good exercise to gain some experience in that field and learn from it for future endeavors.
So yes, let's do it! Let's just shoot the optional -L-provided socket to the supernode inside the REGISTER_SUPER packets and have it distributed by the PEER_INFO queried from time to time – not sure if this happens on a regular basis though. Maybe the REGISTER messages could carry this information, too?
Edge's peer struct gets an additional field for this socket information (along with a use_it flag, another last_cookie jar, as well as a corresponding last_updated field). If filled, the edge will send out two REGISTERs – one to each of the sockets. The packets need to be different, either by a flag or, better, just with a different cookie.
On arrival of the corresponding REGISTER_ACK, we check which one got acked (maybe both). If we find by cookie that we are processing the locally flagged one, we flag that peer to be talked to through the local socket (use_it) and note the current time. Repeatedly occurring purging needs to make sure that the flag is lowered again after a while if not re-set by another REGISTER_ACK round.
What am I missing here?
Code will be kind of ugly because we deal with a few extra fields instead of the more general case (list of sockets sorted by some criteria). But let's keep that for 4.0...
It will take some time though. I cannot start before next weekend and it might take two to three weeks. Practical testing will be a hard thing to do for me due to the lack of most of these structures in my natural networking environment. I will mark it as experimental meaning most probably no support.
Any more thoughts on this?
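The REGISTER_ACK bookkeeping described above might look roughly like this. All names (use_it rendered as use_local, the cookies, the 60-second timeout) are illustrative assumptions, not existing n2n code:

```c
#include <stdbool.h>
#include <stdint.h>
#include <time.h>

/* hypothetical sketch of the dual-REGISTER bookkeeping described above:
 * field names, cookie handling and the timeout are illustrative only. */
typedef struct {
    uint32_t main_cookie;   /* cookie sent via the public socket */
    uint32_t local_cookie;  /* different cookie sent via the local socket */
    bool     use_local;     /* prefer the local socket for this peer?  */
    time_t   local_seen;    /* time of last ACK seen via local socket  */
} peer_reg_state_t;

#define LOCAL_SOCK_TIMEOUT 60   /* assumed purge interval, in seconds */

static void handle_register_ack(peer_reg_state_t *p, uint32_t cookie,
                                time_t now) {
    if(cookie == p->local_cookie) {   /* the locally flagged one got acked */
        p->use_local  = true;
        p->local_seen = now;
    }
}

static void purge(peer_reg_state_t *p, time_t now) {
    /* lower the flag again if no REGISTER_ACK round re-set it in time */
    if(p->use_local && (now - p->local_seen > LOCAL_SOCK_TIMEOUT))
        p->use_local = false;
}
```

Keying on the cookie rather than on a flag bit keeps the two REGISTERs indistinguishable on the wire, which matches the "better just with a different cookie" suggestion.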
I would really like to see a formal release happen. We appear to have hit all the critical items and some of the optional items.
What further do we need to make a 3.0 release?
Testing, I guess ;)
Ha!
I guess you mean real-life testing (which is hard) - and not simply adding automated test coverage (which is simple)
Task list (the checklist item texts did not survive this capture; assignees were Logan, Logan & fcarli3, and Hamish Coleman)

Optional (assignees: Logan, anzz1, Hamish Coleman, and unassigned), including:
- --pre-up, --post-up, --pre-down, --post-down CLI options to run scripts #694 #743 (assigned: -)
)This is about what technical features to include / change for an upcoming n2n 3.0 release. This post will regularly be updated to reflect current changes.
The list above is absolutely not carved in stone. Please share your thoughts for discussion and do not hesitate to propose any of your ideas!
If you want to contribute to some of the listed features or to a not-yet-listed feature, such as upnp support, or… let us know!