ntop / n2n

Peer-to-peer VPN
GNU General Public License v3.0

Is there any way to make the supernodes in active/standby mode but not in load-balance mode? #860

Closed · galaxyskyknight closed this issue 3 years ago

galaxyskyknight commented 3 years ago

Hello,

I have 2-3 supernodes with federation mode enabled. The edges are registered among them, and the supernodes seem to be running in load-balance mode, since their loads are similar. However, I want these two or three nodes to run in a 1:1 or 1:2 active/standby mode. Is there any way to implement that? I remember there are some compile options for this; is it dynamically configurable at runtime instead of only at compile time?

Also, with only 2 nodes this seems straightforward, but if there are 3 nodes, how should the topology and configuration be set up?

galaxyskyknight commented 3 years ago

I checked the documentation; it seems my scenario is not supported. There are only two approaches: 1) load balancing based on machine load, and 2) picking the 'closer' supernode by ping RTT. My situation is different. The supernode machines I currently use are assigned bandwidths with a rather big gap between them. They run on the same operator's land line and their hardware is similar (so their ping values and CPU loads will be almost the same), but due to cost limits from the operator, one has more than 10 Mbps of bandwidth while the other has only 2 Mbps.

So what I want is that when these two supernodes run in federation mode, they work in some 'active/standby' redundancy mode: the higher-bandwidth machine is the master, all edges register on it, and the smaller node registers no edges. Only when a network connection issue happens does the standby take over, and it flips back when the master recovers. The benefit of this approach is that I can make maximum use of the bigger machine's bandwidth for those edges that have to be interconnected via the supernode (PSP mode), speeding up file transfers between indirectly connected edges relayed through the master, while still keeping high availability for the whole edge/supernode topology.

One possible approach: manually assign a weight to each supernode in the supernode configuration file and broadcast it to the edges. An edge initially registers with the smallest-weight supernode at start-up, and tries the others in weight order if the first one fails. That seems like a simple and easy way?

So, do you have any idea whether this could be done in version 3.2?

Logan007 commented 3 years ago

If you want to designate certain supernodes for main, backup, and emergency function, this could be solved by additionally implementing an optional supernode selection strategy (basically encapsulated in sn_selection.c) by MAC address: connect to the supernode with the highest MAC address. Apart from the implementation, it would require manually configuring the supernodes' MAC addresses accordingly (-m at the supernodes).

galaxyskyknight commented 3 years ago

> If you want to designate certain supernodes for main, backup, and emergency function, this could be solved by additionally implementing an optional supernode selection strategy (basically encapsulated in sn_selection.c) by MAC address: connect to the supernode with the highest MAC address. Apart from the implementation, it would require manually configuring the supernodes' MAC addresses accordingly (-m at the supernodes).

I don't get that. How do I configure it? According to the documentation, there is no '-m' option for the supernode. Do you mean I should specify '-m <MAC>' in supernode.conf for the master, with a higher address than the slave's?

Also, is a normal compile required, or will a binary compiled with the -DSN_SELECTION_RTT macro also support what you mentioned? Currently all of them are compiled with -DSN_SELECTION_RTT.
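For illustration only: assuming the proposed MAC-based strategy were implemented and the supernode gained the `-m` option hinted at above (neither exists in the released code at this point), the two supernodes might be started like this, giving the preferred high-bandwidth node the numerically higher MAC:

```shell
# hypothetical setup for the proposed MAC-based selection strategy
# main supernode (high bandwidth): higher MAC, so edges pick it first
supernode -p 7777 -m 02:00:00:00:00:02

# standby supernode (low bandwidth): lower MAC, used only as fallback
supernode -p 7778 -m 02:00:00:00:00:01
```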

Logan007 commented 3 years ago

> by additionally implementing

Someone needs to code it.

galaxyskyknight commented 3 years ago

> > by additionally implementing
>
> Someone needs to code it.

Oops.... I take it for granted that it has been done....

galaxyskyknight commented 3 years ago

can I make this as a pull request?

Logan007 commented 3 years ago

Sure, go ahead.

GreatMichaelLee commented 3 years ago

> Sure, go ahead.

Probably there was a misunderstanding: I am not a developer. I meant, could someone help to develop this function and merge it?

Logan007 commented 3 years ago

I see. Let me add it to the action item list for 3.2.

GreatMichaelLee commented 3 years ago

> I see. Let me add it to the action item list for 3.2.

Great, thanks a lot!

galaxyskyknight commented 3 years ago

> I see. Let me add it to the action item list for 3.2.

Great, thanks a lot!

@Logan007 Hi, big man :) I would highly appreciate it if you could implement this MAC-based registration policy control ASAP; it would be very helpful to me. The current load-balance policy wastes a lot of the supernode servers' bandwidth, and the server that would become the standby is currently not as stable as the one that would become primary. So if you could get this done earlier, that would be great. Thanks!

Logan007 commented 3 years ago

> implement this MAC-based registration policy control ASAP

I fully understand your need and it got accepted to the list of future features.

> get it done earlier

We are currently busy testing and releasing 3.0, so development is frozen for the moment. As soon as 3.0 is released, we will have a look.