mlhpdx opened 2 years ago
One advantage of this approach for "simple" UDP is that if one backend instance fails there will be an increase in packet loss for all clients rather than a loss of all traffic for some clients (as traditional load balancers would do).
A bit of an odd statement. Something that would be really useful is to temporarily shift the load off the failing backend. Also, regarding other balancers that use affinity to stick certain clients to certain backends: when one backend fails, not all clients are impacted, only the ones stuck to the failing backend, I would think?
More to the point, I think this does belong on the list.
Since UDP traffic is by definition unreliable, clients should be built to expect packet loss. In a production environment this is a preferred fallback mode (granted, for some but not all use cases) for a service. It's better to choose increased packet loss, since that's likely to be tolerated by "good" clients, particularly when it's negligible, as it will be when one of many backends fails. The alternative is completely failing (albeit temporarily) all traffic from the subset of clients with affinity to the failed backend (while it is dead but not yet removed from service).
If it shouldn't be on the list, that's fine (I'll close this PR). But the software is undoubtedly a useful tool for those serious about "sessionless" UDP services.
Ok, that makes sense. Thanks for clarifying.
Keep the PR open. Maybe @thangchung will merge it, but it can take a while. (I don't have any rights here.)
This is a standalone load balancer to add to the Networking section. It's a sessionless/stateless UDP load balancer (which is difficult to find, but very useful for some protocols) that evenly distributes packets to back-end targets, with low-latency target management (adding and removing targets). Works on Windows as well as Linux.
https://github.com/mlhpdx/SimplestLoadBalancer
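For anyone curious how the sessionless approach works in principle, here is a minimal sketch. It is not the SimplestLoadBalancer code (that project is .NET); it's written in Go, and the listen port and backend addresses are made-up placeholders. The point it illustrates is the one discussed above: every packet is forwarded to a randomly chosen backend, so there is no affinity state to lose when a backend dies, only a proportional increase in packet loss.

```go
// Minimal sketch of a sessionless UDP load balancer: each inbound datagram is
// forwarded to a randomly chosen backend, so no per-client state is kept.
// Listen port and backend addresses below are illustrative assumptions.
package main

import (
	"log"
	"math/rand"
	"net"
	"sync"
)

type backends struct {
	mu    sync.RWMutex
	addrs []*net.UDPAddr
}

// add supports low-latency target management at runtime.
func (b *backends) add(addr *net.UDPAddr) {
	b.mu.Lock()
	defer b.mu.Unlock()
	b.addrs = append(b.addrs, addr)
}

// pick returns a random backend, giving each an equal share of packets.
func (b *backends) pick() *net.UDPAddr {
	b.mu.RLock()
	defer b.mu.RUnlock()
	if len(b.addrs) == 0 {
		return nil
	}
	return b.addrs[rand.Intn(len(b.addrs))]
}

func main() {
	listen, err := net.ResolveUDPAddr("udp", ":5000")
	if err != nil {
		log.Fatal(err)
	}
	conn, err := net.ListenUDP("udp", listen)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	targets := &backends{}
	for _, a := range []string{"10.0.0.11:5000", "10.0.0.12:5000"} {
		addr, err := net.ResolveUDPAddr("udp", a)
		if err != nil {
			log.Fatal(err)
		}
		targets.add(addr)
	}

	buf := make([]byte, 65535)
	for {
		n, _, err := conn.ReadFromUDP(buf)
		if err != nil {
			continue
		}
		if dst := targets.pick(); dst != nil {
			// Forward the datagram as-is; if that backend happens to be
			// down, the packet is simply lost (the trade-off described above).
			conn.WriteToUDP(buf[:n], dst)
		}
	}
}
```

Because no client-to-backend mapping exists, removing a dead target from the slice immediately stops the loss; nothing needs to be drained or rebalanced.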