Closed Southclaws closed 7 years ago
Going to rewrite this, it was actually more challenging than I thought.
The model I came up with was a repeating timer in a background process that stepped through a list of server IPs one by one, querying each and moving on to the next (in coroutines, not sequentially). But the logistics of handling mutual exclusion and finding an appropriate data structure make me think it's not Go-friendly, and the design itself is probably fundamentally flawed.
Since goroutines are cheap and there are only around 4-6 thousand servers online at once, I may go with a goroutine-per-server approach, where each goroutine runs its own independent ticker.
What I'd like to do is implement some kind of load-balancing timer system, similar to how Y_Less did y_timers. I couldn't find an existing library that does this already, so it might make a nice spin-off project!
Not entirely sure how this will work yet, but it will involve setting up a worker pool with a global interval then:

n / i

where n is the number of workers and i is the interval.
The query daemon is designed to "crawl" sources for server IPs and add them to the list. This will most likely include the hosted list API on the official SA:MP domain and the SACNR monitor.
This system should be respectful of rate limits and efficient on the backend's resources.
The same code could also be used to query servers gathered via #4 - likely through a global, unique set of server addresses that stores each server's last query time and is periodically queried in a round-robin-like fashion.