Due to the reduction of the limit on the number of nicknames for which you can get UUIDs in a single request from 100 to 10, as well as the drop in the request rate limit from 600 to ~240 requests per 10 minutes (determined experimentally), we can no longer process all incoming traffic from one server. In effect, throughput has fallen from roughly 100 × 600 = 60,000 to 10 × 240 = 2,400 nickname resolutions per 10 minutes per IP address, a 25× reduction.
Even during initial development we had doubts that we would be able to handle the whole load from a single server, so we thought through options to solve this problem. There are two ideas:
1) Connect additional IPs to the server.
2) Use some remote executors as a queue.
The first option can work when the application is hosted on a dedicated server with an additional network interface, but it's not suitable for the minimal hardware we are targeting.
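As a rough illustration of option 1, here is a minimal Go sketch (assuming Go and the standard net/http package; the address 203.0.113.5 is a placeholder that must actually be assigned to one of the host's interfaces) of binding outgoing requests to a specific local IP:

```go
package main

import (
	"fmt"
	"net"
	"net/http"
	"time"
)

// newClientForIP builds an HTTP client whose outgoing TCP connections
// are bound to the given local IP address, so each client can exhaust
// its own per-IP rate limit independently.
func newClientForIP(localIP string) *http.Client {
	dialer := &net.Dialer{
		LocalAddr: &net.TCPAddr{IP: net.ParseIP(localIP)},
		Timeout:   10 * time.Second,
	}
	return &http.Client{
		Transport: &http.Transport{DialContext: dialer.DialContext},
	}
}

func main() {
	// Placeholder IP: must be one of this host's configured addresses.
	client := newClientForIP("203.0.113.5")
	resp, err := client.Get("https://api.mojang.com/")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```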
The second option is much more viable, because it allows us to spread the load across many cheap servers, each of which works from its own IP address.
The easiest implementation is to run on each worker a web server that puts all incoming requests into its queue and returns responses as they come back from the Mojang API. The master node receives a config parameter that defines the list of the workers' IP addresses, and requests are distributed among them by a round-robin algorithm, as sketched below.
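A minimal sketch of the master node's round-robin selection, again assuming Go; the worker addresses and the idea of proxying each request to the picked worker are illustrative, not a finished design:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// workerPool hands out worker base URLs in round-robin order.
// The addresses are hypothetical; in the real service they would
// come from the master node's config parameter.
type workerPool struct {
	workers []string
	next    atomic.Uint64
}

func (p *workerPool) pick() string {
	// Atomically advance the counter so concurrent requests
	// are spread evenly across workers.
	n := p.next.Add(1)
	return p.workers[(n-1)%uint64(len(p.workers))]
}

func main() {
	pool := &workerPool{workers: []string{
		"http://10.0.0.11:8080",
		"http://10.0.0.12:8080",
		"http://10.0.0.13:8080",
	}}
	// Each incoming request would be forwarded to the next worker,
	// which queues it and replies with the Mojang API response.
	for i := 0; i < 5; i++ {
		fmt.Println("forward to:", pool.pick())
	}
}
```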
But even in the simplest implementation there is the question of keeping the cluster healthy: some workers may go down and will have to be temporarily removed from the pool. This task can be solved on our side (which requires additional development) or with some middleware (for example, HAProxy). There is no final decision yet.
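If HAProxy were used, removing dead workers from rotation comes down to health checks in its config. A hypothetical sketch (the addresses, port, and /healthcheck path are made up; `check` takes a server out of rotation after failed probes and returns it once probes succeed):

```
frontend mojang_queue
    bind *:8080
    default_backend mojang_workers

backend mojang_workers
    balance roundrobin
    option httpchk GET /healthcheck
    server worker1 10.0.0.11:8080 check
    server worker2 10.0.0.12:8080 check
    server worker3 10.0.0.13:8080 check
```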