Closed rjagerman closed 8 years ago
After some consideration, using the n-th modulo is probably a bad idea. It makes it very difficult to reason about which server covers which keys unless you actually hash all of those keys. A better way around it is to use something like DHT hashing, where the key space is divided amongst the servers and we know exactly which server is responsible for which range of hashes.
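A minimal sketch of what such range-based partitioning could look like, assuming a fixed hash space of size `keySpaceSize`; the class and method names here are hypothetical, not part of the existing codebase:

```scala
// Hypothetical sketch: each server owns one contiguous range of the hash
// space, so the responsible server is known without hashing every key.
class RangePartitioner(numServers: Int, keySpaceSize: Long) {

  // Size of the contiguous hash range assigned to each server.
  private val rangeSize = math.ceil(keySpaceSize.toDouble / numServers).toLong

  // Maps a (possibly negative) hash to the server owning its range.
  def serverFor(hash: Long): Int = {
    val h = ((hash % keySpaceSize) + keySpaceSize) % keySpaceSize
    (h / rangeSize).toInt
  }
}
```

Because each server owns a contiguous range, reassigning work after a failure only means moving a range boundary rather than rehashing individual keys.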
The current implementation does not yet use hashing and instead just assumes a fixed-size key space ranging over [0, n), where keys are distributed evenly over the servers.
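One plausible reading of that scheme, sketched below under the assumption that "evenly" means equally sized contiguous blocks of [0, n) per server (the function name is illustrative):

```scala
// Assumed sketch of the current scheme: the fixed key space [0, n) is cut
// into equally sized contiguous blocks, one block per server.
def serverForKey(key: Long, n: Long, numServers: Int): Int = {
  val blockSize = math.ceil(n.toDouble / numServers).toLong
  (key / blockSize).toInt
}
```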
This feature is quite important: it is a major performance benefit for unbalanced key distributions. In the LDA case we typically work on text data, where word frequencies follow a power-law distribution. Certain features occur very frequently in samples while others occur rarely. We need to distribute the key space in such a way that we prevent frequently occurring features from all landing on a single parameter server.
So far either DHT hashing or a cyclic modulo approach seems best. Both approaches require refactoring the create method in the client.
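Assuming "cyclic modulo" means assigning key k to server k mod s, a sketch could look as follows (the name is illustrative). Consecutive keys then land on different servers, which helps when frequently used features cluster in a contiguous index range:

```scala
// Cyclic (round-robin) assignment: consecutive keys go to different servers,
// spreading out frequent features that have nearby indices.
def cyclicServerFor(key: Long, numServers: Int): Int =
  (key % numServers).toInt
```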
DHT has the added benefit that it can easily be extended to incorporate fault tolerance and instantaneous failover.
We wish to distribute (key, value) pairs over parameter servers. This can be done consistently via key hashing. To get started, assume we have n parameter servers labeled 1 ... n. This information can be stored in a context-like object which is distributed to all workers. We can then hash a key and compute the n-th modulo of the output. This gives us the appropriate server storing that key. For a large set of keys and a proper hash function, we will get a roughly uniform distribution over servers.
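As a concrete illustration, a sketch in Scala using MurmurHash3 purely as an example of a reasonably uniform hash function (the function name is made up):

```scala
import scala.util.hashing.MurmurHash3

// Hash-then-modulo assignment: hash the key, then take the result modulo the
// number of servers. The double modulo keeps the index non-negative.
def hashServerFor(key: String, numServers: Int): Int = {
  val h = MurmurHash3.stringHash(key)
  ((h % numServers) + numServers) % numServers
}
```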