tsaastam opened this issue 4 years ago
As soon as you use more than one thread, the order of training examples will vary based on scheduling jitter from the OS, and the progression of random choices used by the algorithm will vary with it. So you wouldn't necessarily expect the tallied loss values, at the end of any epoch or of all training, to be identical or closely correlated.
Further, some have observed that stochastic gradient descent in which multiple parallel sessions sometimes clobber each other's results can, surprisingly, work a bit better than pure, synchronous SGD; see for example https://cxwangyi.wordpress.com/2013/04/09/why-asynchronous-sgd-works-better-than-its-synchronous-counterpart/. That might explain a somewhat "faster" improvement in loss in multithreaded runs.
However, it's also quite likely that the loss-calculation code, which was bolted on later and never really fully tested, implemented for all related classes (FastText, Doc2Vec), or verified as being what users needed, isn't doing the right thing in multithreaded situations, with some tallies being lost when multiple threads update the same value. (In particular, the way the Cython code copies the running value into a C-optimized structure, tallies into it there, and then copies it back to the shared location could very well lead to many updates being lost. The whole feature needs a competent revisit; see #2617.)
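For illustration only, here is a minimal sketch in plain Python (not the actual gensim/Cython code) of the lost-update pattern described above: each worker copies the shared running loss into a private slot, tallies into it, and writes it back, so concurrent writers silently drop each other's contributions.

```python
# Minimal illustration of the suspected lost-update pattern -- plain Python,
# NOT the actual gensim/Cython implementation.
import threading

shared = {"running_loss": 0.0}

def worker(n_examples):
    local = shared["running_loss"]   # copy the shared tally into a private slot
    for _ in range(n_examples):
        local += 1.0                 # accumulate locally, without locking
    shared["running_loss"] = local   # copy back -- may overwrite other workers' tallies

threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# With no lost updates the total would be 400000.0; with the copy-in/copy-out
# race it is typically far less, and the shortfall grows with the thread count.
print(shared["running_loss"])
```

Scaled up, a tally like this would make the reported loss fall roughly in proportion to the number of workers even when the training itself is unaffected.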
Problem description
The word2vec implementation requires a workaround, as detailed in #2735, to correctly report the total loss per epoch. Even with that workaround in place, though, the next issue is that the reported total loss seems to vary with the number of workers.
Steps/code/corpus to reproduce
This is my code:
My data is an in-memory list of sentences of Finnish text, each sentence being a list of strings:
I'm running the following code:
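Roughly, the code has the shape of the sketch below — a hedged reconstruction, not the exact listing: the toy `sentences` stand-in, the worker counts, and all hyperparameters are assumptions. It relies on gensim's `compute_loss=True` together with a `CallbackAny2Vec` that differences `get_latest_training_loss()`, i.e. the workaround from #2735.

```python
# Hedged reconstruction, not the original listing: the corpus, worker counts
# and hyperparameters below are assumptions.
from gensim.models import Word2Vec
from gensim.models.callbacks import CallbackAny2Vec

class EpochLossLogger(CallbackAny2Vec):
    """Record the loss accumulated during each epoch.

    get_latest_training_loss() returns a running total across epochs,
    so the per-epoch figure is the difference from the previous total
    (the #2735 workaround)."""
    def __init__(self):
        self.previous_total = 0.0
        self.epoch_losses = []

    def on_epoch_end(self, model):
        total = model.get_latest_training_loss()
        self.epoch_losses.append(total - self.previous_total)
        self.previous_total = total

# Toy stand-in for the in-memory list of tokenised Finnish sentences.
sentences = [
    ["tämä", "on", "ensimmäinen", "lause"],
    ["tämä", "on", "toinen", "lause"],
] * 1000

for n_workers in (1, 2, 4, 8):          # illustrative worker counts
    logger = EpochLossLogger()
    model = Word2Vec(
        sentences,
        workers=n_workers,
        min_count=1,                    # only needed for the toy corpus
        compute_loss=True,              # enable loss tallying
        callbacks=[logger],
    )
    print(n_workers,
          "total:", sum(logger.epoch_losses),
          "last epoch:", logger.epoch_losses[-1])
```

The quantities compared across runs are then the summed per-epoch losses and the final-epoch loss.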
And the outputs are (last few lines + plot only):
What is going on here? The loss (whether total loss, final-epoch loss, or average loss per epoch) varies, although the data and the number of epochs are the same. I would imagine that "1 epoch" means "each data point is considered precisely once", in which case the number of workers should only affect how quickly the training finishes, not the loss (the loss would still vary a bit depending on the order in which the data points are considered, etc., but that should be minor). Here, though, the loss seems to be roughly proportional to 1/n, where n is the number of workers.
I'm guessing, based on the similar shape of the loss progressions and the very similar vector magnitudes, that the training is actually fine in all four cases, so hopefully this is just another display bug similar to #2735.
Versions
The versions were collected with the usual version-reporting snippet.
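Presumably that is the standard snippet from the gensim issue template (an assumption), roughly:

```python
# Presumed standard version-reporting snippet from the gensim issue template.
import platform; print(platform.platform())
import sys; print("Python", sys.version)
import numpy; print("NumPy", numpy.__version__)
import scipy; print("SciPy", scipy.__version__)
import gensim; print("gensim", gensim.__version__)
from gensim.models import word2vec; print("FAST_VERSION", word2vec.FAST_VERSION)
```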