This is an issue we found with NaNs occurring when running multiple threads. It could cause training to fail on large datasets in some settings. To fix it, we added a check for NaNs after every update and simply skip the update when NaNs are present. You don't need to do anything about it; we included the message as debugging info in case a problem occurs in spite of the fix. Please let us know if you see further problems tied to this issue.
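For reference, here is a minimal sketch of the kind of guard described above: check the computed update for NaNs and skip the whole update if any appear. The names (`apply_update_if_finite`, `params`, `update`) are illustrative, not GloVe's actual identifiers:

```c
#include <math.h>

/* Hypothetical guard: apply a gradient update only if it contains no NaNs.
   Returns 1 if the update was applied, 0 if it was skipped. */
static int apply_update_if_finite(double *params, const double *update, int n) {
    for (int i = 0; i < n; i++) {
        if (isnan(update[i]))
            return 0;  /* NaN found: ignore the entire update */
    }
    for (int i = 0; i < n; i++)
        params[i] += update[i];  /* safe to apply */
    return 1;
}
```

Skipping one update out of millions has a negligible effect on the final vectors, which is why ignoring the bad update is preferable to letting a NaN propagate and corrupt the parameters.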
The dataset is too big; I don't really know how I could help you reproduce it. Would it be enough to give you the files GloVe needs and the training scripts we used? Thank you for your help @Russell91
Sure, though I don't think it's being caused by the files, but by the threading model and the OS you are using. Happy to have helped though.
What does this mean? And what should I do to avoid it?