breznak opened this issue 9 years ago (Open)
Drop-out and randomness are common techniques in deep learning to force the system to learn more robust and distributed representations. Because the network is optimizing an objective function in a supervised way, it is forced to find solutions that can cope with the randomness.
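For reference, this is the kind of mechanism being described: a minimal NumPy sketch of inverted dropout, where a random fraction of units is zeroed during training and the survivors are rescaled. The function name and parameters are illustrative, not taken from the NuPIC codebase.

```python
import numpy as np

def dropout(activations, rate=0.5, training=True, rng=None):
    """Zero out a random fraction of units and rescale the rest
    (inverted dropout), so the expected activation is unchanged."""
    if not training or rate == 0.0:
        return activations
    rng = rng or np.random.default_rng()
    mask = rng.random(activations.shape) >= rate   # keep each unit with prob (1 - rate)
    return activations * mask / (1.0 - rate)

# Example: a layer's activations during training
layer_out = np.array([0.2, 0.9, 0.5, 0.7])
print(dropout(layer_out, rate=0.5))
```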
The HTM already has randomness inherent in a number of places, and I believe these (plus the way sparsity is constructed through inhibition) already achieve the main purpose. The HTM algorithm is also not a supervised technique. An HTM with reasonable parameters that match the math constraints will be inherently robust to noise - we have tested this extensively. I doubt adding additional noise to the input during learning will actually improve performance. However, I have no problem with people trying it out.
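If someone does want to try it out, here is a rough sketch of what "adding noise to the input during learning" could look like: flip a handful of bits in the binary input vector each time it is presented, before passing it to the spatial pooler. This is pure NumPy and deliberately independent of the NuPIC API; the function name and parameters are hypothetical.

```python
import numpy as np

def flip_bits(sdr, num_flips, rng=None):
    """Corrupt a binary input vector by flipping a fixed number of
    randomly chosen bits, simulating noisy input during learning."""
    rng = rng or np.random.default_rng()
    noisy = sdr.copy()
    idx = rng.choice(sdr.size, size=num_flips, replace=False)
    noisy[idx] ^= 1   # 0 -> 1 and 1 -> 0
    return noisy

# Example: a 100-bit input with 10 bits flipped on each presentation
input_sdr = (np.random.default_rng(42).random(100) < 0.2).astype(np.uint8)
noisy_sdr = flip_bits(input_sdr, num_flips=10)
print(np.count_nonzero(input_sdr != noisy_sdr))  # -> 10
```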
As techniques for improving generalization and stability, did we try them? How did it work out? I can't find them in the code.
CC @subutai @chetan51?