NVIDIA / DeepRecommender

Deep learning for recommender systems
MIT License

Autoencoder shape #24

Status: Open — Akababa opened this issue 5 years ago

Akababa commented 5 years ago

What's the reasoning behind having the first layer smaller than the middle layers, unlike what the picture shows? Is it to reduce the number of parameters and overfitting, or simply the best configuration from the experiments?

paulhendricks commented 5 years ago

Hoping @okuchaiev and @borisgin can weigh in as well...

My understanding is that while numerous architectures were explored (different activation types, adding hidden layers, different numbers of nodes per layer, different dropout rates, different learning rates, dense re-feeding on and off, etc.), this configuration had the best out-of-sample performance.

As to why this configuration performed the best in the empirical experiments, I hypothesize that the wide bottleneck layer helps the network learn a large number of "features" from the previous layer. Additionally, the high dropout rate forces the model to learn robust features: with only 20% of the neurons active (an 80% dropout rate), the model has to be very selective about which features are most useful for the task at hand.

Thus, this configuration likely had the best out-of-sample performance because the wide bottleneck layer with the high dropout rate allowed the model to learn a large number of very robust features.
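For anyone skimming, here is a minimal sketch (not the repo's actual `model.py`) of the shape being discussed: a narrower first encoder layer, a wide coded (middle) layer, and heavy dropout applied only at the bottleneck. The specific sizes (512 hidden, 1024 coded, 0.8 dropout, ~17,000 inputs) are illustrative assumptions, not pulled from this thread:

```python
# Sketch of a "narrow -> wide bottleneck with high dropout" autoencoder.
import torch
import torch.nn as nn

class SketchAutoEncoder(nn.Module):
    def __init__(self, n_items=17_000, hidden=512, coded=1024, p_drop=0.8):
        super().__init__()
        # Encoder: n_items -> hidden -> coded (the wide middle layer)
        self.encoder = nn.Sequential(
            nn.Linear(n_items, hidden), nn.SELU(),
            nn.Linear(hidden, coded), nn.SELU(),
        )
        # High dropout on the coded representation forces robust features
        self.drop = nn.Dropout(p=p_drop)
        # Decoder mirrors the encoder: coded -> hidden -> n_items
        self.decoder = nn.Sequential(
            nn.Linear(coded, hidden), nn.SELU(),
            nn.Linear(hidden, n_items),
        )

    def forward(self, x):
        z = self.drop(self.encoder(x))
        return self.decoder(z)

model = SketchAutoEncoder()
ratings = torch.zeros(4, 17_000)     # a (mostly sparse) user-ratings batch
reconstruction = model(ratings)      # shape: (4, 17_000)
```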

okuchaiev commented 5 years ago

Yes, I think @paulhendricks is right - the wide middle layer with large dropout lets the model learn robust representations. Regarding the first layer (e.g. the first encoder layer) and the last layer (e.g. the last decoder layer) - those are actually huge in terms of weights because the input data (x) is high dimensional. Thus, if x has around 17,000 dimensions and the first layer has only 128 activations, the first layer alone already has 17,000 x 128 weights.
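A quick back-of-the-envelope check of that point (the 17,000-item input and 128-unit first layer are the numbers from the comment above, used here purely for illustration):

```python
# Parameter count of the first encoder layer alone.
n_items, first_layer = 17_000, 128
weights = n_items * first_layer          # 2,176,000 weights
biases = first_layer                     # 128 biases
print(f"first-layer parameters: {weights + biases:,}")  # 2,176,128
```

So even a "small" 128-unit first layer dominates the parameter count, which is why the narrow first layer does not mean the model is small.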