cerlymarco / MEDIUM_NoteBook

Repository containing notebooks of my posts on Medium
MIT License
2.08k stars 975 forks

Questions for the notebook Predictive_Maintenance_SiameseNet #7

Closed kcg2015 closed 4 years ago

kcg2015 commented 4 years ago

Hi, Marco, I really enjoy this notebook!!! I am working on a similar problem (https://github.com/kcg2015/fiber_cuts_prediction). In your SiameseNet architecture, you add a Dropout layer after the difference calculation of the two encoders:

```python
L1_layer = Lambda(lambda tensor: K.abs(tensor[0] - tensor[1]))
L1_distance = L1_layer([encoded_l, encoded_r])
drop = Dropout(0.2)(L1_distance)
```

Could you let me know what the rationale behind this is? More importantly, would adding this dropout layer significantly reduce the overfitting? Thanks, Kyle
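For reference, a minimal NumPy sketch (not the notebook's actual Keras code) of what those three layers compute: the elementwise L1 distance between the two encoder outputs, followed by inverted dropout, which is the scaling scheme Keras's `Dropout` uses at training time. The example vectors and the `rate=0.2` value mirror the snippet above but are otherwise illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def l1_distance(left, right):
    # Elementwise |a - b|, as the Lambda layer in the snippet computes.
    return np.abs(left - right)

def dropout(x, rate=0.2, training=True):
    # Inverted dropout: during training, zero each unit with probability
    # `rate` and rescale survivors by 1/(1-rate) so the expected value
    # of the output matches the input. At inference, pass through.
    if not training:
        return x
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

# Hypothetical encoder outputs for a pair of inputs.
encoded_l = np.array([0.5, 1.0, -0.3, 0.8])
encoded_r = np.array([0.4, 0.2, -0.1, 0.8])

dist = l1_distance(encoded_l, encoded_r)   # [0.1, 0.8, 0.2, 0.0]
drop = dropout(dist, rate=0.2)             # random 20% of units zeroed
```

Because dropout randomly masks components of the distance vector, the final similarity head cannot rely on any single embedding dimension, which is the usual regularization argument for placing it there.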

cerlymarco commented 4 years ago

Hi, thanks for your interest! I don't remember the exact details, but its role is precisely what you pointed out: regularization. Nonetheless, the best way to answer is to try it and compare the results with and without dropout (or other regularization techniques) on your own data. All the best