waleedka / hiddenlayer

Neural network graphs and training metrics for PyTorch, Tensorflow, and Keras.
MIT License

This may be a bug in the PyTorch visualization #57

Open zhhiyuan opened 4 years ago

zhhiyuan commented 4 years ago

I used this tool with PyTorch 1.1.0. While building a model, I found that if no dropout is placed between two fully connected layers, the visualized graph is wrong; after adding dropout, the error goes away. Is this a design flaw?

Here is the code. The classifier is a stack of Linear layers with ReLU activations, and I feed a single sample when visualizing. Without dropout, the input to the fully connected block is 1x400, but the two Linear+ReLU stages are drawn as a single folded node `Linear > Relu x2` (only one fully connected layer is displayed, multiplied by 2) with output 1x120, which does not match the model I designed. With dropout, each fully connected layer is drawn separately, and the result matches my model.

Without dropout:

```python
nn.Linear(16 * 5 * 5, 120), nn.ReLU(), nn.Linear(120, 84), nn.ReLU(), nn.Linear(84, 10)
```

Visualization output: 1x120

With dropout:

```python
nn.Linear(16 * 5 * 5, 120), nn.ReLU(), nn.Dropout(), nn.Linear(120, 84), nn.ReLU(), nn.Dropout(), nn.Linear(84, 10)
```

Visualization output: 1x84
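For what it's worth, the model itself does run all three fully connected layers regardless of dropout; the `x2` node appears to be a display artifact rather than a computation error. A minimal sketch in plain PyTorch (assuming the 400-feature input implied by `16 * 5 * 5` in the snippet above) that records the shape after every layer:

```python
import torch
import torch.nn as nn

# The classifier head from the issue, without dropout.
# 16 * 5 * 5 = 400 input features (e.g. a flattened LeNet conv output).
model = nn.Sequential(
    nn.Linear(16 * 5 * 5, 120), nn.ReLU(),
    nn.Linear(120, 84), nn.ReLU(),
    nn.Linear(84, 10),
)

x = torch.zeros(1, 16 * 5 * 5)

# Record the shape after every layer to show that all three Linear
# layers execute, even when the drawn graph folds two of them.
shapes = []
h = x
for layer in model:
    h = layer(h)
    shapes.append(tuple(h.shape))

print(shapes)
# [(1, 120), (1, 120), (1, 84), (1, 84), (1, 10)]
```

A likely explanation for the `Linear > Relu x2` node is hiddenlayer's default graph simplification, which folds repeated identical layer sequences into a single node with an `xN` multiplier; inserting `nn.Dropout()` breaks the repetition, so nothing gets folded. Passing an empty transform list to `hl.build_graph(model, x, transforms=[])` should skip that simplification (parameter name as documented in the hiddenlayer README; worth double-checking against your installed version).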