Open kswannet opened 1 year ago
I have exactly the same problem: with more complex models, the output becomes confusing and hard to comprehend.

Update: I've found that adding the option "hide_recursive_layers" to row_settings improved the output a lot in my case.
When using more than one nn.Sequential module, and both use the same activation function instance defined in __init__, torchinfo splits the single nn.Sequential into separate ones at each call of that activation function.
For example:
results in the following summary:
Even though secondNetwork is unused, changing one of the self.actFun calls to e.g. nn.LeakyReLU fixes the problem. Is this a torchinfo problem, or am I maybe doing something wrong here?
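Since the original snippet and its summary output are not reproduced above, here is a hypothetical minimal reconstruction of the setup described. The attribute names (`actFun`, `secondNetwork`) follow the report; `firstNetwork` and the layer sizes are assumptions:

```python
import torch
import torch.nn as nn

class TwoNets(nn.Module):
    def __init__(self):
        super().__init__()
        # A single activation instance shared by both Sequentials:
        # this reuse is what reportedly makes torchinfo split the
        # nn.Sequential at each occurrence in its summary.
        self.actFun = nn.Tanh()
        self.firstNetwork = nn.Sequential(
            nn.Linear(10, 10), self.actFun,
            nn.Linear(10, 10), self.actFun,
        )
        self.secondNetwork = nn.Sequential(
            nn.Linear(10, 10), self.actFun,
            nn.Linear(10, 1),
        )

    def forward(self, x):
        # secondNetwork is intentionally unused, as in the report.
        return self.firstNetwork(x)

model = TwoNets()
print(model)
```

Replacing one of the shared `self.actFun` references with a fresh instance (e.g. `nn.LeakyReLU()`) gives each position its own module, which matches the workaround described above.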