aai-institute / continuiti

Learning function operators with neural networks.
GNU Lesser General Public License v3.0

Add activation function after first layer #72

Closed by MLuchmann 6 months ago

MLuchmann commented 6 months ago

Bugfix: Add activation function after first layer

Description

No activation function was applied after the first layer of the residual network. In addition, the actual width of the network was width + 1 rather than width; it now correctly corresponds to the width parameter.
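For illustration, a minimal sketch of the fixed pattern (not continuiti's actual source; class and argument names here echo the discussion but are assumptions):

```python
import torch


class ResidualSketch(torch.nn.Module):
    def __init__(self, input_size: int, output_size: int, width: int, depth: int):
        super().__init__()
        # Hidden width is now exactly `width` (previously width + 1).
        self.first = torch.nn.Linear(input_size, width)
        self.hidden = torch.nn.ModuleList(
            torch.nn.Linear(width, width) for _ in range(depth - 1)
        )
        self.last = torch.nn.Linear(width, output_size)
        self.act = torch.nn.Tanh()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The fix: apply the activation after the first layer as well.
        x = self.act(self.first(x))
        for layer in self.hidden:
            # Residual connection around each hidden layer.
            x = x + self.act(layer(x))
        return self.last(x)
```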

To keep the test scripts consistent with the previous tests, the widths had to be adjusted by +1; without this adjustment, the tests failed.


samuelburbulla commented 6 months ago

The tests failed because the activation has to be applied even for depth = 1 (otherwise we lose the universal approximation property). I changed this and was therefore able to revert the changes to the tests. They now use smaller networks, but as they still pass, that's fine. I also added an assertion to make sure nobody uses the DeepResidualNetwork with depth = 0.
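A hedged continuation of the ResidualSketch above (placement and assertion message are assumptions, not continuiti's actual code), showing both follow-up changes:

```python
import torch

# Guard mirroring the assertion added in review (exact wording assumed):
class GuardedResidualSketch(ResidualSketch):
    def __init__(self, input_size: int, output_size: int, width: int, depth: int):
        assert depth >= 1, "DeepResidualNetwork requires depth >= 1"
        super().__init__(input_size, output_size, width, depth)

# Even at depth = 1 there are no hidden residual blocks, but the activation
# after the first layer keeps the model nonlinear, so the smaller test
# networks retain their approximation power.
net = GuardedResidualSketch(input_size=2, output_size=1, width=8, depth=1)
y = net(torch.rand(4, 2))  # forward pass: first -> tanh -> last
```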