Closed Thorlogy closed 5 months ago
The activation function was remembered in the network's state, but not in the current instance, and was therefore not applied after being changed. In other words: an activation was applied to the output neuron, but not the right one.
The output is now always updated. However, if you switch the activation function while switching layers, the input disappears.
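A minimal sketch of the state-vs-instance mismatch described above. All names here (`Network`, `setActivation*`, `output`) are hypothetical and not taken from the actual openroberta-lab code; the point is only to illustrate how a setter that updates the remembered state but not the copy the instance actually uses leaves the old activation in effect.

```javascript
// Two example activation functions.
const relu = (x) => Math.max(0, x);
const identity = (x) => x;

class Network {
  constructor() {
    // Activation remembered in the network state...
    this.state = { activation: identity };
    // ...and a separate copy used by the current instance.
    this.activation = this.state.activation;
  }

  // Buggy setter: updates the remembered state only, so the
  // instance keeps applying the old activation after a change.
  setActivationBuggy(fn) {
    this.state.activation = fn;
  }

  // Fixed setter: also updates the copy the instance uses,
  // mirroring "output is now always updated".
  setActivationFixed(fn) {
    this.state.activation = fn;
    this.activation = fn;
  }

  output(x) {
    return this.activation(x);
  }
}

const net = new Network();
net.setActivationBuggy(relu);
console.log(net.output(-2)); // still -2: the old (identity) activation runs

net.setActivationFixed(relu);
console.log(net.output(-2)); // 0: the new (relu) activation runs
```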
**Describe the bug**
It see
**To Reproduce**
Steps to reproduce the behavior:
**Expected behavior**

![activation](https://github.com/OpenRoberta/openroberta-lab/assets/19221359/1953872d-89c2-4b0c-863e-30d9bc50b958)
**Screenshots**

**Device information**