beckstev / MachineLearningSeminar

MachineLearningSeminar SS19 TU Dortmund
MIT License

A more sophisticated activation function - worth a shot! #12

Closed beckstev closed 5 years ago

beckstev commented 5 years ago

Mark S. referred us in our project to the following paper, which presents a learnable activation function (PReLU) and a more sophisticated way to initialize the weights. Fortunately, Keras already offers pre-implemented methods for PReLU and for the Kaiming/He-normal initialization.
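A minimal sketch of how the two pieces could be combined in Keras (layer sizes and the `tensorflow.keras` import path are just illustrative assumptions, not our actual architecture):

```python
from tensorflow.keras import layers, models

# Toy model: He/Kaiming weight initialization plus a learnable PReLU activation.
model = models.Sequential([
    layers.Dense(64, kernel_initializer='he_normal', input_shape=(100,)),
    layers.PReLU(),   # slope of the negative part is learned during training
    layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy')
```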

beckstev commented 5 years ago

Also found a good post that could be useful for the final report.

beckstev commented 5 years ago

Implementation difficulty: PReLU does not natively support variable input sizes.

Saved for later:

- https://www.researchgate.net/post/Proper_Weight_Initialization_for_ReLU_and_PReLU
- https://github.com/keras-team/keras/issues/7694

beckstev commented 5 years ago

Giving PReLU the axes over which the parameters are shared fixes the issue - Source: https://github.com/keras-team/keras/issues/7694#issuecomment-479059993

From the Keras docs:

shared_axes: the axes along which to share learnable parameters for the activation function. For example, if the incoming feature maps are from a 2D convolution with output shape (batch, height, width, channels), and you wish to share parameters across space so that each filter only has one set of parameters, set shared_axes=[1, 2].
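Sketch of the fix (filter counts and the `tensorflow.keras` import path are assumptions for illustration): with `shared_axes=[1, 2]` the PReLU parameters depend only on the channel axis, so the model accepts variable height/width inputs.

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(16, (3, 3), padding='same',
                  kernel_initializer='he_normal',
                  input_shape=(None, None, 1)),   # variable height and width
    layers.PReLU(shared_axes=[1, 2]),             # share alpha over the spatial axes
    layers.Conv2D(32, (3, 3), padding='same',
                  kernel_initializer='he_normal'),
    layers.PReLU(shared_axes=[1, 2]),
    layers.GlobalAveragePooling2D(),              # keeps the head size-independent
    layers.Dense(10, activation='softmax'),
])
```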

beckstev commented 5 years ago

See https://github.com/beckstev/MachineLearningSeminar/pull/22