It's now possible to correctly extract the weights from Caffe's PReLU layers and have them ready to be loaded onto a Keras model.
Caffe stores one parameter for each neuron in PReLU layers, and it's used for every activation in that neuron. Keras, instead, has one parameter for each activation.
So I extract one neuron's parameter and copy it once for each activation.
As a result, all activations of the same neuron share the same parameter value.
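The copy step above can be sketched roughly as follows (a minimal NumPy sketch, not the converter's actual code; the function name, the 1-D per-channel layout for the Caffe blob, and the channels-last Keras alpha shape are all assumptions):

```python
import numpy as np

def expand_prelu_weights(caffe_alpha, keras_alpha_shape):
    """Replicate Caffe's per-channel PReLU parameters into Keras's
    per-activation layout.

    caffe_alpha: 1-D array, one parameter per channel (assumed Caffe layout).
    keras_alpha_shape: shape of the Keras PReLU alpha, e.g. (H, W, C)
    assuming channels-last data. Every activation in a channel gets
    that channel's parameter value.
    """
    caffe_alpha = np.asarray(caffe_alpha)
    # Broadcasting matches the trailing (channel) axis, so each spatial
    # position receives a copy of its channel's parameter.
    return np.broadcast_to(caffe_alpha, keras_alpha_shape).copy()

# Example: 3 channels, 2x2 spatial activations
alpha = expand_prelu_weights([0.1, 0.2, 0.3], (2, 2, 3))
```

The resulting array can be passed to the Keras PReLU layer via `set_weights`, since every activation within a channel now carries the same value.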
Also, any unknown layer types encountered during conversion will be printed at the end if the --verbose flag is set.