EderSantana / seya

Bringing up some extra Cosmo to Keras.

Reason for not activating after convolutions in locnet #18

Closed antonmbk closed 8 years ago

antonmbk commented 8 years ago

In the spatial transformer example: is there any reason in particular why there are no activations after the convolutions in the locnet?

EderSantana commented 8 years ago

the only reason is to let the output be any real value, as suggested by the original paper. You may try to change it and force the locnet to have bounded outputs, which could limit its focus or attention properties. You would have to change the definition of your locnet.
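To illustrate the trade-off being discussed, here is a minimal numpy sketch (the names and numbers are hypothetical, not from seya's code): a free linear output layer can emit any real affine parameter, while wrapping it in `tanh` bounds every parameter to (-1, 1), restricting how far the predicted transform can scale or translate the sampling grid.

```python
import numpy as np

# Hypothetical locnet features and output weights, for illustration only.
features = np.array([[2.0, -3.0, 5.0]])
W = np.array([[1.5, 0.0],
              [0.0, 2.0],
              [0.5, -1.0]])

linear_theta = features @ W           # unbounded: any real value
bounded_theta = np.tanh(features @ W)  # squashed into (-1, 1)

print(linear_theta)   # parameters can be large in magnitude
print(bounded_theta)  # every parameter strictly inside (-1, 1)
```

With the bounded version the locnet can never zoom or shift beyond the range `tanh` allows, which is what "limit its focus or attention properties" refers to.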

antonmbk commented 8 years ago

I agree; however, my thought would be to apply activations after the convolutions so the net can discover meaningful features in the image, and then take a final linear transform (with no activation) at the end. Thoughts on this?
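The proposal above can be sketched as follows. This is a hypothetical numpy stand-in (not seya's actual locnet): two ReLU layers play the role of the activated conv stack, and the final layer is purely linear, with zero weights and an identity-transform bias, as the spatial transformer paper recommends for initialization.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

rng = np.random.default_rng(0)

# Hidden layers standing in for the conv stack, with ReLU activations.
W1 = rng.normal(scale=0.1, size=(64, 32))
b1 = np.zeros(32)
W2 = rng.normal(scale=0.1, size=(32, 16))
b2 = np.zeros(16)

# Final linear layer: zero weights, bias set to the identity affine
# transform [1, 0, 0, 0, 1, 0], so training starts from "do nothing".
W_out = np.zeros((16, 6))
b_out = np.array([1.0, 0.0, 0.0, 0.0, 1.0, 0.0])

def locnet(x):
    h = relu(x @ W1 + b1)      # activated feature extraction
    h = relu(h @ W2 + b2)
    return h @ W_out + b_out   # linear output: unbounded affine params

theta = locnet(rng.normal(size=(1, 64)))
print(theta)  # at initialization: the identity transform
```

Because `W_out` starts at zero, the network outputs the identity transform regardless of input at initialization, while the linear head still leaves the learned parameters unbounded, as in the original design.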

EderSantana commented 8 years ago

You can modify the network. I tried several variants and many didn't work well. I kept this one because it worked out well.