shimacos37 opened this issue 5 years ago
@shimacos37 Have you tried adding any activation functions? Apple's CVPR paper makes no mention of activation functions, so I guess that is why `activation` is `None` here. However, I cannot reproduce Apple's performance with this code. Most importantly, I find the scale of the losses is way off compared to Apple's experiments. In Apple's ML journal post on this work, the losses stay below 3, but my experiments have losses around 100, which is similar to what @carpedm20 shows here. Maybe the activation function is the missing piece.
In layers.py and in model.py, `activation` is `None` in most convolution layers. Is this OK? I think that gradients do not propagate properly.
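A minimal numpy sketch (not the repo's code, and the matrices here are made-up toy values) of why `activation=None` matters: stacked layers with no nonlinearity collapse into a single linear map, so depth adds no expressive power, while inserting a ReLU between them breaks that collapse.

```python
import numpy as np

# Toy 2-unit "layers": a 1x1 convolution is just a matrix multiply,
# so stacking conv layers with activation=None composes linearly.
W1 = np.array([[1.0, -1.0],
               [2.0,  1.0]])
W2 = np.array([[1.0,  1.0],
               [0.5, -1.0]])
x = np.array([1.0, 2.0])

h = W1 @ x                 # hidden pre-activation: [-1., 4.]
no_act = W2 @ h            # equals (W2 @ W1) @ x -- one big linear map
assert np.allclose(no_act, (W2 @ W1) @ x)

relu = lambda z: np.maximum(z, 0.0)
with_act = W2 @ relu(h)    # ReLU zeroes the negative entry, breaking linearity
print(no_act)              # differs from with_act
print(with_act)
```

Whether this also explains the loss-scale gap is a separate question, but it does show that with `activation=None` the refiner's residual blocks are effectively one linear convolution.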