muskedunder opened this issue 5 years ago
I think I found the source of the error. If I replace the global max-pooling layer with a regular max-pooling layer (with the same pool size as the global max-pooling layer) plus a flatten layer, it works as expected!
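For reference, here is a minimal Keras sketch of that workaround. The function name and the example pool size of (4, 4) are only illustrative assumptions; the pool size must match the spatial dimensions of the feature map produced by the preceding layer.

```python
from tensorflow.keras import layers

# Original layer that misbehaves after conversion to NNoM (per the report above):
#   x = layers.GlobalMaxPooling2D()(x)
#
# Workaround: a regular max-pooling layer whose pool size equals the spatial
# size of its input, followed by a flatten layer. This is functionally
# equivalent to global max pooling. The (4, 4) default is a placeholder --
# replace it with the actual height and width of the incoming feature map.
def global_maxpool_workaround(x, spatial_size=(4, 4)):
    x = layers.MaxPooling2D(pool_size=spatial_size)(x)
    x = layers.Flatten()(x)
    return x
```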
Hi, I think it might be a bug, but I can't replicate this issue. Both max pooling and global max pooling use the same max_pooling backend, so it could be something wrong in how the configuration for global pooling is set up. Would you mind providing your model so I can investigate a bit more?
Sure, what is the best way to share a .h5 file with you?
Edit: You can find it here: https://github.com/ColinNordin/depthwise-mnist-model
I followed your example `auto_test` with my own depthwise separable CNN. After a few epochs of training, my Keras model has an accuracy of 98.12% on the MNIST test set. After quantization, the NNoM model has an accuracy of 12.95%. I do expect some performance drop, but a drop this large makes me think it is a bug. Here is the model summary: