Closed mrocr closed 5 years ago
@SeguinBe have a look at this improved MobileNet v2, which you can download from here. It achieves roughly 70% fewer FLOPs.
The MobileNet experiment was more about adding a new network to check the flexibility of the framework. I did try training it, and it ran, but as I recall its performance was not on par with the bigger networks for my test case.
I skimmed the link you provided, and it is mainly about model compression. A compressed model cannot be used for training right away. What you want is to compress the final trained dhSegment model, which is out of the scope of this project.
@SeguinBe @solivr The goal is to find a neural network that can deliver ResNet-50's accuracy without being over-parameterized or heavy. A network that runs with both speed and accuracy, even on a CPU, is actually possible and has already been achieved.
Have a look at:
M2U-Net: 0.55M parameters, achieves over 96%, runs in 577 ms to 1.67 s on a CPU. https://arxiv.org/abs/1811.07738 https://github.com/laibe/M2U-Net
NAS-Unet: 0.8M parameters, achieves over 98%. https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8681706 https://github.com/tianbaochou/NasUnet
ESFNet: 0.18M parameters, 85.34 IoU, 2.513 GFLOPs, 100 to 143 FPS. https://arxiv.org/abs/1903.12337 https://github.com/mrluin/ESFNet-Pytorch
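For a rough intuition of where these parameter savings come from: networks like MobileNetV2 and ESFNet are built on depthwise separable convolutions rather than standard convolutions. This is a hedged back-of-the-envelope sketch (not code from any of the repos above), counting the weights of the two layer types:

```python
def conv_params(c_in, c_out, k):
    """Weights of a standard k x k convolution (bias omitted)."""
    return k * k * c_in * c_out

def sep_conv_params(c_in, c_out, k):
    """Depthwise k x k convolution followed by a 1x1 pointwise convolution."""
    return k * k * c_in + c_in * c_out

# Example: a 3x3 layer with 256 input and 256 output channels.
std = conv_params(256, 256, 3)      # 589,824 weights
sep = sep_conv_params(256, 256, 3)  # 67,840 weights
print(std, sep, round(std / sep, 1))  # roughly an 8.7x reduction
```

The same factor applies to the multiply-accumulate count per spatial position, which is why these architectures report order-of-magnitude smaller FLOPs than a ResNet-50 backbone.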
@solivr @SeguinBe Thank you for your hard work.
Could you merge MobileNet v2 into master and add a demo showing how to use it? Thank you.
Looking forward to your reply.