Why do you use nnp_convolution_output? It is optimized for training and large batch sizes. In mobile use cases the typical batch size is 1, and nnp_convolution_inference handles that case much better.
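For reference, here is a minimal sketch of calling nnp_convolution_inference for a single-image (batch size 1) convolution. It is written against a recent nnpack.h; older revisions of the API take fewer arguments (no workspace or activation parameters), so check the header you actually build against. The layer shape (16 -> 32 channels, 56x56 input, 3x3 kernel) is purely hypothetical.

```c
#include <stdio.h>
#include <stdlib.h>
#include <nnpack.h>
#include <pthreadpool.h>

int main(void) {
    /* nnp_initialize() fails if the CPU lacks the SIMD features NNPACK needs. */
    if (nnp_initialize() != nnp_status_success) {
        fprintf(stderr, "NNPACK is not supported on this CPU\n");
        return 1;
    }

    /* Hypothetical layer shape: 3x3 convolution, 16 -> 32 channels, 56x56 input,
       padding 1 and stride 1, so the output is also 56x56. */
    const size_t input_channels  = 16;
    const size_t output_channels = 32;
    const struct nnp_size input_size          = { .width = 56, .height = 56 };
    const struct nnp_padding input_padding    = { .top = 1, .right = 1, .bottom = 1, .left = 1 };
    const struct nnp_size kernel_size         = { .width = 3, .height = 3 };
    const struct nnp_size output_subsampling  = { .width = 1, .height = 1 };

    float* input  = calloc(input_channels * input_size.width * input_size.height, sizeof(float));
    float* kernel = calloc(output_channels * input_channels *
                           kernel_size.width * kernel_size.height, sizeof(float));
    float* bias   = calloc(output_channels, sizeof(float));
    float* output = calloc(output_channels * input_size.width * input_size.height, sizeof(float));

    /* pthreadpool_create(0) sizes the pool to the number of cores;
       passing NULL instead runs single-threaded. */
    pthreadpool_t threadpool = pthreadpool_create(0);

    enum nnp_status status = nnp_convolution_inference(
        nnp_convolution_algorithm_auto,            /* let NNPACK pick Winograd/FFT/im2col */
        nnp_convolution_transform_strategy_compute,
        input_channels, output_channels,
        input_size, input_padding, kernel_size, output_subsampling,
        input, kernel, bias, output,
        NULL, NULL,                                /* let NNPACK manage its own workspace */
        nnp_activation_identity, NULL,             /* no fused activation */
        threadpool,
        NULL);                                     /* no profiling */

    printf("nnp_convolution_inference status: %d\n", status);

    pthreadpool_destroy(threadpool);
    nnp_deinitialize();
    free(input); free(kernel); free(bias); free(output);
    return 0;
}
```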
I used nnp_convolution_inference, and the problem is the same. I tested it on an MSM8953. NNPACK's caffe_nnp_convolution_output is not fast.
Is there some switch in NNPACK to speed it up? I want to use NEON.