XiaoMi / mobile-ai-bench

Benchmarking Neural Network Inference on Mobile Devices
Apache License 2.0

SNPE problem about docs #28

Closed ysh329 closed 5 years ago

ysh329 commented 5 years ago

I see that SNPE also supports fixed-point inference on the CPU; its documentation says:

Running CPU Fixed Point Runtime: The CPU Fixed Point Runtime requires a quantized DLC and cannot convert a non-quantized DLC automatically. The quantization parameters in the DLC will be used for each output layer unless the layer is constrained to use the same input and output quantization parameters for speed and accuracy.

I don't understand the second half of that sentence: "The quantization parameters in the DLC will be used for each output layer unless the layer is constrained to use the same input and output quantization parameters for speed and accuracy."

  1. Does "output layer" here mean the final output layer of the network, or the output of each layer?
  2. I can't make sense of the second half: "the layer is constrained to use the same input and output quantization parameters for speed and accuracy."
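For background on what "quantization parameters" refers to here: fixed-point runtimes like SNPE's store an encoding per tensor, typically a scale and an offset (zero-point) derived from the tensor's observed min/max range. Below is a minimal generic sketch of how such parameters are derived and used; this is an illustration of the general technique, not SNPE's actual implementation:

```python
# Generic sketch of 8-bit asymmetric quantization parameters
# (scale + zero-point derived from a tensor's min/max range).
# Illustrative only -- NOT SNPE's exact encoding code.

def quant_params(xmin, xmax, num_bits=8):
    """Derive scale and zero-point so the float range [xmin, xmax]
    maps onto the integer range [0, 2**num_bits - 1]."""
    qmax = (1 << num_bits) - 1
    # The representable range must include 0.0 exactly.
    xmin = min(xmin, 0.0)
    xmax = max(xmax, 0.0)
    scale = (xmax - xmin) / qmax
    zero_point = round(-xmin / scale) if scale else 0
    return scale, zero_point

def quantize(x, scale, zero_point):
    q = round(x / scale) + zero_point
    return max(0, min(255, q))  # clamp to uint8

def dequantize(q, scale, zero_point):
    return (q - zero_point) * scale

# Example: an activation tensor observed in the range [-1.0, 3.0]
scale, zp = quant_params(-1.0, 3.0)
q = quantize(1.5, scale, zp)
print(scale, zp, q, dequantize(q, scale, zp))
```

On question 2, a plausible reading: ops that only rearrange or select values (e.g. reshape, concat, max-pool) can be constrained so the output tensor reuses the input tensor's encoding, which lets the runtime skip a requantization pass (speed) and avoid an extra rounding step (accuracy). This is a general property of fixed-point inference, not something specific to SNPE's docs.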
ysh329 commented 5 years ago

I also don't understand this sentence:

As well, further optimizations present on the GPU/DSP may cause layer times to be mis-attributed, in the case of neuron conv-neuron or fc-neuron pairs.

lee-bin commented 5 years ago

Please refer to https://github.com/XiaoMi/mobile-ai-bench/issues/31#issuecomment-477106273