[Closed] dr-venkman closed this issue 4 years ago
The CPU backend uses `cker` in `compute/cker`, which is a port of the TensorFlow Lite kernels. If the current `cker` is not enough, you should port the code you need from TensorFlow into `cker`.
Don't mind `externals/tensorflow/tensorflow/lite/kernels/internal/common.h`. We don't maintain that file, and onert has no dependency on it.
Thanks for the replies. I will then include a separate version of `QuantizationHelpers.h` under the `cker` namespace, which should support fixed-point arithmetic. Thanks again.
I am currently adding 8-bit quantization support for the CPU backend. This requires specific quantization functions that handle fixed-point arithmetic, such as saturating 32-bit multiplication and rounding division.
Currently, these utility functions are duplicated in two places in onert:

- `gemmlowp`: specifically, `externals/tensorflow/tensorflow/lite/kernels/internal/common.h`
- `compiler/mir-interpreter/src/ops/QuantizationHelpers.h`
I am not sure which version of these headers I should use to implement quantization in the backend. Could you please let me know, along with the specific build options to modify? Also, are there any plans to consolidate all utility functions in one location? Thanks in advance.