Open seanshpark opened 4 years ago
Related #3457
ArgMax, ArgMax_U : `luci_eval_verifier.py`
FullyConnected_002 :
LeakyRelu_000 :
LocalResponseNormalization_000 :
- ~~because Param not passed, output value not match.~~
SpaceToDepth_000 :
- ~~Param not passed.~~
FullyConnected_U8, Softmax_U8 :
ELU, IF :
Split, Unpack : `luci_eval_verifier.py`
DepthwiseConv2D_U8_001 : `ValueError: QuantizationParam has 4 scale values (only 1 is supported).`
LeakyRelu_000 : output value does not match; the TFLite implementation changed in 1.15.
ELU, IF : not supported in TensorFlow v1.13; ELU was added in 1.14, If in 1.15.
DepthwiseConv2D_U8_001 : channel-wise quantize example. Current TFLite raises `ValueError: QuantizationParam has 4 scale values (only 1 is supported).`
FYI, upgrading TF to 2.3.0-rc0 can resolve this issue, but the test will still fail because luci-interpreter currently does not support per-channel quantized DepthwiseConv2D kernel.
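For reference, the per-tensor-only restriction behind this error can be sketched as a simple validation step. This is a hypothetical reconstruction (the function name is made up; only the error message is the one quoted above), not the actual TFLite source:

```python
def check_quantization_param(scales):
    """Hypothetical sketch: reject channel-wise quantization the way
    per-tensor-only TFLite tooling does.

    `scales` is the list of scale values in a tensor's QuantizationParam;
    per-tensor quantization has exactly one scale, while channel-wise
    quantization carries one scale per channel (4 here for DepthwiseConv2D).
    """
    if len(scales) != 1:
        raise ValueError(
            "QuantizationParam has %d scale values (only 1 is supported)."
            % len(scales))
    return scales[0]
```

With the four per-channel scales of `DepthwiseConv2D_U8_001`, e.g. `check_quantization_param([0.1, 0.2, 0.3, 0.4])`, this raises an error of exactly the shape reported above.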
#3561 may help to resolve the above issues.
Everything works okay except `Mean`. The U8 kernel has an error of around 1 (tf2.1.0 vs tf2.3.0-rc0). This comes from `TFLite::round` vs `std::round`, `std::min`, `std::max` (the only difference in the code).
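An off-by-one of this kind typically appears when two implementations round exact halves differently. The following is a generic Python illustration of that effect (not the actual TFLite or luci-interpreter code; values and names are made up):

```python
import math

def round_away_from_zero(x):
    # C's std::round semantics: halves round away from zero,
    # e.g. 2.5 -> 3 and -2.5 -> -3
    return math.floor(x + 0.5) if x >= 0 else math.ceil(x - 0.5)

# Python's built-in round() uses round-half-to-even ("banker's rounding"),
# e.g. 2.5 -> 2. Quantizing the same real value under the two conventions
# can therefore differ by exactly 1 in the integer domain.
real_value, scale, zero_point = 1.25, 0.5, 0
x = real_value / scale + zero_point      # 2.5, an exact half
q_a = round_away_from_zero(x)            # 3
q_b = round(x)                           # 2
```

When a reference implementation and a kernel under test disagree only in this half-way rule, their quantized outputs match everywhere except on values that land exactly on a half, which shows up as sporadic "error around 1" mismatches.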
In TFLite v2.3.0-rc0, the only channel-wise quantized input types are `int8` and `int16`: https://github.com/tensorflow/tensorflow/blob/99fea8da0d98fb271b60b58cfa5755f2bd430079/tensorflow/lite/kernels/kernel_util.cc#L71
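To make the CWQ/per-tensor distinction concrete, here is a minimal sketch of symmetric channel-wise (per-output-channel) int8 quantization, the general scheme TFLite v2.3 accepts. This is an illustration in plain Python, not the actual converter code:

```python
# Hypothetical sketch: symmetric per-channel int8 quantization.
def quantize_per_channel(weights):
    """weights: list of channels, each a list of floats.
    Returns (quantized int values, one scale per channel)."""
    quantized, scales = [], []
    for channel in weights:
        # one scale per channel, symmetric int8 range [-127, 127]
        scale = max(abs(v) for v in channel) / 127.0
        scales.append(scale)
        quantized.append(
            [max(-127, min(127, round(v / scale))) for v in channel])
    return quantized, scales

w = [[0.5, -1.0], [0.02, 0.01]]
q, scales = quantize_per_channel(w)
# each channel keeps its own scale, so len(scales) == number of channels --
# which is why tooling that assumes a single scale per tensor raises the
# "only 1 is supported" ValueError quoted earlier in this thread
```

Note how the small-magnitude second channel keeps full int8 resolution thanks to its own scale; with a single per-tensor scale those values would collapse to 1 or 2 quantization steps.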
We do not have a plan to bring kernels for uint8 channel-wise quantized models, so let's exclude DepthwiseConv2D_U8_001 from the test list. It would be better if you leave a comment in test.lst for other people, for example:
#addeval(DepthwiseConv2D_U8_001) # UINT8 quantized model, not supported by luci-interpreter
So far, the kernels below are not tested in luci-value-test:
Result values not matching due to version mismatch
- Mean_U8_000
Kernel Function spec not matching
- DepthwiseConv2D_U8_001
Kernel not supported yet
- FullyConnected_U8_000 -> hybrid example, so this needs to be modified.
- Softmax_U8_000
Only version up: #3785 Done.
Uint8 with CWQ kernel addition: will discuss later on.
More Uint8 LWQ kernels: `Softmax` will be ready, but `FullyConnected` is hybrid, so a new res will be added and the current `000` will be commented out.
Close. Uint8 with CWQ kernel addition will be done later.
These items failed where I could see kernel files in the `compiler/luci-interpreter/src/kernels` folder. For quantized: kernel not supported yet.