Samsung / ONE

On-device Neural Engine

[luci-value-tests] failed items for existing kernels #3458

Open seanshpark opened 4 years ago

seanshpark commented 4 years ago

These items failed even though kernel files exist in the compiler/luci-interpreter/src/kernels folder.

For quantized

Kernel not supported yet

seanshpark commented 4 years ago

Related #3457

struss commented 4 years ago

ArgMax, ArgMax_U :

FullyConnected_002 :

LeakyRelu_000 :

LocalResponseNormalization_000 :

~- Because the Param is not passed, the output values do not match.~

SpaceToDepth_000 :

~- Param not passed.~

FullyConnected_U8, Softmax_U8 :

ELU, IF :

Split, Unpack :

DepthwiseConv2D_U8_001 :

jinevening commented 4 years ago

LeakyRelu_000 : output values do not match. The TFLite implementation changed in 1.15.

ELU, IF : not supported in TensorFlow v1.13. ELU was added in 1.14, If in 1.15.

#3561 may help resolve the above issues.

DepthwiseConv2D_U8_001 : channel-wise quantization example. The current TFLite version raises ValueError: QuantizationParam has 4 scale values (only 1 is supported).

FYI, upgrading TF to 2.3.0-rc0 can resolve this issue, but the test will still fail because luci-interpreter currently does not support the per-channel quantized DepthwiseConv2D kernel.
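
For context, a minimal sketch of the difference the error message points at, assuming an NHWC uint8 filter; this is illustrative only, not luci-interpreter or TFLite code. Per-tensor (layer-wise) quantization carries a single scale/zero-point for the whole tensor, while channel-wise quantization carries one per output channel, which is where the "4 scale values" come from.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative only: dequantize a uint8 depthwise filter of shape [1, H, W, C].

// Per-tensor (layer-wise) quantization: one scale/zero-point for the whole
// tensor -- this is what luci-interpreter's U8 kernels currently expect.
std::vector<float> dequantize_per_tensor(const std::vector<uint8_t> &w, float scale,
                                         int32_t zero_point)
{
  std::vector<float> out(w.size());
  for (std::size_t i = 0; i < w.size(); ++i)
    out[i] = scale * (static_cast<int32_t>(w[i]) - zero_point);
  return out;
}

// Channel-wise quantization: one scale/zero-point per output channel
// (4 channels -> 4 scale values, hence the converter error above).
std::vector<float> dequantize_per_channel(const std::vector<uint8_t> &w,
                                          const std::vector<float> &scales,
                                          const std::vector<int32_t> &zero_points,
                                          std::size_t channels)
{
  std::vector<float> out(w.size());
  for (std::size_t i = 0; i < w.size(); ++i)
  {
    const std::size_t c = i % channels; // channel is innermost in NHWC layout
    out[i] = scales[c] * (static_cast<int32_t>(w[i]) - zero_points[c]);
  }
  return out;
}
```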

struss commented 4 years ago

> LeakyRelu_000 : output values do not match. The TFLite implementation changed in 1.15.
>
> ELU, IF : not supported in TensorFlow v1.13. ELU was added in 1.14, If in 1.15.
>
> #3561 may help resolve the above issues.

Everything works okay, but the Mean U8 kernel has an error of around 1 (tf 2.1.0 vs tf 2.3.0-rc0). It comes from the difference between TFLite's rounding helper and std::round/std::min/std::max, which is the only difference in the code.
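
For reference, a minimal sketch (not the actual TFLite Mean kernel; the requantization formula here is simplified) of how two rounding conventions that agree on most inputs can still produce uint8 outputs that differ by 1 when the scaled value lands exactly on a .5 boundary, which could explain an off-by-one difference like the one observed:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <cstdio>

// Round-half-up rounding, as used by some kernel implementations.
static float round_half_up(float x) { return std::floor(x + 0.5f); }

// Requantize a float value to uint8 using the given rounding helper,
// then clamp to the uint8 range.
template <typename RoundFn>
uint8_t requantize(float value, float scale, int32_t zero_point, RoundFn round_fn)
{
  const int32_t q = static_cast<int32_t>(round_fn(value / scale)) + zero_point;
  return static_cast<uint8_t>(std::min<int32_t>(255, std::max<int32_t>(0, q)));
}

int main()
{
  // -0.3125 / 0.125 is exactly -2.5: round_half_up gives -2, std::round gives -3,
  // so the two quantized outputs differ by exactly 1 (126 vs 125).
  const float value = -0.3125f, scale = 0.125f;
  const int32_t zero_point = 128;
  std::printf("half-up: %d, std::round: %d\n",
              static_cast<int>(requantize(value, scale, zero_point, round_half_up)),
              static_cast<int>(requantize(value, scale, zero_point,
                                          [](float x) { return std::round(x); })));
  return 0;
}
```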

struss commented 4 years ago

> DepthwiseConv2D_U8_001 : channel-wise quantization example. The current TFLite version raises ValueError: QuantizationParam has 4 scale values (only 1 is supported).
>
> FYI, upgrading TF to 2.3.0-rc0 can resolve this issue, but the test will still fail because luci-interpreter currently does not support the per-channel quantized DepthwiseConv2D kernel.

In TFLite v2.3.0-rc0, channel-wise quantization only accepts int8 or int16 input types. https://github.com/tensorflow/tensorflow/blob/99fea8da0d98fb271b60b58cfa5755f2bd430079/tensorflow/lite/kernels/kernel_util.cc#L71
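
A simplified sketch of the kind of check described (not the actual code at the linked kernel_util.cc; the enum and function names are made up for illustration): a single scale value means per-tensor quantization and is accepted for any quantized type, while multiple scale values, i.e. channel-wise quantization, are accepted only for int8/int16.

```cpp
#include <cstddef>

// Illustrative tensor element types.
enum class TensorType
{
  U8,
  S8,
  S16
};

// Channel-wise (per-axis) quantization is only allowed for S8/S16 tensors;
// a single scale (per-tensor quantization) is allowed for any quantized type.
bool is_quantization_supported(TensorType type, std::size_t num_scales)
{
  if (num_scales <= 1)
    return true; // per-tensor (layer-wise) quantization
  return type == TensorType::S8 || type == TensorType::S16;
}
```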

jinevening commented 4 years ago

> In TFLite v2.3.0-rc0, channel-wise quantization only accepts int8 or int16 input types. https://github.com/tensorflow/tensorflow/blob/99fea8da0d98fb271b60b58cfa5755f2bd430079/tensorflow/lite/kernels/kernel_util.cc#L71

We do not have a plan to add kernels for uint8 channel-wise quantized models, so let's exclude DepthwiseConv2D_U8_001 from the test list. It would be better if you leave a comment in test.lst for other people, for example:

#addeval(DepthwiseConv2D_U8_001) # UINT8 quantized model, not supported by luci-interpreter

struss commented 4 years ago

So far, the kernels below are not tested in luci-value-test.

Result values not matching due to version mismatch

- Mean_U8_000

Kernel Function spec not matching

- DepthwiseConv2D_U8_001

Kernel not supported yet

- FullyConnected_U8_000 -> hybrid quantization example, so it needs to be modified (see the sketch below).
- Softmax_U8_000
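
Since the hybrid nature of FullyConnected_U8_000 is what sets it apart from the other U8 tests, here is a minimal sketch of what "hybrid" means in this context (illustrative only, not luci-interpreter code): the weights are quantized to uint8 with a single scale/zero-point while activations stay float, so the kernel dequantizes the weights on the fly and accumulates in float.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative hybrid FullyConnected (no bias, no activation):
// input and output stay float, only the weights are uint8.
std::vector<float> hybrid_fully_connected(const std::vector<float> &input,     // [batch, in_dim]
                                          const std::vector<uint8_t> &weights, // [out_dim, in_dim]
                                          float weight_scale, int32_t weight_zero_point,
                                          std::size_t batch, std::size_t in_dim,
                                          std::size_t out_dim)
{
  std::vector<float> output(batch * out_dim, 0.0f);
  for (std::size_t b = 0; b < batch; ++b)
  {
    for (std::size_t o = 0; o < out_dim; ++o)
    {
      float acc = 0.0f;
      for (std::size_t i = 0; i < in_dim; ++i)
      {
        // Dequantize each weight on the fly; activations remain float.
        const float w = weight_scale *
                        (static_cast<int32_t>(weights[o * in_dim + i]) - weight_zero_point);
        acc += input[b * in_dim + i] * w;
      }
      output[b * out_dim + o] = acc;
    }
  }
  return output;
}
```
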
struss commented 4 years ago

- Version upgrade only: #3785, done.
- Uint8 CWQ (channel-wise quantized) kernel addition: will be discussed later.
- More Uint8 LWQ (layer-wise quantized) kernels: Softmax will be ready, but FullyConnected is hybrid, so a new res will be added and the current 000 will be commented out.

struss commented 3 years ago

Closing. Uint8 CWQ kernel addition will be done later.