google-research / google-research

SVDF layer implementation incompatible with SVDF operator from TFLite #1950

Open · VictorDominguite opened this issue 8 months ago

VictorDominguite commented 8 months ago

The current implementation of the SVDF layer doesn’t get fused as an SVDF operator when converted to TFLite. I was wondering if there is something that can be done to this implementation so that it gets recognized as an SVDF op by TFLite.
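For context, one way to see what the converter emits is to list the operators in the resulting flatbuffer. A minimal sketch, assuming TF >= 2.7 for the Analyzer API (the Dense stand-in below is illustrative; in practice `model` would be the kws_streaming model containing the SVDF layer):

```python
import tensorflow as tf

# Illustrative stand-in; replace with the kws_streaming model under test.
inputs = tf.keras.Input(shape=(16,))
outputs = tf.keras.layers.Dense(8)(inputs)
model = tf.keras.Model(inputs, outputs)

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Prints every operator in the flatbuffer; a fused model would show an SVDF
# op, while the kws_streaming layer lowers to primitive builtins instead.
tf.lite.experimental.Analyzer.analyze(model_content=tflite_model)
```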

kismeter commented 8 months ago

@VictorDominguite I have a similar question as well. Do you have any findings or progress to share?

VictorDominguite commented 8 months ago

> @VictorDominguite I have a similar question as well. Do you have any findings or progress to share?

Hi @kismeter, unfortunately, still no progress.

rybakov commented 7 months ago

In kws_streaming/layers, including svdf, I avoid using any custom TFLite operations, including the fused TFLite SVDF. There are several reasons for that:

1. Benchmarks of svdf were not that bad in comparison to the fused SVDF.
2. The fused SVDF works only with TFLite, but we need to run SVDF on TPU, GPU, etc. svdf does not use any special op, so it can be executed/compiled for any hardware.
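A hedged sketch of that decomposition (a minimal illustration, not the kws_streaming API; names like `SimpleSvdf` and `memory_size` are mine): the SVDF is a rank-1 feature projection followed by a per-unit weighted sum over time, so only generic ops appear in the graph:

```python
import tensorflow as tf

class SimpleSvdf(tf.keras.layers.Layer):
    """Rank-1 SVDF built only from generic ops (matmul/pad/stack/reduce_sum)."""

    def __init__(self, units, memory_size, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.memory_size = memory_size

    def build(self, input_shape):
        feature_dim = int(input_shape[-1])
        # Rank-1 factorization of the SVDF filter: one feature filter and
        # one time filter per unit.
        self.w_feature = self.add_weight(
            name='w_feature', shape=(feature_dim, self.units))
        self.w_time = self.add_weight(
            name='w_time', shape=(self.memory_size, self.units))

    def call(self, inputs):  # inputs: [batch, time, feature_dim]
        time_steps = inputs.shape[1]  # assumes a static time dimension
        # Feature stage: project every frame onto the feature filters.
        acts = tf.matmul(inputs, self.w_feature)  # [batch, time, units]
        # Time stage: weighted sum over the last `memory_size` frames.
        padded = tf.pad(acts, [[0, 0], [self.memory_size - 1, 0], [0, 0]])
        frames = tf.stack(
            [padded[:, t:t + time_steps, :] for t in range(self.memory_size)],
            axis=2)  # [batch, time, memory_size, units]
        return tf.reduce_sum(frames * self.w_time, axis=2)
```

Because the graph contains only matmul/pad/stack/sum, the converter lowers it to standard builtins, which is the portability argument above.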

kismeter commented 7 months ago

@rybakov So you mean the svdf in kws_streaming/layers will not be fused into a TFLite SVDF op? Is it possible to fuse it into the TFLite or TFLite Micro SVDF?

kismeter commented 2 months ago

> In kws_streaming/layers, including svdf, I avoid using any custom TFLite operations, including the fused TFLite SVDF. There are several reasons for that:
>
> 1. Benchmarks of svdf were not that bad in comparison to the fused SVDF.
> 2. The fused SVDF works only with TFLite, but we need to run SVDF on TPU, GPU, etc. svdf does not use any special op, so it can be executed/compiled for any hardware.

@rybakov Could you have a look at this running on an MCU? When using quantize_opt_for_size_tflite_stream_state_external/stream_state_external.tflite I hit the error "Input type: FLOAT32 with filter type : INT8 not supported." When I try to change saved_model_to_tflite to export the TFLite model, the inference_input_type can't be changed to tf.int16 or tf.int8.
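A quick way to confirm what the exported flatbuffer expects at its inputs (path is illustrative); as far as I can tell, the size-optimized (dynamic-range) path keeps float32 inputs alongside int8 weights, which is the combination the Micro kernels reject:

```python
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path='stream_state_external.tflite')
interpreter.allocate_tensors()
# float32 dtypes here, together with int8 weights, match the reported error.
for detail in interpreter.get_input_details():
    print(detail['name'], detail['dtype'])
```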

ctwillson commented 2 weeks ago

> > In kws_streaming/layers, including svdf, I avoid using any custom TFLite operations, including the fused TFLite SVDF. There are several reasons for that:
> >
> > 1. Benchmarks of svdf were not that bad in comparison to the fused SVDF.
> > 2. The fused SVDF works only with TFLite, but we need to run SVDF on TPU, GPU, etc. svdf does not use any special op, so it can be executed/compiled for any hardware.

> @rybakov Could you have a look at this running on an MCU? When using quantize_opt_for_size_tflite_stream_state_external/stream_state_external.tflite I hit the error "Input type: FLOAT32 with filter type : INT8 not supported." When I try to change saved_model_to_tflite to export the TFLite model, the inference_input_type can't be changed to tf.int16 or tf.int8.

Did you solve it? Any ideas?

kismeter commented 2 weeks ago

> > > In kws_streaming/layers, including svdf, I avoid using any custom TFLite operations, including the fused TFLite SVDF. There are several reasons for that:
> > >
> > > 1. Benchmarks of svdf were not that bad in comparison to the fused SVDF.
> > > 2. The fused SVDF works only with TFLite, but we need to run SVDF on TPU, GPU, etc. svdf does not use any special op, so it can be executed/compiled for any hardware.
>
> > @rybakov Could you have a look at this running on an MCU? When using quantize_opt_for_size_tflite_stream_state_external/stream_state_external.tflite I hit the error "Input type: FLOAT32 with filter type : INT8 not supported." When I try to change saved_model_to_tflite to export the TFLite model, the inference_input_type can't be changed to tf.int16 or tf.int8.
>
> Did you solve it? Any ideas?

No, I haven't solved it.

ctwillson commented 2 weeks ago

@kismeter Maybe you need to implement the "representative_dataset" function, and then you can set inference_input_type to tf.int8. BTW, can you provide the requirements.txt? I cannot run the latest kws code on my TensorFlow version.
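Roughly this pattern, as a minimal sketch (paths and input shape are illustrative; the kws_streaming saved_model_to_tflite wrapper may wire these options differently):

```python
import numpy as np
import tensorflow as tf

def representative_dataset():
    # Yield a handful of realistic inputs so the converter can calibrate
    # the quantization ranges (shape here is illustrative).
    for _ in range(100):
        yield [np.random.rand(1, 16).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model('/path/to/saved_model')
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force integer-only kernels; conversion fails on any op without an int8 kernel.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_model = converter.convert()
```

With TFLITE_BUILTINS_INT8 forced, the converter either emits integer-only kernels or fails loudly on the op that lacks an int8 kernel, which helps narrow down errors like the one above.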

kismeter commented 2 weeks ago

> @kismeter Maybe you need to implement the "representative_dataset" function, and then you can set inference_input_type to tf.int8. BTW, can you provide the requirements.txt? I cannot run the latest kws code on my TensorFlow version.

For sure I've implemented the "representative_dataset" function, but that still doesn't solve the issue. You should use Docker with the specific TensorFlow version; the latest TF can't run kws.

ctwillson commented 2 weeks ago

> > @kismeter Maybe you need to implement the "representative_dataset" function, and then you can set inference_input_type to tf.int8. BTW, can you provide the requirements.txt? I cannot run the latest kws code on my TensorFlow version.
>
> For sure I've implemented the "representative_dataset" function, but that still doesn't solve the issue. You should use Docker with the specific TensorFlow version; the latest TF can't run kws.

It appears that somebody has already quantized the model to a fully int8 model, per this issue. Could you provide the Docker image that can run kws? Thanks.