Open VictorDominguite opened 8 months ago
@VictorDominguite I have a similar question as well. Do you have any findings or progress to share?
Hi @kismeter, unfortunately, still no progress.
In kws_streaming/layers (including svdf) I avoid using any custom TFLite operations, including the fused TFLite SVDF. There are several reasons for that: 1. Benchmarks of svdf were not that bad in comparison to the fused SVDF. 2. The fused SVDF works only with TFLite, but we need to run SVDF on TPU, GPU, etc.; svdf does not use any special op, so it can be executed or compiled for any hardware.
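For context on why no special op is needed: an SVDF layer is a rank-1 factorization of a time-convolution into a feature filter (applied per frame) and a time filter (applied over a buffer of past activations). A minimal NumPy sketch of one streaming step, with all sizes and names hypothetical (this is not the kws_streaming implementation, just the same computation in standard ops):

```python
import numpy as np

# Hypothetical sizes for illustration only.
input_dim, units, memory = 4, 3, 5

rng = np.random.default_rng(0)
feature_filters = rng.standard_normal((units, input_dim))  # per-unit feature filter
time_filters = rng.standard_normal((units, memory))        # per-unit time filter

# Ring buffer of the last `memory` feature activations (the streaming state).
state = np.zeros((units, memory))

def svdf_step(x, state):
    """One streaming SVDF step: feature projection, then filtering over time."""
    a = feature_filters @ x                      # stage 1: project frame -> (units,)
    state = np.roll(state, -1, axis=1)           # shift the time buffer left
    state[:, -1] = a                             # append the newest activation
    out = np.sum(state * time_filters, axis=1)   # stage 2: filter over time
    return out, state

# Feed a few frames through the streaming layer.
for t in range(8):
    out, state = svdf_step(rng.standard_normal(input_dim), state)
```

Because every step is a matmul, a shift, and an elementwise multiply-sum, any backend that supports those basic ops can run it.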
@rybakov So you mean the svdf in kws_streaming/layers will not be fused into a TFLite SVDF op? Is it possible to fuse it into the TFLite or TFLite Micro SVDF op?
@rybakov Could you have a look at this when running on an MCU? When using quantize_opt_for_size_tflite_stream_state_external/stream_state_external.tflite I hit the error "Input type: FLOAT32 with filter type : INT8 not supported." When I try to change saved_model_to_tflite to export the tflite model, the inference_input_type can't be changed to tf.int16 or tf.int8.
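That error usually means the conversion produced a hybrid (weights-only int8, float32 activations) model rather than a full-integer one, which the MCU kernels reject. A minimal sketch of a full-int8 conversion using the standard tf.lite.TFLiteConverter API; the function name, saved_model_dir, and representative_dataset are placeholders, and this is not the repo's saved_model_to_tflite code:

```python
def convert_to_int8(saved_model_dir, representative_dataset):
    """Sketch: convert a SavedModel to a fully int8-quantized TFLite model."""
    import tensorflow as tf  # imported here so the sketch is self-contained

    converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
    converter.optimizations = [tf.lite.Optimize.DEFAULT]
    # Generator yielding lists of sample input arrays; required for full-int8
    # so the converter can calibrate activation ranges.
    converter.representative_dataset = representative_dataset
    # Restrict to int8 builtins so no float fallback ops remain in the graph.
    converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
    converter.inference_input_type = tf.int8
    converter.inference_output_type = tf.int8
    return converter.convert()  # returns the serialized flatbuffer bytes
```

Without both the representative_dataset and the TFLITE_BUILTINS_INT8 restriction, setting inference_input_type to tf.int8 is expected to fail or be ignored.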
Did you solve it? Any ideas for this?
No, I haven't solved it.
@kismeter Maybe you need to implement the "representative_dataset" function, and then you can set inference_input_type to tf.int8. BTW, can you provide a requirements.txt? I cannot run the latest kws code on my TensorFlow version.
For sure I've implemented the "representative_dataset" function, but that still doesn't solve the issue. You should use Docker with the specific TensorFlow version; the latest TF can't run kws.
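A sketch of the Docker approach (the TF version 2.4.1 here is only an example; use whichever version the kws_streaming repo was last verified against, and the training-script invocation is indicative, not exact):

```shell
# Pin a specific TensorFlow version via the official Docker images.
docker pull tensorflow/tensorflow:2.4.1
# Mount the checkout and work inside the container.
docker run -it -v "$PWD":/workspace -w /workspace tensorflow/tensorflow:2.4.1 bash
# Inside the container:
#   pip install -r requirements.txt   # if the repo provides one
#   python -m kws_streaming.train.model_train_eval --help
```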
It appears from this issue that somebody has already quantized the model to a fully int8 model. Could you provide the Docker image that can run kws? Thanks.
The current implementation of the SVDF layer doesn't get fused into an SVDF operator when converted to TFLite. I was wondering if there is something that can be done to this implementation so that it gets recognized as an SVDF op by the TFLite converter.