nyadla-sys / whisper.tflite

An optimized TFLite port of OpenAI's Whisper for efficient offline inference on edge devices

Could you share how to produce a single DTLN TFLite model? #39

Closed · saraphinesER closed this issue 1 month ago

saraphinesER commented 2 months ago

Hello @nyadla-sys, I noticed that the DTLN TFLite model you provide (whisper.tflite/models/dtln_quantized.tflite) is a single full-integer-quantized TFLite model, whereas, as far as I know, breizhn/DTLN splits the network into two separate TFLite models (convert_weights_to_tf_lite.py). I am wondering how you combined them into one model. Could you please share the script you used to produce your dtln_quantized.tflite? That would be very helpful to me, and I would really appreciate it. Thanks.

saraphinesER commented 1 month ago

Thanks to @nyadla-sys for kindly correcting my misunderstanding of the quantized DTLN TFLite model. It is only a TFLite conversion of stage 1 ('model_1'), not a conversion of the two models combined.

nyadla-sys commented 1 month ago

> Thanks to @nyadla-sys for kindly correcting my misunderstanding of the quantized DTLN TFLite model. It is only a TFLite conversion of stage 1 ('model_1'), not a conversion of the two models combined.

Correct.
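For reference, a minimal sketch of what a stage-1-only, full-integer conversion could look like. The file name, the Keras load, and the random calibration data are assumptions for illustration; this is not necessarily the script used to produce dtln_quantized.tflite.

```python
# Hypothetical sketch: convert only DTLN stage 1 ("model_1") to a
# full-integer quantized TFLite file. Paths and calibration data are
# placeholders, not the actual conversion script used in this repo.
import numpy as np
import tensorflow as tf

# Load stage 1 of DTLN as a Keras model (path is hypothetical).
model_1 = tf.keras.models.load_model("dtln_model_1.h5", compile=False)

converter = tf.lite.TFLiteConverter.from_keras_model(model_1)
converter.optimizations = [tf.lite.Optimize.DEFAULT]

def representative_dataset():
    # Calibration samples for full-integer quantization. Random data is
    # only a stand-in; real (noisy) audio features should be fed here so
    # the quantization ranges are realistic.
    for _ in range(100):
        yield [np.random.randn(*[d or 1 for d in inp.shape]).astype(np.float32)
               for inp in model_1.inputs]

converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

with open("dtln_quantized.tflite", "wb") as f:
    f.write(converter.convert())
```

Stage 2 ('model_2') could presumably be converted the same way, with the two interpreters then run back-to-back per frame at inference time.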

nyadla-sys commented 1 month ago

If you manage to integrate DTLN with the Whisper TFLite model, please share the working pipeline. Thanks.
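In case it helps anyone attempting this, below is a rough, hypothetical sketch of how two TFLite interpreters could be chained: the denoiser first, then Whisper. The file names, shapes, and the single-call treatment of the denoiser (the real DTLN processes short frames with overlap-add and carries LSTM states) are assumptions, and the Whisper output tokens would still need to be decoded with the Whisper tokenizer.

```python
# Hypothetical pipeline sketch: denoise with a DTLN TFLite model, then
# transcribe with the Whisper TFLite model. Not the actual pipeline from
# this repo; paths, shapes, and the single-call denoiser are placeholders.
import numpy as np
import tensorflow as tf

def run_tflite(path, inputs):
    """Run a TFLite model on a list of input arrays and return its outputs."""
    interp = tf.lite.Interpreter(model_path=path)
    interp.allocate_tensors()
    for detail, value in zip(interp.get_input_details(), inputs):
        # Cast to the model's expected dtype (quantization scale/zero-point
        # handling is omitted for brevity).
        interp.set_tensor(detail["index"], value.astype(detail["dtype"]))
    interp.invoke()
    return [interp.get_tensor(d["index"]) for d in interp.get_output_details()]

# 1) Denoise. Placeholder: 5 s of silence at 16 kHz passed in one call;
#    the real DTLN runs frame-by-frame with overlap-add and LSTM states.
audio = np.zeros((1, 5 * 16000), dtype=np.float32)
denoised = run_tflite("dtln_quantized.tflite", [audio])[0]

# 2) Transcribe. Placeholder mel features with the 80 x 3000 layout Whisper
#    expects; a real pipeline would compute the log-mel spectrogram from
#    `denoised` instead of using zeros.
mel = np.zeros((1, 80, 3000), dtype=np.float32)
tokens = run_tflite("whisper.tflite", [mel])[0]
print(tokens)  # raw token ids; decoding to text needs the Whisper tokenizer
```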