I trained the DeepSpeech2 model using the example scripts. With the trained checkpoints I generated a TFLite model using the provided conversion script. I can successfully load the model and allocate its tensors, but as soon as I call invoke() it crashes with Segmentation fault (core dumped).
Code for loading the model:
import tensorflow as tf
import soundfile as sf

# Read the audio as float32 to match the model's expected input dtype
wav, rate = sf.read("x.flac", dtype='float32')

model = tf.lite.Interpreter(model_path="test.tflite")
inputIndex = model.get_input_details()[0]['index']
outputIndex = model.get_output_details()[0]['index']

# Resize the input tensor to this clip's length, then allocate
model.resize_tensor_input(inputIndex, wav.shape)
model.allocate_tensors()

model.set_tensor(inputIndex, wav)
model.invoke()  # crashes here with "Segmentation fault (core dumped)"

output = model.get_tensor(outputIndex)
print(output)
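To rule out the call sequence itself, here is a minimal self-contained sketch of the same resize → allocate → set_tensor → invoke steps against a toy Keras model converted in-memory (the toy model, shapes, and names below are illustrative stand-ins, not the real test.tflite or the DeepSpeech2 graph):

```python
import numpy as np
import tensorflow as tf

# Toy stand-in for test.tflite: a tiny Keras model converted to TFLite.
# (Illustrative only -- not the DeepSpeech2 graph.)
toy = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(2),
])
tflite_bytes = tf.lite.TFLiteConverter.from_keras_model(toy).convert()

interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
input_detail = interpreter.get_input_details()[0]
output_detail = interpreter.get_output_details()[0]

# Same sequence as in the question: resize (note the explicit batch
# dimension), allocate, set the tensor, invoke, read the output.
batch = np.random.randn(8, 4).astype(np.float32)
interpreter.resize_tensor_input(input_detail['index'], batch.shape)
interpreter.allocate_tensors()
interpreter.set_tensor(input_detail['index'], batch)
interpreter.invoke()
output = interpreter.get_tensor(output_detail['index'])
print(output.shape)  # → (8, 2)
```

This sequence completes without crashing on the toy model, which suggests the fault is specific to the converted DeepSpeech2 graph or to how its input (e.g. a missing batch dimension on the 1-D wav array) is fed.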