NVIDIA / OpenSeq2Seq

Toolkit for efficient experimentation with Speech Recognition, Text2Speech and NLP
https://nvidia.github.io/OpenSeq2Seq
Apache License 2.0

How can we run inference with a .pb file? #547

Open pratapaprasanna opened 4 years ago

pratapaprasanna commented 4 years ago

Hi all,

I have a couple of questions.

1- I was able to freeze my model and now have a .pb file. How can I use this .pb file to run inference?
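For question 1, the usual TF 1.x-style pattern is to parse the frozen GraphDef, import it into a fresh graph, and run a session against it. The sketch below is self-contained: it builds and freezes a toy graph as a stand-in for the exported model, since the actual tensor names OpenSeq2Seq exports (assumed here to be `input:0` / `output:0`) depend on your config — inspect `graph.get_operations()` to find the real ones.

```python
import numpy as np
import tensorflow as tf

tf.compat.v1.disable_eager_execution()

# --- Build and freeze a toy graph (stand-in for your exported .pb model) ---
g = tf.Graph()
with g.as_default():
    x = tf.compat.v1.placeholder(tf.float32, shape=[None, 3], name="input")
    w = tf.constant([[1.0], [2.0], [3.0]])
    y = tf.matmul(x, w, name="output")
    with tf.compat.v1.Session(graph=g) as sess:
        # Fold any variables into constants; this GraphDef is what a
        # frozen .pb file contains.
        frozen = tf.compat.v1.graph_util.convert_variables_to_constants(
            sess, g.as_graph_def(), ["output"])

# Serialized bytes, exactly as they would be read back from a .pb file.
pb_bytes = frozen.SerializeToString()

# --- Load the frozen graph and run inference ---
graph_def = tf.compat.v1.GraphDef()
graph_def.ParseFromString(pb_bytes)  # with a real file: GFile(path, "rb").read()
infer_graph = tf.Graph()
with infer_graph.as_default():
    tf.import_graph_def(graph_def, name="")

with tf.compat.v1.Session(graph=infer_graph) as sess:
    inp = infer_graph.get_tensor_by_name("input:0")
    out = infer_graph.get_tensor_by_name("output:0")
    result = sess.run(out, feed_dict={inp: np.array([[1.0, 1.0, 1.0]],
                                                    dtype=np.float32)})
    print(result)  # [[6.]]
```

With a real model, replace the toy-graph section with reading the .pb file from disk and substitute the tensor names your export actually produced.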

2- Do we get an increase in inference speed if we freeze the graph rather than using the existing checkpoints?

3- Is there any way to run inference faster on CPU? The current speed is very slow.
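For question 3, one common knob (not specific to OpenSeq2Seq) is the TensorFlow session's CPU thread pools. This is a minimal sketch; the thread counts below are assumptions and should be tuned to your machine's physical core count.

```python
import tensorflow as tf

# Configure the session's CPU thread pools:
#   intra_op: threads used inside a single op (e.g. one large matmul)
#   inter_op: independent ops that may run concurrently
# The values 8 and 2 are illustrative, not recommendations.
config = tf.compat.v1.ConfigProto(
    intra_op_parallelism_threads=8,
    inter_op_parallelism_threads=2,
)
sess = tf.compat.v1.Session(config=config)
```

Pass this `config` when creating the session you run inference in. Batching multiple utterances per `sess.run` call also tends to help on CPU.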

Thanks in advance

VictorBeraldo commented 4 years ago

@pratapaprasanna How did you run inference on CPU? Are you using the CPU for a speech-to-text task? I could really use some help... Thanks!