Expect CPU training to be roughly 25-40x slower than a single high-end GPU. Otherwise I imagine it's just a case of installing tensorflow instead of tensorflow-gpu with pip. Keep in mind that speech recognition models are usually very big: if it takes, say, ~20 days to train this model on a GPU, expect over a year of training time at minimum on a CPU. For this reason it's generally not something anyone considers seriously.
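To make the scaling concrete, here's a quick back-of-envelope calculation (the 20-day GPU figure and the 25-40x slowdown are the rough illustrative numbers from above, not measurements):

```python
# Hypothetical numbers: ~20 days of GPU training, 25-40x CPU slowdown.
gpu_days = 20
slowdown_low, slowdown_high = 25, 40

cpu_days = (gpu_days * slowdown_low, gpu_days * slowdown_high)
print(cpu_days)  # (500, 800) days, i.e. roughly 1.4 to 2.2 years
```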
However, running your input data through an already-trained network on a CPU is generally feasible, though it may take a few seconds per sample rather than running in real time.