Closed prajwaljpj closed 3 years ago
Fixed! I was training the model with the cloned package and had made some changes to the configs, but was converting the model to JIT using the pip package 'deep-phonemizer'. Although inference with the custom model works on the pip package, the TorchScript conversion does not.
Now I'm facing another issue with the JIT model: during inference, the first call takes 0.64 seconds, while consecutive calls take 0.09 to 0.1 seconds.
Any suggestions?
IMO that's a known issue: https://discuss.pytorch.org/t/speed-of-first-pass-is-very-slow/64575/7
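For what it's worth, the usual workaround for that first-pass overhead is to run a few throwaway warm-up inferences right after loading the scripted model, so the JIT's profiling and optimization passes happen before real requests arrive. A minimal sketch (using a hypothetical toy module in place of the actual DeepPhonemizer model):

```python
import time
import torch
import torch.nn as nn

# Hypothetical stand-in for the real phonemizer model.
class Toy(nn.Module):
    def __init__(self):
        super().__init__()
        self.lin = nn.Linear(16, 16)

    def forward(self, x):
        return torch.relu(self.lin(x))

model = torch.jit.script(Toy().eval())
x = torch.randn(1, 16)

# Warm-up passes: the TorchScript runtime profiles and optimizes the
# graph during the first calls, so burn a couple of inferences up front.
with torch.no_grad():
    for _ in range(3):
        model(x)

# Subsequent calls run at steady-state latency.
start = time.perf_counter()
with torch.no_grad():
    y = model(x)
elapsed = time.perf_counter() - start
print(f"steady-state latency: {elapsed * 1000:.2f} ms")
```

The exact number of warm-up iterations needed can vary by PyTorch version and model size.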
Interesting! Thank you for your feedback. Closing.
I was trying to convert my trained Hindi model to a JIT-compatible model using the method provided in the README, and I'm facing an error at the dropout layer.
In eval mode:
Any suggestions would be greatly appreciated!
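In case it helps narrow this down: the standard `nn.Dropout` module is TorchScript-compatible on its own, and in eval mode it reduces to the identity, so scripting a plain model containing it should succeed. A minimal sketch (this is not the DeepPhonemizer code, just a sanity check for the environment):

```python
import torch
import torch.nn as nn

# Minimal model with a dropout layer, hypothetical stand-in for the real one.
model = nn.Sequential(nn.Linear(8, 8), nn.Dropout(p=0.5))
model.eval()  # switch dropout to inference behaviour before scripting

scripted = torch.jit.script(model)

x = torch.randn(2, 8)
with torch.no_grad():
    eager_out = model(x)
    scripted_out = scripted(x)

# In eval mode dropout neither zeroes nor rescales, so both paths agree.
print(torch.allclose(eager_out, scripted_out))
```

If this works but the real model fails, the error is likely in a custom forward that calls functional dropout with a non-scriptable argument, rather than in `nn.Dropout` itself.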