adsk2050 opened 1 year ago
Hey, were you able to find a solution for speeding up the process?
I've been facing a similar issue: for a sentence of 47 words the latency is around 600 ms, but I need the model to run inference on texts of at least 2k words within 300-600 ms.
The following is my configuration for minimal inference latency:
translit = XlitEngine(beam_width=1, rescore=False, model_type="transformer", src_script_type="indic")
@yashmadhani97 @GokulNC Hey, could you please help me out with this?
I'm currently looking to transliterate documents containing millions of sentences, so inference latency is critical for me.
Hello,
Transliteration over a dataset is taking a lot of time. What is the time complexity of this library? Is there any way to get this done faster? I don't have a GPU, so is there any other way to speed it up?
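Without a GPU, the usual lever is parallelism across CPU cores: split the corpus into batches and fan them out with `multiprocessing`, so throughput scales with cores even if per-sentence latency stays the same. A hedged sketch (`translit_batch` is a hypothetical placeholder for the real per-batch work, not this library's API):

```python
from multiprocessing import Pool

def chunked(items, size):
    """Yield successive batches of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def translit_batch(batch):
    # Stand-in for the real work: loop over the engine's
    # sentence-level transliteration call for each sentence.
    return [s.upper() for s in batch]

def translit_corpus(sentences, workers=4, batch_size=1000):
    batches = list(chunked(sentences, batch_size))
    with Pool(processes=workers) as pool:
        results = pool.map(translit_batch, batches)
    # Flatten the per-batch results back into one list.
    return [s for batch in results for s in batch]
```

Note that each worker process loads its own copy of the model, so memory use grows with `workers`; pick a worker count and batch size that fit your RAM.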