Hi, I want to know where I can find a faster Whisper inference script after seeing the faster whisper llm trt PR. I only want to transcribe using the Whisper model, not Whisper LLM in sherpa triton. Could you please direct me to the official script or documentation for the optimal and most accelerated version of the Whisper model?
The Whisper log in the "Run with GPU (int8)" doc is:
```
decoding method: greedy_search
Elapsed seconds: 19.190 s
Real time factor (RTF): 19.190 / 6.625 = 2.897
```
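For context, the RTF in that log is just the decoding time divided by the duration of the audio, so anything above 1.0 is slower than real time. A minimal sketch of the calculation, using the numbers copied from the log above:

```python
# Real time factor (RTF) = time spent decoding / duration of the audio.
# Values below 1.0 mean faster than real time; the log above shows 2.897,
# i.e. decoding took almost 3x the length of the audio on this setup.
elapsed_seconds = 19.190  # decoding time reported in the log
audio_seconds = 6.625     # duration of the test wave

rtf = elapsed_seconds / audio_seconds
print(f"RTF: {elapsed_seconds:.3f} / {audio_seconds:.3f} = {rtf:.3f}")
```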