linto-ai / whisper-timestamped

Multilingual Automatic Speech Recognition with word-level timestamps and confidence
GNU Affero General Public License v3.0

Suggestion: Print timestamped words on-the-fly with option verbose=True #18

Open vedran opened 1 year ago

vedran commented 1 year ago

Question: Does this algorithm fundamentally require the entire whisper transcription to be complete before processing, or could it be modified to output segments during processing, like whisper's -v verbose output? For example, if I had a 1-hour recording, could I modify whisper-timestamped to produce the word timestamps in "real time", segment by segment, rather than waiting until the whole 1-hour file is transcribed? I'm happy to try to do it myself, but I just wanted to make sure there's no fundamental reason it can't be done. Would also appreciate any advice for where you would recommend I make the changes. Thanks!

P.S. thanks for sharing this repo, amazing work!

Jeronymous commented 1 year ago

Thanks @vedran for this relevant suggestion and your willingness to contribute. Indeed, I haven't put much effort into the "verbose" option so far.

There are currently two ways of getting word timestamps:

  1. doing it on the fly while whisper is decoding (which is only possible with "greedy" decoding)
  2. letting whisper decode everything and then aligning segment by segment by running inference again (which is required when whisper does beam search, temperature fallback, etc.)

(Approach 2 corresponds to the `--accurate` and `--naive` options of the CLI.)
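In both cases, the result ends up with per-segment word lists, so printing word timestamps yourself is straightforward once transcription is done. A minimal sketch, assuming the JSON output structure of whisper-timestamped (segments with a `"words"` list whose entries carry `"text"`, `"start"`, `"end"` and `"confidence"`):

```python
def print_words(result):
    """Format word-level timestamps from a whisper-timestamped result dict.

    Assumes the output structure documented by whisper-timestamped:
    result["segments"] is a list of segments, each with a "words" list
    of dicts carrying "text", "start", "end" (seconds) and "confidence".
    """
    lines = []
    for segment in result.get("segments", []):
        for word in segment.get("words", []):
            lines.append(
                f"[{word['start']:7.2f} --> {word['end']:7.2f}] "
                f"{word['text']} (conf={word['confidence']:.2f})"
            )
    return lines

# Hypothetical result shaped like whisper-timestamped's JSON output
example = {
    "segments": [
        {"words": [
            {"text": "hello", "start": 0.12, "end": 0.48, "confidence": 0.97},
            {"text": "world", "start": 0.55, "end": 0.96, "confidence": 0.92},
        ]}
    ]
}
for line in print_words(example):
    print(line)
```

This only helps after the fact, of course; it doesn't solve the "on the fly" part.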

With (1), it would be easy to print the verbose output "on the fly" (what you call "real time"). I can do it. I just need to decide whether or not to disable whisper's own verbose output, i.e. whether to print each timestamped segment before its timestamped words are printed. Do you have an opinion on this?

With (2), printing "on the fly" is not possible. The real solution would be to fork whisper and add the word alignment inside it. I think I will end up going with that solution, for several reasons.

Jeronymous commented 1 year ago

I looked into this more carefully. It's actually quite tricky to implement "on-the-fly" verbose output for (1), because when hooking into the whisper transcription, we don't know the position of the current 30-second chunk of audio being decoded (or it's hard to recover... maybe possible).
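To be clear about what the chunk position buys us: converting a window-relative timestamp to an absolute one is trivial once you have it; the hard part is recovering whisper's internal `seek` frame counter from outside its transcribe loop. A sketch of the conversion, assuming the constants from openai/whisper's `audio.py` (one mel frame = `HOP_LENGTH` samples):

```python
# Constants as defined in openai/whisper's audio.py
SAMPLE_RATE = 16000   # audio sample rate in Hz
HOP_LENGTH = 160      # samples per mel-spectrogram frame (10 ms)

def absolute_time(seek_frames: int, relative_seconds: float) -> float:
    """Convert a timestamp relative to the current 30 s window into an
    absolute position in the full recording.

    `seek_frames` is whisper's internal mel-frame offset of the current
    chunk; recovering it from a transcription hook is the tricky part
    mentioned above.
    """
    chunk_offset = seek_frames * HOP_LENGTH / SAMPLE_RATE
    return chunk_offset + relative_seconds

# A word at 4.25 s inside the chunk that starts at frame 3000 (= 30 s)
print(absolute_time(3000, 4.25))  # prints 34.25
```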

I improved things a bit around the verbose option, choosing not to print the segments returned by whisper (to avoid confusion between segments and words). I'm not sure it improves things for your use case...

So again, forking whisper will solve many things including this issue...

vedran commented 1 year ago

Thanks for that! I will explore forking whisper as well. For my use case, I want to see how far one can push the performance of real-time processing for a transcription service, e.g. uploading a video file in chunks, passing those chunks into whisper, and returning results to the client very quickly. For a proof of concept, I just fed the whole file into your whisper version here, parsed whisper's verbose output, and skipped over your library's verbose output.
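Roughly, the chunked pipeline I have in mind looks like this. Note that `transcribe_chunk` / `fake_transcribe` are hypothetical stand-ins (not part of whisper or this library; in practice one would wrap a per-chunk whisper call), and this naive sketch ignores words that span chunk boundaries:

```python
SAMPLE_RATE = 16000  # samples per second, as whisper expects

def stream_chunks(samples, chunk_seconds, transcribe_chunk):
    """Feed fixed-size audio chunks to a transcription callback as each
    becomes available, yielding words with timestamps shifted to be
    absolute in the full recording.

    Caveat: a word spanning a chunk boundary will be cut; a real
    implementation would need overlap or boundary handling.
    """
    chunk_size = int(chunk_seconds * SAMPLE_RATE)
    for i in range(0, len(samples), chunk_size):
        offset = i / SAMPLE_RATE  # start time of this chunk in seconds
        for word in transcribe_chunk(samples[i:i + chunk_size]):
            yield {"text": word["text"],
                   "start": offset + word["start"],
                   "end": offset + word["end"]}

# Dummy callback standing in for a per-chunk whisper call
def fake_transcribe(chunk):
    return [{"text": "word", "start": 0.5, "end": 1.0}]

one_minute = [0.0] * (SAMPLE_RATE * 60)
for w in stream_chunks(one_minute, 30, fake_transcribe):
    print(w["text"], w["start"], w["end"])
```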