BBC-Esq opened this issue 2 months ago
@BBC-Esq, are you talking about batch decoding? If whisper.cpp supports it, then I believe it will be supported here as well.
I think he means batch prepping?
Edit:
Nope, batch transcribing!
cough
```python
import os
import multiprocessing
from glob import glob

from pywhispercpp.model import Model

# Everything in the working directory except Python files
files = [f for f in glob("*") if os.path.isfile(f) and not f.endswith(".py")]

def transcribeFile(file, queue):
    # Independent model instance per process
    model = Model("base")
    segments = model.transcribe(file)
    queue.put([file, segments])
    return True

if __name__ == "__main__":
    queue = multiprocessing.Queue()
    processes = []
    for file in files:
        process = multiprocessing.Process(target=transcribeFile, args=(file, queue))
        processes.append(process)
    for process in processes:
        process.start()
    # Note: this loop blocks forever without a None sentinel -- see the fix below
    for transcriptions in iter(queue.get, None):
        print(transcriptions)
```
@BBC-Esq @abdeladim-s here's some simple code to batch process with multiple independent whisper instances to ensure context is not maintained under any circumstances between whisper instances.
~It doesn't do them in parallel as of now due to calling join, but that's fine. It's still batching it. I'll clean this up later.~
Cleaned it up. Fixed it running in parallel. And oh boy is it a CPU killer.
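The gist of the parallel fix, for anyone following along: where you call `join` decides whether the children overlap. A minimal sketch with a hypothetical `work` function (not from pywhispercpp):

```python
import multiprocessing

def work(n):
    print(n * n)  # hypothetical stand-in for transcribeFile

if __name__ == "__main__":
    # Serialized: joining right after start blocks before the next
    # child launches, so only one ever runs at a time.
    for n in range(4):
        p = multiprocessing.Process(target=work, args=(n,))
        p.start()
        p.join()

    # Parallel: start them all first, then join them all.
    procs = [multiprocessing.Process(target=work, args=(n,)) for n in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```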
So a quick heads up, it is painfully slow to do this in parallel. Like dog slow and the more files you throw at it, the slower it gets. But this is just POC code. There's room for improvement such as batching based on file length, file size, core counts, etc.
I'll see if I can beat a few optimizations out of this.
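For example, one cheap improvement (just a sketch of the idea, nothing pywhispercpp-specific) is to sort by file size so each batch holds similarly sized jobs, then launch only one core-sized batch at a time:

```python
import os
from glob import glob

files = sorted(
    (f for f in glob("*") if os.path.isfile(f) and not f.endswith(".py")),
    key=os.path.getsize,  # similar-sized files land in the same batch
)
cores = os.cpu_count() or 1
batches = [files[i:i + cores] for i in range(0, len(files), cores)]
for batch in batches:
    # spawn one process per file in `batch`, join them, then move on
    print(batch)
```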
Edit:
I completely forgot that with multiprocessing, queues must be emptied before the main process can finish as they hold open pipes.
```python
while not queue.empty():
    print(queue.get())
```
Quick fix over iter. The real problem is that `iter(queue.get, None)` keeps blocking on `get()` waiting for a `None` sentinel that nothing ever puts on the queue, so the loop never ends.
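For what it's worth, `iter(queue.get, None)` can still work if the main process pushes the `None` sentinel itself after joining. A sketch reusing the `processes` list and `queue` from the code above (fine while results are small; a child blocked writing a huge result into the queue's pipe won't exit, so drain first in that case):

```python
for process in processes:
    process.join()
queue.put(None)  # sentinel so iter() knows when to stop
for item in iter(queue.get, None):
    print(item)
```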
```python
import os
import multiprocessing
from glob import glob

from pywhispercpp.model import Model

# Everything in the working directory except Python files
files = [f for f in glob("*") if os.path.isfile(f) and not f.endswith(".py")]

def transcribeFile(file, queue):
    # Fresh model per process, so no context leaks between files
    model = Model("base")
    segments = model.transcribe(file)
    queue.put([file, segments])

if __name__ == "__main__":
    queue = multiprocessing.Queue()
    processes = []
    for file in files:
        process = multiprocessing.Process(target=transcribeFile, args=(file, queue))
        processes.append(process)
        process.start()
    for process in processes:
        process.join()
    # Drain the queue so its feeder pipes close and the main process can exit.
    # (For very large results, drain before joining to avoid a feeder deadlock.)
    while not queue.empty():
        print(queue.get())
```
This is where I am at. It can queue up lots of files to do in parallel. But there's no limits on how many, that needs improvement. I also need to make it accept adding new things to its queues.
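One way to get both missing pieces, assuming the segment results pickle cleanly (the `Queue` version above already relies on that): a `multiprocessing.Pool` sketch that caps workers at the core count, and whose `apply_async` could accept new files while it runs:

```python
import os
import multiprocessing
from glob import glob

from pywhispercpp.model import Model

def transcribeFile(file):
    # Fresh model per call, so no context is shared between files
    model = Model("base")
    return file, model.transcribe(file)

if __name__ == "__main__":
    files = [f for f in glob("*") if os.path.isfile(f) and not f.endswith(".py")]
    # The pool never runs more than cpu_count() workers at once;
    # imap_unordered yields results as each file finishes.
    with multiprocessing.Pool(processes=os.cpu_count()) as pool:
        for file, segments in pool.imap_unordered(transcribeFile, files):
            print(file, segments)
```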
If you're after serial batch transcriptions:
```python
import os
from glob import glob

from pywhispercpp.model import Model

if __name__ == "__main__":
    files = [
        file for file in glob("*")
        if os.path.isfile(file) and not file.endswith((".py", ".cfg", ".txt"))
    ]
    for file in files:
        # Fresh model per file keeps runs fully independent
        # (reuse one instance instead if load time matters)
        model = Model("base")
        segments = model.transcribe(file)
        with open(f"{file}-transcription.txt", "w") as f:
            for segment in segments:
                f.write(segment.text)
```
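If you also want timestamps in the text files, pywhispercpp's segments expose `t0`/`t1` alongside `text` if I remember right (units follow whisper.cpp's timestamp ticks), so the inner loop could become:

```python
with open(f"{file}-transcription.txt", "w") as f:
    for segment in segments:
        f.write(f"[{segment.t0} -> {segment.t1}]{segment.text}\n")
```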
@UsernamesLame, That's multi-processing. The scripts look like they work great :+1:

Unfortunately, as @abdeladim-s knows, I can't get pywhispercpp to even install correctly...
> @UsernamesLame, That's multi-processing. The scripts look like they work great :+1:

It's "batch" processing 😅

> Unfortunately, as @abdeladim-s knows, I can't get pywhispercpp to even install correctly...
Dump logs. Let's get this working.
Logs dumped and now I'm flushing the toilet. 😉 jk. Won't have time today as I'm working on the benchmarking repo for a bit... need to get an appropriate dataset and then learn/use the jiwer library? lol
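jiwer itself is the easy part once you have reference/hypothesis pairs; e.g. with made-up strings:

```python
import jiwer

reference = "the quick brown fox jumps over the lazy dog"   # ground truth
hypothesis = "the quick brown fox jumped over a lazy dog"   # model output
print(jiwer.wer(reference, hypothesis))  # 2 substitutions / 9 words = 0.222
```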
See here... start thinking about true batching... 😉
https://github.com/shashikg/WhisperS2T/issues/33