-
```py
import stable_whisper
import webvtt

model = stable_whisper.load_model('small')
result = model.transcribe(file)  # `file` is the path to the audio
result.to_srt_vtt('audio.vtt', False, True)

for caption in webvtt.read('audio.vtt'):
    print(caption.start + " " + caption…
```
-
### Is your feature request related to a problem?
To listen to an audio file in a public space I either need headphones or have to wait until I can play it out loud.
### Describe the solution you'd like
I wou…
-
This is something I have been researching for a while. At present I have code for macOS, but it should be possible on iOS as well. Android and/or Windows need further research.
-
I am using whisperx for inference (which is built on top of faster-whisper).
I have fine-tuned the large-v3 model on 1k hours of domain-specific data. When I run standard inference, the results are OK. Finetu…
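In case it helps with comparison, here is a minimal sketch of running a fine-tuned checkpoint directly through faster-whisper rather than whisperx. The checkpoint directory name is a placeholder, and this assumes the fine-tuned weights have already been converted to CTranslate2 format (e.g. with `ct2-transformers-converter`):

```python
def format_segment(start, end, text):
    """Render one segment as '[SS.ss -> SS.ss] text' for quick inspection."""
    return f"[{start:.2f} -> {end:.2f}] {text.strip()}"

def transcribe_finetuned(model_dir, audio_path):
    # Imported lazily; requires the faster-whisper package.
    from faster_whisper import WhisperModel

    # model_dir is a placeholder for a CTranslate2-converted fine-tuned checkpoint.
    model = WhisperModel(model_dir, device="cuda", compute_type="float16")
    segments, info = model.transcribe(audio_path, beam_size=5)
    return [format_segment(s.start, s.end, s.text) for s in segments]

# Example (placeholder paths):
# lines = transcribe_finetuned("finetuned-large-v3-ct2", "sample.wav")
```

Comparing this output against the whisperx pipeline on the same audio can help isolate whether the regression comes from the fine-tuned weights themselves or from whisperx's alignment/VAD stages.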
-
The goal is to transcribe an .mp3 file into a text file.
We want to evaluate which transcription solution is best so that the process can be added to our automation chain.
The idea is to integ…
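A minimal sketch of the mp3-to-text step, assuming the openai-whisper package is installed; the model size and function names here are illustrative, not a recommendation for any particular automation chain:

```python
def segments_to_text(segments):
    """Join Whisper segment dicts into one plain-text transcript, one line each."""
    return "\n".join(seg["text"].strip() for seg in segments)

def transcribe_to_file(audio_path, out_path, model_name="small"):
    # Imported lazily so the pure helper above works without whisper installed.
    import whisper

    model = whisper.load_model(model_name)
    result = model.transcribe(audio_path)  # result["segments"] is a list of dicts
    with open(out_path, "w", encoding="utf-8") as f:
        f.write(segments_to_text(result["segments"]))

# Example (placeholder file names):
# transcribe_to_file("input.mp3", "transcript.txt")
```

Wrapping the call in a function like this makes it easy to drop into a pipeline step and swap the backend (e.g. for faster-whisper) later without changing the surrounding automation.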
-
Hello!
I've been using the WhisperX large-v2 model in English on a project to transcribe vocals taken from songs, which I derive using source separation with spleeter. If it matters, I've been runn…
-
Since #856 got merged, I was wondering whether we could support sending multiple files to faster-whisper in one go, something like:
```py
from faster_whisper import WhisperModel, BatchedInferencePipeline
…
-
I think it's due to my GPU.
Below are my specs
To create a public link, set `share=True` in `launch()`.
No language specified, language will be first be detected for each audio file (increase…
-
**Is your feature request related to a problem? Please describe.**
I find it challenging when I need to manually transcribe audio content. Whether it’s interviews, meetings, or recorded conversation…
-
I've been using faster-whisper-server via Docker for weeks with no issues with my transcription script on Ubuntu, but the server has suddenly broken.
I get this error whenever I try to transcr…