-
### Describe the feature/enhancement
Transcription/Subtitle support
### Summary
Add initial support for transcriptions. Apple now supports transcriptions in podcasts.
Some audiobook f…
mfcar updated 4 months ago
-
import stable_whisper
import webvtt

model = stable_whisper.load_model('small')
result = model.transcribe(file)  # `file` is the path to the input audio
result.to_srt_vtt('audio.vtt', False, True)  # export the result as WebVTT captions
for caption in webvtt.read('audio.vtt'):
    print(caption.start + " " + caption.text)
-
### Describe the bug
Accessing an audio dataset value throws a `Format not recognised` error
### Steps to reproduce the bug
**code:**
```py
from datasets import load_dataset
dataset = load_da…
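# Below is a minimal, hedged sketch (not the truncated code above) of the access
# pattern that typically triggers the decoding step where "Format not recognised"
# is raised; the dataset name and the "audio" column are illustrative placeholders.
dataset = load_dataset("placeholder/audio_dataset", split="train")
sample = dataset[0]["audio"]  # the underlying audio file is decoded on access
print(sample["array"].shape, sample["sampling_rate"])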
-
If the user closes the Streamlit app window after the YouTube video is downloaded but before the Whisper transcription is complete, the downloaded audio file is not deleted. This prod…
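One possible mitigation, sketched below under the assumption that the download and transcription can be wrapped in a single function (the `download_audio` and `transcribe` helpers are hypothetical), is to write the audio into a temporary directory so it does not persist in the download folder once the step finishes:

```py
import tempfile
from pathlib import Path

def transcribe_youtube(url: str, download_audio, transcribe) -> str:
    # The temporary directory, and the downloaded audio inside it, is removed
    # automatically when the with-block exits.
    with tempfile.TemporaryDirectory() as tmpdir:
        audio_path = Path(tmpdir) / "audio.mp3"
        download_audio(url, audio_path)  # hypothetical helper, e.g. wrapping yt-dlp
        return transcribe(audio_path)    # hypothetical helper, e.g. wrapping Whisper
```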
-
## **PLIP (Plone Improvement Proposal)**
## **Responsible Persons**
### **Proposer: Jefferson Bledsoe**
### **Seconder:**
## **Abstract**
This PLIP aims to deliver a new media uploading…
-
Podcasts are usually conversations, so speaker recognition is needed to identify the *author* and extract question-and-answer pairs from the transcript, similar to video ingestion.
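A minimal sketch of the pairing step, assuming the transcript has already been diarized into `(speaker, text)` segments by some tool (the segments and the question heuristic below are illustrative only):

```py
from typing import List, Tuple

def extract_qa_pairs(segments: List[Tuple[str, str]]) -> List[Tuple[str, str]]:
    pairs = []
    for (speaker, text), (next_speaker, next_text) in zip(segments, segments[1:]):
        # Treat a segment ending in "?" as a question and the next segment
        # spoken by a different person as its answer.
        if text.strip().endswith("?") and next_speaker != speaker:
            pairs.append((text.strip(), next_text.strip()))
    return pairs

print(extract_qa_pairs([
    ("host", "What got you into audio engineering?"),
    ("guest", "I started out mixing live shows in college."),
]))
```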
-
Hi, thanks for the great library!
I am hoping to use a large number of CPUs for audio transcription. However, I've been benchmarking the performance of `faster-whisper` and am getting nothing close…
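For reference, a hedged sketch of the CPU-oriented settings that `faster-whisper` exposes; the model size, thread and worker counts, and the audio path below are illustrative rather than the benchmark configuration from this report:

```py
from faster_whisper import WhisperModel

model = WhisperModel(
    "small",
    device="cpu",
    compute_type="int8",  # quantized inference for CPU
    cpu_threads=16,       # threads used within a single transcription
    num_workers=4,        # workers for concurrent transcribe() calls
)
segments, info = model.transcribe("audio.wav", beam_size=5)
for segment in segments:
    print(f"[{segment.start:.2f} -> {segment.end:.2f}] {segment.text}")
```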
-
Hello!
I've been using the WhisperX large-v2 model in English on a project to transcribe vocals taken from songs, which I derive using source separation with spleeter. If it matters, I've been runn…
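For context, a minimal sketch of that pipeline, run on a vocal stem that spleeter has already written to disk; the stem path, device, and batch size are illustrative placeholders:

```py
import whisperx

device = "cpu"  # or "cuda" if a GPU is available
model = whisperx.load_model("large-v2", device, compute_type="int8")
audio = whisperx.load_audio("separated/song/vocals.wav")  # spleeter output stem
result = model.transcribe(audio, batch_size=8)
for segment in result["segments"]:
    print(segment["start"], segment["end"], segment["text"])
```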
-
I successfully made your pipeline example run on my Mac. I did not expect to meet an assistant, but I now understand a bit more about the intention of this project.
I would like to build a pipeline …
-
I'm getting poor transcription results using whisperx; specifically, I sometimes get no transcription at all out of some short videos, whereas OpenAI's official whisper model transcribes them corr…
reasv updated 2 months ago
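One quick sanity check along those lines, sketched with a placeholder file path, is to run the reference openai-whisper model on the same short clip and compare its output with the WhisperX result:

```py
import whisper

model = whisper.load_model("large-v2")
result = model.transcribe("short_clip.mp4")  # placeholder path to one of the short videos
print(result["text"])
```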