Vaibhavs10/insanely-fast-whisper
Apache License 2.0 · 7.76k stars · 547 forks
Issues
#254 · Can I serve a SpeechBrain-trained Whisper model with faster-whisper? · cod3r0k · opened 1 week ago · 0 comments
#253 · "You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')`." (see the Flash Attention sketch after this list) · solarslurpi · opened 1 week ago · 1 comment
#252 · Modal local path support · trigsoft · opened 2 weeks ago · 0 comments
#251 · Whisper.cpp support: when? · vedku · opened 1 month ago · 0 comments
#250 · Subtitle file for audio with proper .srt formatting for Premiere Pro? · krigeta · closed 3 weeks ago · 0 comments
#249 · Torch not compiled with CUDA enabled · Ryan147 · opened 1 month ago · 0 comments
#248 · Major Misidentification Issues in Diarization · andonagio · opened 1 month ago · 6 comments
#247 · Prediction failed: attempt to get argmin of an empty sequence · wscourge · opened 2 months ago · 0 comments
#246 · Any way to transcribe an entire directory and output as SRT/VTT? (see the batch-transcription sketch after this list) · snwefly · opened 2 months ago · 0 comments
#245 · flash_attention_2 on macOS? · yukiarimo · opened 2 months ago · 0 comments
#244 · Not working on macOS · mrveritas · opened 2 months ago · 3 comments
#243 · How to keep the model warm for faster inference? · SuperMaximus1984 · opened 2 months ago · 1 comment
#242 · Can Flash Attention 2 be installed on a Mac M2 Pro? · dazzng · opened 2 months ago · 1 comment
#241 · Medium model produces nonsense · tjongsma · opened 3 months ago · 0 comments
#240 · Performing speaker diarization without the CLI? · sidharthrajaram · closed 3 months ago · 2 comments
#239 · convert_output.py: "'charmap' codec can't decode byte 0x90" · thebigboss9018 · opened 3 months ago · 0 comments
#238 · Segmentation and diarization · danielbowne · opened 3 months ago · 3 comments
#237 · NotImplementedError · Eli-117 · closed 4 months ago · 0 comments
#236 · How to solve the "Torch installed without CUDA" error (see the CUDA check sketch after this list) · gjnave · opened 4 months ago · 4 comments
#235 · Insanely Fast Whisper with TensorRT / ONNX · asusdisciple · opened 4 months ago · 1 comment
#234 · aten::isin.Tensor_Tensor_out is not currently implemented for the MPS device · Amandrs · opened 5 months ago · 1 comment
#233 · Incompatible with NumPy 2.0... · chadacious · opened 5 months ago · 3 comments
#232 · Updated convert_output.py · DavidScann · opened 5 months ago · 0 comments
#231 · --timestamp word: WhisperFlashAttention2 attention does not support output_attentions · mayunchao1994 · opened 5 months ago · 3 comments
#230 · Speaker diarization · wanderGuy · opened 5 months ago · 1 comment
#229 · If only convert_output.py did paragraphs (for readability) · cleesmith · opened 5 months ago · 1 comment
#228 · Question/Feature: convert output text format using the CLI · WSHAPER · opened 6 months ago · 0 comments
#227 · No VAD? · twicer-is-coder · opened 6 months ago · 0 comments
#226 · "No module named wheel" when adding flash-attn on Windows · jenny923432 · opened 6 months ago · 1 comment
#225 · `The operator 'aten::isin.Tensor_Tensor_out' is not currently implemented for the MPS device.` while running on macOS (see the MPS fallback sketch after this list) · paulz · opened 6 months ago · 9 comments
#224 · Diarization is very slow · prkumar112451 · opened 6 months ago · 2 comments
#223 · 💬 Add BitsAndBytesConfig optimization method · kadirnar · closed 4 months ago · 2 comments
#222 · Nearly double the performance gap with another Docker image on the same system · lordofriver · opened 6 months ago · 1 comment
#221 · [Question] Usage on consumer hardware? · Yasand123 · opened 6 months ago · 1 comment
#220 · Torch not compiled with CUDA enabled · welldawell · closed 4 months ago · 4 comments
#219 · NotImplementedError: The operator 'aten::isin.Tensor_Tensor_out' · cyrilzakka · opened 6 months ago · 3 comments
#218 · [Feature request] Turn off the progress bar so it can run in batch · trobinson03195 · opened 6 months ago · 0 comments
#217 · Where does insanely-fast-whisper store the downloaded model file? · yinwei168 · opened 6 months ago · 3 comments
#216 · Python inference code no different from "normal" Whisper? · wincing2 · opened 6 months ago · 1 comment
#215 · [Error] metadata-generation-failed while trying to install flash-attention on Google Colab T4 · MahdeenSky · opened 7 months ago · 0 comments
#214 · Local file · kamalkech · opened 7 months ago · 0 comments
#213 · pipx install insanely-fast-whisper installs an outdated version · LaansDole · opened 7 months ago · 2 comments
#212 · Could we talk in a Discord private message about subtitles? · SpeederSpeederSpeder · opened 7 months ago · 0 comments
#211 · CUDA index out of bounds error on GPU · weichunnn · opened 7 months ago · 0 comments
#210 · Getting `Use model.to('cuda')` when trying to use Flash Attention · eburgwedel · opened 7 months ago · 4 comments
#209 · Question: client SDK · Tyrese-FullStackGenius · opened 7 months ago · 0 comments
#208 · Timestamps are too tight when repetition_penalty is present · Brodski · opened 7 months ago · 2 comments
#207 · Can I add an initial_prompt like Whisper? · lvjin521 · opened 7 months ago · 1 comment
#206 · MPS flash attention support · dreampuf · opened 7 months ago · 0 comments
#205 · Speaker Diarization · Yonahfireman · opened 7 months ago · 0 comments
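
Issue #253 quotes the transformers warning about using Flash Attention 2.0 with a model still on the CPU. Below is a minimal sketch of the fix the warning itself suggests, assuming the standard transformers pipeline this project builds on; the checkpoint name and audio file are placeholders.

```python
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline

# Initialize on CPU with Flash Attention 2 enabled; fp16 is required for FA2.
model = AutoModelForSpeechSeq2Seq.from_pretrained(
    "openai/whisper-large-v3",
    torch_dtype=torch.float16,
    attn_implementation="flash_attention_2",
)
model.to("cuda")  # move to GPU *after* initialization, as the warning asks

processor = AutoProcessor.from_pretrained("openai/whisper-large-v3")
asr = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
)
print(asr("audio.mp3")["text"])  # "audio.mp3" is a placeholder input
```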
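
Issue #246 asks about transcribing a whole directory. The CLI takes one file at a time, so a thin loop is the simplest approach. This sketch assumes the `--file-name` and `--transcript-path` flags from the project README and hypothetical `audio/` and `transcripts/` directories; converting the resulting JSON to SRT/VTT is what convert_output.py (issues #239, #232, #229) is for.

```python
import subprocess
from pathlib import Path

audio_dir = Path("audio")      # hypothetical input directory
out_dir = Path("transcripts")  # hypothetical output directory
out_dir.mkdir(exist_ok=True)

for audio in sorted(audio_dir.glob("*.mp3")):
    # One CLI call per file; flags per the README (verify against your version).
    subprocess.run(
        [
            "insanely-fast-whisper",
            "--file-name", str(audio),
            "--transcript-path", str(out_dir / f"{audio.stem}.json"),
        ],
        check=True,
    )
```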
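
Issues #249, #236, and #220 all hit "Torch not compiled with CUDA enabled". A quick diagnostic, assuming nothing beyond PyTorch itself: if the check below prints False/None, the installed wheel is CPU-only and needs to be replaced with a CUDA build.

```python
import torch

# False on a CPU-only wheel; if so, reinstall a CUDA build, e.g.
#   pip install torch --index-url https://download.pytorch.org/whl/cu121
# (pick the cuXXX tag that matches your driver).
print(torch.cuda.is_available())
print(torch.version.cuda)  # None on a CPU-only build
```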
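
Issues #234, #225, and #219 all report the missing `aten::isin.Tensor_Tensor_out` operator on Apple's MPS backend. These threads don't confirm a fix here, but PyTorch documents a CPU-fallback environment variable for unimplemented MPS operators; the sketch below shows that workaround, which must be set before torch is imported.

```python
import os

# Must be set before torch initializes; unimplemented MPS ops then fall
# back to the CPU (slower, but avoids the NotImplementedError).
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

import torch  # noqa: E402  (imported after setting the env var on purpose)

print(torch.backends.mps.is_available())
```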