Issues · Vaibhavs10 / insanely-fast-whisper
Apache License 2.0 · 7.79k stars · 547 forks
Sorted by: newest first
#154 · Add base64 inline audio · kshmir · opened 11 months ago · 2 comments
#153 · Trying to run insanely fast whisper on CPU - · Maldoror1900 · closed 10 months ago · 3 comments
#152 · Change pyannote settings using Insanely-Fast-Whisper? · Maldoror1900 · opened 11 months ago · 5 comments
#151 · ValueError: Transformers now supports natively BetterTransformer optimizations (torch.nn.functional.scaled_dot_product_attention) for the model type whisper. Please upgrade to transformers>=4.36 and torch>=2.1.1 to use it · MiningIrving · closed 10 months ago · 8 comments
#150 · Add CPU device-id support · Tradunsky · opened 11 months ago · 2 comments
#149 · Does this project support other models of whisper base tiny · MiningIrving · closed 10 months ago · 3 comments
#148 · I wonder how many work did this project do? · dyyzhmm · closed 10 months ago · 1 comment
#147 · timestamps are not as accurate as other Whisper implementations · ronyfadel · opened 11 months ago · 2 comments
#146 · Error in Notebook: infer_faster_whisper_large_v2.ipynb · riffov · closed 10 months ago · 1 comment
#145 · Keep getting an error that torch is not compiled with CUDA, why? · hawkeyecz · closed 11 months ago · 1 comment
#144 · added workaround to readme in case pipx cant parse the python version · rklasen · closed 11 months ago · 0 comments
#143 · Can't install versions > 0.0.8 due to wrong python version even when Python 3.11.6 is installed · rklasen · closed 11 months ago · 2 comments
#142 · Ascii codec cant decode byte · asusdisciple · closed 11 months ago · 1 comment
#141 · You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda') · asusdisciple · closed 11 months ago · 1 comment
#140 · Cuda out of Memory Error · omarsiddiqi224 · closed 11 months ago · 4 comments
#139 · Better Diarization pipeline · omarsiddiqi224 · opened 11 months ago · 6 comments
#138 · Timestamp is not precise enough. · genicsoft · closed 11 months ago · 1 comment
#137 · Updated to newer usage of flash attn 2, getting rid of warning message · oliverwehrens · closed 11 months ago · 0 comments
#136 · Update non-cli code snippet · sanchit-gandhi · closed 11 months ago · 0 comments
#135 · pipx install failed on BetterTransformer · Utopiah · closed 11 months ago · 6 comments
#134 · insanely_fast_whisper_colab.ipynb doesn't work · danielKlasss · closed 11 months ago · 3 comments
#133 · The model type whisper is not supported to be used with BetterTransformer · afiaka87 · closed 11 months ago · 6 comments
#132 · Fixed edge-case runtime error in diarization pipeline · benjaminjackson · closed 11 months ago · 2 comments
#131 · Detect language · rusty-ai · closed 11 months ago · 1 comment
#130 · Adjust sentence length · gerardosabetta · opened 11 months ago · 4 comments
#129 · SRT output · ghost · closed 11 months ago · 7 comments
#128 · mem_param. · Vaibhavs10 · closed 11 months ago · 0 comments
#127 · Fix english only models. · Vaibhavs10 · closed 11 months ago · 0 comments
#126 · Running from source on M3 gives a torch wrong binary installed · byjlw · closed 11 months ago · 3 comments
#125 · CPU based insanely fast whisper · Tortoise17 · closed 10 months ago · 1 comment
#124 · Torch not compiled with CUDA enabled · marsonal2023 · closed 11 months ago · 5 comments
#123 · Input language field acting weird in Replicate demo · mikkelsvartveit · opened 11 months ago · 3 comments
#122 · add minor change to readme.md file · allenma816 · closed 11 months ago · 0 comments
#121 · [Question]How to translate to another specified language · NathanDai · closed 11 months ago · 1 comment
#120 · [Question]Process a 2second wav to text cost around 1 minute · mikeyang01 · closed 11 months ago · 5 comments
#119 · Use Custom Model · alexivaner · opened 11 months ago · 1 comment
#118 · Mac, install flash-attn, CUDA_HOME environment variable is not set · dd0ke · closed 11 months ago · 3 comments
#117 · Unify json result and fix TypeError for None end-timestamp · SKocur · closed 11 months ago · 0 comments
#116 · Attempting to use `--timestamp word` crashes after a few seconds · alinsavix · closed 11 months ago · 1 comment
#115 · Hallucination / Text Addition · Tejaswgupta · closed 11 months ago · 3 comments
#114 · Troubles with optimum · VirtualRoyalty · closed 11 months ago · 5 comments
#113 · Add support for Nvidia optimum · SKocur · opened 11 months ago · 1 comment
#112 · Streaming · anunayajoshi · closed 11 months ago · 1 comment
#111 · Use Nvidia optimum to speed up inference · SKocur · closed 11 months ago · 1 comment
#110 · Add low_cpu_memory argument. · Vaibhavs10 · closed 11 months ago · 0 comments
#109 · Remind macOS users to use device-id flag · python481516 · closed 11 months ago · 0 comments
#108 · Hello - Any optimisation options for apple silicon arch? · DrSocket · closed 11 months ago · 0 comments
#107 · AssertionError: Torch not compiled with CUDA enabled · python481516 · closed 11 months ago · 5 comments
#106 · Got error on first run - Unexpected keyword argument 'use_flash_attention_2' · HMDRAMS-DEV · closed 11 months ago · 7 comments
#105 · CUDA error: invalid device ordinal · NathanDai · closed 11 months ago · 2 comments