Vaibhavs10 / insanely-fast-whisper
Apache License 2.0 · 6.92k stars · 505 forks
Issues (newest first)
#184 · Use as python lib and release format. · jzju · opened 4 months ago · 1 comment
#183 · Doesn't detect CUDA · dgoryeo · opened 4 months ago · 3 comments
#182 · Missing 0.0.13 on pipx? · jacksteamdev · closed 4 months ago · 1 comment
#181 · Normalize CLI Argument Format for Consistency · felixcarmona · closed 3 months ago · 2 comments
#180 · Add Support and Enforce Constraints on Speaker Count Parameters · felixcarmona · closed 4 months ago · 1 comment
#179 · Add VTT and TXT formats to output converter · mjgiarlo · closed 4 months ago · 1 comment
#178 · torch_dtype only for torch.float16? · yumianhuli1 · opened 4 months ago · 0 comments
#177 · FFMPEG is installed but it gives error showing its not. · Brob3r · closed 5 months ago · 1 comment
#176 · Add Script to Convert output.json to .srt Format · Orion-Zheng · closed 4 months ago · 0 comments
#175 · Random and inconsistent transcribe · Auth0rM0rgan · closed 4 months ago · 4 comments
#174 · CLI Feature requests: 1) Output .srt files, 2) Sequentially process all audio files in directory · ch826 · opened 5 months ago · 1 comment
#173 · Can I change the language? · souk0721 · closed 5 months ago · 1 comment
#172 · pipx install fails on Mac OS Sonoma · boxabirds · opened 5 months ago · 3 comments
#171 · Word Timestamps · Matheusadler · closed 5 months ago · 1 comment
#170 · Fixes bad version of insanely-fast-whisper being installed on python 3.11.X · zackees · closed 5 months ago · 1 comment
#169 · Updated type in FAQs for `--device-id mps` · paulmeller · closed 5 months ago · 0 comments
#168 · Is there an api to get the transcription progress? · yumianhuli1 · opened 5 months ago · 0 comments
#167 · How is the accuracy and memory usage as compared to Faster Whisper? · bakermanbrian · opened 5 months ago · 3 comments
#166 · Make it possible to install on Python 3.11.4 and later · sergray · closed 5 months ago · 3 comments
#165 · Issue on macOS Ventura - ProductVersion: 13.6.3 - BuildVersion: 22G436? · yvervoort · opened 5 months ago · 2 comments
#164 · In the benchmark table, what does the 'batching [24]' refer to? · SoundingSilence · opened 5 months ago · 1 comment
#163 · Get !!!!! in output.json file · kamil6x · opened 5 months ago · 0 comments
#162 · Cuda Out of Memory · hatimkh20 · opened 5 months ago · 6 comments
#161 · How to make model use 2 GPU? · hatimkh20 · opened 5 months ago · 0 comments
#160 · Remove the model from VRAM · yumianhuli1 · opened 5 months ago · 0 comments
#159 · Make timestamp more accurate · yumianhuli1 · closed 5 months ago · 0 comments
#158 · About "condition_on_previous_text" · yumianhuli1 · closed 5 months ago · 2 comments
#157 · Update README. · Vaibhavs10 · closed 5 months ago · 0 comments
#156 · Way to generate output scores from the pipeline? · sujitv19196 · closed 5 months ago · 1 comment
#155 · Missing option to provide a prompt argument · hlevring · closed 5 months ago · 1 comment
#154 · Add base64 inline audio · kshmir · opened 6 months ago · 2 comments
#153 · Trying to run insanely fast whisper on CPU - · Maldoror1900 · closed 5 months ago · 2 comments
#152 · Change pyannote settings using Insanely-Fast-Whisper? · Maldoror1900 · opened 6 months ago · 4 comments
#151 · ValueError: Transformers now supports natively BetterTransformer optimizations (torch.nn.functional.scaled_dot_product_attention) for the model type whisper. Please upgrade to transformers>=4.36 and torch>=2.1.1 to use it · MiningIrving · closed 5 months ago · 8 comments
#150 · Add CPU device-id support · Tradunsky · opened 6 months ago · 2 comments
#149 · Does this project support other models of whisper base tiny · MiningIrving · closed 5 months ago · 3 comments
#148 · I wonder how many work did this project do? · dyyzhmm · closed 5 months ago · 1 comment
#147 · timestamps are not as accurate as other Whisper implementations · ronyfadel · opened 6 months ago · 2 comments
#146 · Error in Notebook: infer_faster_whisper_large_v2.ipynb · riffov · closed 5 months ago · 1 comment
#145 · Keep getting an error that torch is not compiled with CUDA, why? · hawkeyecz · closed 6 months ago · 1 comment
#144 · added workaround to readme in case pipx cant parse the python version · rklasen · closed 6 months ago · 0 comments
#143 · Can't install versions > 0.0.8 due to wrong python version even when Python 3.11.6 is installed · rklasen · closed 6 months ago · 2 comments
#142 · Ascii codec cant decode byte · asusdisciple · closed 6 months ago · 1 comment
#141 · You are attempting to use Flash Attention 2.0 with a model not initialized on GPU. Make sure to move the model to GPU after initializing it on CPU with `model.to('cuda')` · asusdisciple · closed 6 months ago · 1 comment
#140 · Cuda out of Memory Error · omarsiddiqi224 · closed 6 months ago · 4 comments
#139 · Better Diarization pipeline · omarsiddiqi224 · opened 6 months ago · 6 comments
#138 · Timestamp is not precise enough. · genicsoft · closed 6 months ago · 1 comment
#137 · Updated to newer usage of flash attn 2, getting rid of warning message · oliverwehrens · closed 6 months ago · 0 comments
#136 · Update non-cli code snippet · sanchit-gandhi · closed 6 months ago · 0 comments
#135 · pipx install failed on BetterTransformer · Utopiah · closed 6 months ago · 6 comments