MiscellaneousStuff / openai-whisper-cpu
Improving transcription performance of OpenAI Whisper for CPU-based deployment
MIT License · 237 stars · 19 forks
Issues
#15 · Help Me Fix This error · Brilliancoder · opened 1 year ago · 0 comments
#14 · whisper not found error · vinny-888 · opened 1 year ago · 1 comment
#13 · how i do that? · Skilatchi2020 · opened 1 year ago · 0 comments
#12 · Model not quantized · Jawad1347 · opened 1 year ago · 2 comments
#11 · Does not seem to work on older CPUs · qwertyuu · opened 1 year ago · 1 comment
#10 · Inference Accuracy Benchmark Figures · willupowers · opened 1 year ago · 0 comments
#9 · Slower results from quantized model potentially due to warning prints · Mijawel · opened 1 year ago · 4 comments
#8 · Can I save the quantized model to disk to avoid calling `torch.quantization.quantize_dynamic` each time? · shy2052 · opened 1 year ago · 2 comments
#7 · Punctuation is after conversion for Turkish language. · hamsipower · opened 1 year ago · 0 comments
#6 · Is it possible to take advantage of the quantization while using a separate fork? · petiatil · closed 1 year ago · 2 comments
#5 · how to use for non-English language? · gnmarten · closed 1 year ago · 1 comment
#4 · How to implement it? · stevevaius2015 · closed 1 year ago · 4 comments
#3 · Question about the minimal required changes for CPU improvement · albertofernandezvillan · closed 2 years ago · 1 comment
#2 · openai-whisper-cpu-docker · Philipp-Sc · closed 2 years ago · 0 comments
#1 · Model Size and Inference Time do not change · Philipp-Sc · closed 1 year ago · 4 comments
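Several of the issues above (#1, #8, #12) revolve around PyTorch dynamic quantization, which is the technique this repository applies to Whisper. As a minimal sketch of what #8 asks about (the toy `nn.Sequential` model and the `quantized_model.pt` filename are illustrative stand-ins, not the repo's actual code), one can quantize once with `torch.quantization.quantize_dynamic` and persist the resulting module:

```python
import torch
import torch.nn as nn

# Stand-in float model; the repo applies the same call to Whisper's Linear layers.
model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 8))

# Dynamic quantization: Linear weights stored as int8, activations
# quantized on the fly at inference time (CPU only).
qmodel = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# Persist the whole quantized module so later runs can skip re-quantizing.
# (Saving only the state_dict would require rebuilding the quantized
# skeleton before loading into it.)
torch.save(qmodel, "quantized_model.pt")

# weights_only=False is needed to unpickle a full nn.Module object.
reloaded = torch.load("quantized_model.pt", weights_only=False)
out = reloaded(torch.randn(1, 64))
```

Note that dynamic quantization only rewrites the layer types listed in the set argument, which is one plausible explanation for reports like #1 and #12: if the targeted layers are not reached (or the model runs on GPU), size and inference time stay unchanged.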