ggerganov / whisper.cpp

Port of OpenAI's Whisper model in C/C++
MIT License

running example use too long time in linux #2248

Open li-henan opened 5 months ago

li-henan commented 5 months ago

Dear author, thanks for your code. I built and ran ./main -m ggml-model-whisper-base.en.bin -f samples/jfk.wav -bs 1, but the process takes about an hour, and the result is as follows. Is there a problem with my installation, or is my Linux environment wrong? I would appreciate any help, thanks!

-> % ./main -m ggml-model-whisper-medium.en-q5_0.bin -f samples/jfk.wav -bs 1
whisper_init_from_file_with_params_no_state: loading model from 'ggml-model-whisper-medium.en-q5_0.bin'
whisper_init_with_params_no_state: use gpu    = 1
whisper_init_with_params_no_state: flash attn = 0
whisper_init_with_params_no_state: gpu_device = 0
whisper_init_with_params_no_state: dtw        = 0
whisper_model_load: loading model
whisper_model_load: n_vocab       = 51864
whisper_model_load: n_audio_ctx   = 1500
whisper_model_load: n_audio_state = 1024
whisper_model_load: n_audio_head  = 16
whisper_model_load: n_audio_layer = 24
whisper_model_load: n_text_ctx    = 448
whisper_model_load: n_text_state  = 1024
whisper_model_load: n_text_head   = 16
whisper_model_load: n_text_layer  = 24
whisper_model_load: n_mels        = 80
whisper_model_load: ftype         = 8
whisper_model_load: qntvr         = 1
whisper_model_load: type          = 4 (medium)
whisper_model_load: adding 1607 extra tokens
whisper_model_load: n_langs       = 99
whisper_model_load: CPU total size = 538.59 MB
whisper_model_load: model size    = 538.59 MB
whisper_mel_init: n_len = 6000, n_len_org = 6000, n_mel = 80
whisper_init_state: kv self size  = 150.99 MB
whisper_init_state: kv cross size = 150.99 MB
whisper_init_state: kv pad size   = 6.29 MB
whisper_init_state: compute buffer (conv)   = 28.68 MB
whisper_init_state: compute buffer (encode) = 594.22 MB
whisper_init_state: compute buffer (cross)  = 7.85 MB
whisper_init_state: compute buffer (decode) = 142.09 MB

system_info: n_threads = 4 / 8 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | METAL = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | CUDA = 0 | COREML = 0 | OPENVINO = 0

main: processing 'samples/jfk.wav' (176000 samples, 11.0 sec), 4 threads, 1 processors, 1 beams + best of 5, lang = en, task = transcribe, timestamps = 1 ...

whisper_mel_init: n_len = 4100, n_len_org = 1099, n_mel = 80

[00:00:00.000 --> 00:00:11.000] And so my fellow Americans, ask not what your country can do for you, ask what you can do for your country.

whisper_print_timings: load time   = 2482.54 ms
whisper_print_timings: fallbacks   = 0 p / 0 h
whisper_print_timings: mel time    = 182.05 ms
whisper_print_timings: sample time = 41.48 ms / 1 runs ( 41.48 ms per run)
whisper_print_timings: encode time = 233645.86 ms / 1 runs (233645.86 ms per run)
whisper_print_timings: decode time = 2577285.00 ms / 27 runs (95455.00 ms per run)
whisper_print_timings: batchd time = 0.00 ms / 1 runs ( 0.00 ms per run)
whisper_print_timings: prompt time = 0.00 ms / 1 runs ( 0.00 ms per run)
whisper_print_timings: total time  = 2814680.00 ms
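(Editor's note, a hedged suggestion: the timings above show ~234 s for a single encoder run and ~95 s per decoder run, and the system_info line reports BLAS = 0 and CUDA = 0, i.e. a plain scalar CPU build. Rebuilding with an accelerated backend usually brings this down dramatically. The sketch below assumes a recent whisper.cpp tree; the exact flag names (GGML_CUDA, GGML_BLAS, older WHISPER_CUBLAS/WHISPER_CUDA) and the binary path have changed across versions, so check the README of your checkout.)

```shell
# Hedged sketch: rebuild whisper.cpp with an accelerated backend.
# Flag names vary across versions -- verify against your tree's README/CMakeLists.

# Option A: CUDA (NVIDIA GPU). Older trees used -DWHISPER_CUBLAS=ON instead.
cmake -B build -DGGML_CUDA=ON
cmake --build build -j --config Release

# Option B: OpenBLAS, for CPU-only machines.
cmake -B build -DGGML_BLAS=ON -DGGML_BLAS_VENDOR=OpenBLAS
cmake --build build -j --config Release

# Re-run and confirm that the system_info line now reports CUDA = 1 or BLAS = 1.
# (Binary may be ./main or ./build/bin/main depending on version.)
./build/bin/main -m ggml-model-whisper-medium.en-q5_0.bin -f samples/jfk.wav -bs 1
```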

tannisroot commented 5 months ago

I have the same issue but with the SYCL backend

SummerAnna commented 1 month ago

On Linux, must the gcc version be higher than 7.3.0? When I compiled, I got this error: /home/gm/桌面/whisper.cpp-master/ggml/src/ggml-aarch64.c:46:54: error: implicit declaration of function ‘_mm256_set_m128i’; did you mean ‘_mm256_set_epi8’? [-Werror=implicit-function-declaration]

#define GGML_F32Cx8x2_LOAD(x, y) _mm512_cvtph_ps(_mm256_set_m128i(_mm_loadu_si128((const __m128i *)(y)), _mm_loadu_si128((const __m128i *)(x))))