ggerganov / whisper.cpp

Port of OpenAI's Whisper model in C/C++
MIT License

On WSL 2 / Ubuntu, compiled with BLAS support but the GPU isn't used at runtime #1413

Open · spullara opened 11 months ago

spullara commented 11 months ago

Here is the output:

(base) sam@4090pc:~/whisper.cpp$ ./main -ojf -t 16 -p 4 -tdrz -m models/ggml-large.bin sampullarainterview.wav
whisper_init_from_file_no_state: loading model from 'models/ggml-large.bin'
whisper_model_load: loading model
whisper_model_load: n_vocab       = 51865
whisper_model_load: n_audio_ctx   = 1500
whisper_model_load: n_audio_state = 1280
whisper_model_load: n_audio_head  = 20
whisper_model_load: n_audio_layer = 32
whisper_model_load: n_text_ctx    = 448
whisper_model_load: n_text_state  = 1280
whisper_model_load: n_text_head   = 20
whisper_model_load: n_text_layer  = 32
whisper_model_load: n_mels        = 80
whisper_model_load: ftype         = 1
whisper_model_load: qntvr         = 0
whisper_model_load: type          = 5
whisper_model_load: adding 1608 extra tokens
whisper_model_load: model ctx     = 2951.27 MB
ggml_init_cublas: found 1 CUDA devices:
  Device 0: NVIDIA GeForce RTX 4090, compute capability 8.9
whisper_model_load: model size    = 2950.66 MB
whisper_init_state: kv self size  =   70.00 MB
whisper_init_state: kv cross size =  234.38 MB
whisper_init_state: compute buffer (conv)   =   31.68 MB
whisper_init_state: compute buffer (encode) =  202.43 MB
whisper_init_state: compute buffer (cross)  =    8.79 MB
whisper_init_state: compute buffer (decode) =   59.30 MB

system_info: n_threads = 64 / 32 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | METAL = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | COREML = 0 | OPENVINO = 0 |

main: processing 'sampullarainterview.wav' (14126649 samples, 882.9 sec), 16 threads, 4 processors, lang = en, task = transcribe, tdrz = 1, timestamps = 1 ...

whisper_init_state: kv self size  =   70.00 MB
whisper_init_state: kv cross size =  234.38 MB
whisper_init_state: compute buffer (conv)   =   31.68 MB
whisper_init_state: compute buffer (encode) =  202.43 MB
whisper_init_state: compute buffer (cross)  =    8.79 MB
whisper_init_state: compute buffer (decode) =   59.30 MB
whisper_init_state: kv self size  =   70.00 MB
whisper_init_state: kv cross size =  234.38 MB
whisper_init_state: compute buffer (conv)   =   31.68 MB
whisper_init_state: compute buffer (encode) =  202.43 MB
whisper_init_state: compute buffer (cross)  =    8.79 MB
whisper_init_state: compute buffer (decode) =   59.30 MB
whisper_init_state: kv self size  =   70.00 MB
whisper_init_state: kv cross size =  234.38 MB
whisper_init_state: compute buffer (conv)   =   31.68 MB
whisper_init_state: compute buffer (encode) =  202.43 MB
whisper_init_state: compute buffer (cross)  =    8.79 MB
whisper_init_state: compute buffer (decode) =   59.30 MB

The GPU is detected and system_info reports BLAS = 1, yet transcription still runs entirely on the CPU.
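The ggml_init_cublas line above shows the CUDA build is in place, but for anyone hitting the same symptom it's worth double-checking what the binary actually links against and rebuilding with the CUDA path explicitly enabled (a minimal sketch; WHISPER_CUBLAS was the Makefile flag for the cuBLAS build at this point, and exact paths/flags may differ on other setups):

# check which BLAS implementation the binary links against
ldd ./main | grep -iE 'cublas|openblas'

# rebuild from scratch with the cuBLAS path enabled
make clean
WHISPER_CUBLAS=1 make -j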

bobqianic commented 11 months ago

This is to be expected, as full GPU offload hasn't been implemented for NVIDIA GPUs yet.
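With a cuBLAS-only build like this one, the GPU is used just for the large matrix multiplications, with tensors copied to and from the device for each of them, so most of the work still happens on the CPU and GPU utilization shows up as short bursts rather than sustained load. One way to observe this during a run (assuming nvidia-smi is available, as it typically is with the CUDA driver):

# sample GPU utilization and memory usage once per second while transcribing
nvidia-smi --query-gpu=utilization.gpu,memory.used --format=csv -l 1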