ggerganov / whisper.cpp

Port of OpenAI's Whisper model in C/C++

ggml_new_object: not enough space in the context's memory pool #1820

Open officialasishkumar opened 7 months ago

officialasishkumar commented 7 months ago

While trying to run ./examples/talk, I'm getting this error:

charon@charon:~/coding/open-source/not-contributing/whisper.cpp$ ./talk -p Sanata
whisper_init_from_file_with_params_no_state: loading model from 'models/ggml-base.en.bin'
whisper_model_load: loading model
whisper_model_load: n_vocab       = 51864
whisper_model_load: n_audio_ctx   = 1500
whisper_model_load: n_audio_state = 512
whisper_model_load: n_audio_head  = 8
whisper_model_load: n_audio_layer = 6
whisper_model_load: n_text_ctx    = 448
whisper_model_load: n_text_state  = 512
whisper_model_load: n_text_head   = 8
whisper_model_load: n_text_layer  = 6
whisper_model_load: n_mels        = 80
whisper_model_load: ftype         = 1
whisper_model_load: qntvr         = 0
whisper_model_load: type          = 2 (base)
whisper_model_load: adding 1607 extra tokens
whisper_model_load: n_langs       = 99
whisper_model_load:      CPU total size =   147.46 MB (1 buffers)
whisper_model_load: model size    =  147.37 MB
whisper_init_state: kv self size  =   16.52 MB
whisper_init_state: kv cross size =   18.43 MB
whisper_init_state: compute buffer (conv)   =   16.17 MB
whisper_init_state: compute buffer (encode) =   94.42 MB
whisper_init_state: compute buffer (cross)  =    5.08 MB
whisper_init_state: compute buffer (decode) =  105.96 MB
gpt2_model_load: loading model from 'models/ggml-gpt-2-117M.bin'
gpt2_model_load: n_vocab = 50257
gpt2_model_load: n_ctx   = 1024
gpt2_model_load: n_embd  = 768
gpt2_model_load: n_head  = 12
gpt2_model_load: n_layer = 12
gpt2_model_load: ftype   = 1
gpt2_model_load: ggml ctx size = 384.74 MB
ggml_new_object: not enough space in the context's memory pool (needed 403447760, available 403425792)
Segmentation fault (core dumped)
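
For context, the crash comes from ggml's fixed-size memory pool. The gpt-2 loader in the talk example preallocates a ggml context of an estimated size (the 384.74 MB printed above) and then creates every model tensor inside that pool. Here the estimate comes up roughly 22 KB short (needed 403447760 bytes, available 403425792), so ggml_new_object fails and the subsequent NULL dereference segfaults. A minimal sketch of that failure mode, using illustrative sizes rather than the example's real ones:

```c
#include "ggml.h"
#include <stdio.h>

int main(void) {
    // Preallocate a deliberately undersized pool, mimicking a size estimate
    // that comes up a little short of what the tensors actually need.
    struct ggml_init_params params = {
        /*.mem_size   =*/ 16 * 1024,   // 16 KiB pool (illustrative)
        /*.mem_buffer =*/ NULL,
        /*.no_alloc   =*/ false,
    };
    struct ggml_context * ctx = ggml_init(params);

    // Every ggml_new_tensor_* call carves an object header plus the tensor
    // data out of the same pool. Once the pool is exhausted, ggml prints
    // "ggml_new_object: not enough space in the context's memory pool" and
    // (in release builds) returns NULL; dereferencing that NULL tensor is
    // what produces the segmentation fault seen above.
    for (int i = 0; i < 64; ++i) {
        struct ggml_tensor * t = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 256);
        if (t == NULL) {
            fprintf(stderr, "tensor %d could not be allocated: pool exhausted\n", i);
            break;
        }
    }

    ggml_free(ctx);
    return 0;
}
```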
ggerganov commented 7 months ago

The example needs to be updated. See #1818.
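
For anyone hitting the same crash on an older checkout: the usual remedy in ggml-based loaders is to pad the context-size estimate with the per-tensor bookkeeping overhead before calling ggml_init. The sketch below shows that general pattern; it is not necessarily the exact change made in #1818, and the names and byte counts are placeholders:

```c
#include "ggml.h"

// Hypothetical size estimate for a GPT-2-style model. data_bytes and
// n_tensors are placeholders, not the talk example's real numbers.
static size_t estimate_ctx_size(size_t data_bytes, int n_tensors) {
    size_t ctx_size = data_bytes;

    // Each tensor created in the context also consumes an object header and
    // a tensor struct from the same pool; omitting this overhead is what
    // leaves the pool a few tens of KB short, as in the log above.
    ctx_size += (size_t) n_tensors * ggml_tensor_overhead();

    return ctx_size;
}
```

With the padded estimate passed as mem_size to ggml_init, the pool covers both the tensor data and ggml's own bookkeeping, so ggml_new_object no longer runs out of space mid-load.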