jiabochao opened this issue 2 months ago
While loading the model, the program exits with the following error:
```
whisper_init_from_file_with_params_no_state: loading model from '/Users/bochao/models/whisper/large-v3/ggml-large-v3-q5_0.bin'
whisper_model_load: loading model
whisper_model_load: n_vocab       = 51866
whisper_model_load: n_audio_ctx   = 1500
whisper_model_load: n_audio_state = 1280
whisper_model_load: n_audio_head  = 20
whisper_model_load: n_audio_layer = 32
whisper_model_load: n_text_ctx    = 448
whisper_model_load: n_text_state  = 1280
whisper_model_load: n_text_head   = 20
whisper_model_load: n_text_layer  = 32
whisper_model_load: n_mels        = 128
whisper_model_load: ftype         = 8
whisper_model_load: qntvr         = 2
whisper_model_load: type          = 5 (large v3)
whisper_model_load: adding 1609 extra tokens
whisper_model_load: n_langs       = 100
whisper_backend_init: using Metal backend
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M2 Pro
ggml_metal_init: picking default device: Apple M2 Pro
ggml_metal_init: default.metallib not found, loading from source
ggml_metal_init: GGML_METAL_PATH_RESOURCES = nil
ggml_metal_init: error: could not use bundle path to find ggml-metal.metal, falling back to trying cwd
ggml_metal_init: loading 'ggml-metal.metal'
ggml_metal_init: GPU name:   Apple M2 Pro
ggml_metal_init: GPU family: MTLGPUFamilyApple8  (1008)
ggml_metal_init: GPU family: MTLGPUFamilyCommon3 (3003)
ggml_metal_init: GPU family: MTLGPUFamilyMetal3  (5001)
ggml_metal_init: simdgroup reduction support   = true
ggml_metal_init: simdgroup matrix mul. support = true
ggml_metal_init: hasUnifiedMemory              = true
ggml_metal_init: recommendedMaxWorkingSetSize  = 11453.25 MB
ggml_backend_metal_buffer_type_alloc_buffer: allocated buffer, size = 1030.91 MiB, ( 1033.02 / 10922.67)
whisper_model_load: Metal buffer size = 1080.97 MB
Assertion failed: (((uintptr_t)addr % talloc->alignment) == 0), function ggml_tallocr_alloc, file ggml-alloc.c, line 101.
 ELIFECYCLE  Command failed with exit code 1.
```
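For context, the failing assertion in `ggml-alloc.c` checks that a tensor's buffer address is a multiple of the allocator's required alignment. A minimal Rust sketch of that check (names simplified, ported here only for illustration; this is not the actual ggml source):

```rust
// Mirrors the C condition ((uintptr_t)addr % talloc->alignment) == 0
// from ggml_tallocr_alloc, expressed over a raw address value.
fn is_aligned(addr: usize, alignment: usize) -> bool {
    addr % alignment == 0
}

fn main() {
    // A buffer placed on a 16-byte boundary passes the check;
    // an address one byte past it fails, which is the situation
    // the assertion in the log is guarding against.
    #[repr(align(16))]
    struct Aligned([u8; 32]);
    let buf = Aligned([0u8; 32]);
    let base = buf.0.as_ptr() as usize;

    assert!(is_aligned(base, 16));
    assert!(!is_aligned(base + 1, 16));
    println!("base is 16-byte aligned: {}", is_aligned(base, 16));
}
```

So the crash indicates that the Metal buffer handed to the tensor allocator was not aligned the way ggml expects, rather than a problem with the model file itself.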
Version:

```toml
whisper-rs = { version = "0.11.1", features = ["metal"] }
```
This problem only occurs when a Rust binding for llama.cpp is also used in the same project.
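For example, a `Cargo.toml` dependency combination along these lines could trigger the conflict (the `llama-cpp-2` crate name and version here are an assumption for illustration; the actual llama.cpp binding in use may differ):

```toml
[dependencies]
# From the original report
whisper-rs = { version = "0.11.1", features = ["metal"] }

# Hypothetical llama.cpp binding linked into the same binary;
# both crates bundle their own copy of ggml, which can clash at link time.
llama-cpp-2 = { version = "0.1", features = ["metal"] }
```

When two crates each vendor ggml, the linker may resolve symbols from one copy against state from the other, which is one plausible way an alignment invariant could be violated.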