go-skynet / go-llama.cpp

LLama.cpp golang bindings
MIT License

Long Load times on Apple Metal #109

Closed. pawalt closed this issue 1 year ago.

pawalt commented 1 year ago

Hey folks, thanks for making this library! I'm looking forward to using it in my own code. When I try to use Apple Metal, inference moves quickly once the model is loaded, but the load times are very long. Any idea what's going on? I'm on a MacBook Pro with an M1 Pro. Thanks!
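The CLI run below shows the behavior with the bundled `main` example. For reference, the equivalent through the Go bindings would look roughly like the sketch that follows; the option names `SetContext`, `SetGPULayers`, and `SetThreads` are assumptions about the bindings' API at the time, so check them against your checkout:

```go
package main

import (
	"fmt"

	llama "github.com/go-skynet/go-llama.cpp"
)

func main() {
	// Load the model with a 128-token context and one layer offloaded
	// to the GPU, mirroring the -ngl 1 flag passed to the main example.
	l, err := llama.New(
		"../gptlsp/models/WizardLM-7B-uncensored.ggmlv3.q4_0.bin",
		llama.SetContext(128),
		llama.SetGPULayers(1), // assumed option name; mirrors -ngl 1
	)
	if err != nil {
		panic(err)
	}
	defer l.Free()

	// Single-threaded inference, mirroring -t 1.
	out, err := l.Predict("write a python function to glob files",
		llama.SetThreads(1), // assumed option name; mirrors -t 1
	)
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```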

$ ./main -m ../gptlsp/models/WizardLM-7B-uncensored.ggmlv3.q4_0.bin -t 1 -ngl 1 
llama.cpp: loading model from ../gptlsp/models/WizardLM-7B-uncensored.ggmlv3.q4_0.bin
llama_model_load_internal: format     = ggjt v3 (latest)
llama_model_load_internal: n_vocab    = 32001
llama_model_load_internal: n_ctx      = 128
llama_model_load_internal: n_embd     = 4096
llama_model_load_internal: n_mult     = 256
llama_model_load_internal: n_head     = 32
llama_model_load_internal: n_layer    = 32
llama_model_load_internal: n_rot      = 128
llama_model_load_internal: ftype      = 2 (mostly Q4_0)
llama_model_load_internal: n_ff       = 11008
llama_model_load_internal: n_parts    = 1
llama_model_load_internal: model size = 7B
llama_model_load_internal: ggml ctx size =    0.07 MB
llama_model_load_internal: mem required  = 5407.72 MB (+ 1026.00 MB per state)
...................................................................................................
llama_init_from_file: kv self size  =   64.00 MB
ggml_metal_init: allocating
ggml_metal_init: using MPS
ggml_metal_init: loading '/Users/peytonwalters/projects/go-llama.cpp/ggml-metal.metal'
ggml_metal_init: loaded kernel_add                            0x148609f80
ggml_metal_init: loaded kernel_mul                            0x14860ab70
ggml_metal_init: loaded kernel_mul_row                        0x14860b160
ggml_metal_init: loaded kernel_scale                          0x14860b750
ggml_metal_init: loaded kernel_silu                           0x14860bd40
ggml_metal_init: loaded kernel_relu                           0x14860c330
ggml_metal_init: loaded kernel_gelu                           0x14860c920
ggml_metal_init: loaded kernel_soft_max                       0x14860d230
ggml_metal_init: loaded kernel_diag_mask_inf                  0x14860d940
ggml_metal_init: loaded kernel_get_rows_f16                   0x14860e080
ggml_metal_init: loaded kernel_get_rows_q4_0                  0x14860e630
ggml_metal_init: loaded kernel_get_rows_q4_1                  0x14860ed50
ggml_metal_init: loaded kernel_get_rows_q2_k                  0x14860f300
ggml_metal_init: loaded kernel_get_rows_q3_k                  0x14860f8b0
ggml_metal_init: loaded kernel_get_rows_q4_k                  0x14860fe60
ggml_metal_init: loaded kernel_get_rows_q5_k                  0x148610410
ggml_metal_init: loaded kernel_get_rows_q6_k                  0x1486109c0
ggml_metal_init: loaded kernel_rms_norm                       0x1486112d0
ggml_metal_init: loaded kernel_mul_mat_f16_f32                0x148611bf0
ggml_metal_init: loaded kernel_mul_mat_q4_0_f32               0x148612360
ggml_metal_init: loaded kernel_mul_mat_q4_1_f32               0x148612960
ggml_metal_init: loaded kernel_mul_mat_q2_k_f32               0x148612f60
ggml_metal_init: loaded kernel_mul_mat_q3_k_f32               0x148613580
ggml_metal_init: loaded kernel_mul_mat_q4_k_f32               0x148613d00
ggml_metal_init: loaded kernel_mul_mat_q5_k_f32               0x148614300
ggml_metal_init: loaded kernel_mul_mat_q6_k_f32               0x148614900
ggml_metal_init: loaded kernel_rope                           0x148615450
ggml_metal_init: loaded kernel_cpy_f32_f16                    0x148616150
ggml_metal_init: loaded kernel_cpy_f32_f32                    0x148616c50
ggml_metal_add_buffer: allocated 'data            ' buffer, size =  3616.08 MB
ggml_metal_add_buffer: allocated 'eval            ' buffer, size =   768.00 MB
ggml_metal_add_buffer: allocated 'kv              ' buffer, size =    66.00 MB
ggml_metal_add_buffer: allocated 'scr0            ' buffer, size =   512.00 MB
ggml_metal_add_buffer: allocated 'scr1            ' buffer, size =   512.00 MB
Model loaded successfully.
>>> write a python function to glob files

Sending write a python function to glob files

```python
import os
def glob_files(path):
    for dirpath, dirnames, filenames in os.walk(path):
        for file in filenames:
            print("Found file:", file)

```

This function uses the os.walk() function to recursively traverse the directory tree rooted at the given path. It then iterates through each subdirectory and its contents, printing out any files found. The function can be customized to filter or limit the output as needed.

llama_print_timings:        load time = 276625.80 ms
llama_print_timings:      sample time =     78.89 ms /   112 runs   (    0.70 ms per token)
llama_print_timings: prompt eval time =   2083.72 ms /     9 tokens (  231.52 ms per token)
llama_print_timings:        eval time =   5143.88 ms /   111 runs   (   46.34 ms per token)
llama_print_timings:       total time =   7317.77 ms

Embeddings: [0.3125355 -0.49622384 0.8355821 -5.7579308 0.7423292 0.9061695 1.0729696 0.55204344 -0.4947705 0.76542914 -0.08652469 -0.92549866 0.56039613 1.3558147 1.5783771 2.0912528 0.83712846 2.230342 1.7294012 0.91197115 2.3878574 0.17538485 1.0380409 -1.9442692 -0.75153935 -0.46291435 -1.0816864 -1.6969684 2.5978549 0.07971512 2.4507911 -0.43762016 -1.121216 3.2154222 -1.623437 2.3818705 -0.54442763 -0.036091648 0.44333118 1.4332048 -2.0323622 -4.9638834 -0.10573294 -0.6578927 -2.525819 -1.7579216 -0.082311146 -1.282202 -0.3188231 -0.30106577 0.18109736 -1.5109167 -0.46574533 -0.7783518 1.4447253 -0.07444402 0.92188036 -1.6830924 0.9693521 0.6487904 0.00760857 -1.8059875 0.028990952 5.1491437 0.30388048 0.8748815 1.0947195 -0.81968737 2.2706778 -2.470005 -1.365771 1.4964849 0.9554498 0.9874882 -0.21673784 -0.9545497 2.3141744 -0.13471882 -0.72715926 3.1383698 1.1182431 -0.19498359 1.4391023 -1.9303225 0.10249053 0.14131173 -0.8227461 -0.73139864 0.05322086 -0.9561012 0.12730123 2.825649 -1.9981018 0.1599991 0.60884464 -0.4810178 -1.1864207 0.40985224 0.018653302 -0.80309135 1.0827796 -0.9229907 2.0702777 -0.97203946 -2.9991443 -1.6874504 1.6150079 -0.31088957 -2.1397588 -0.3066519 0.3554739 1.0634847 -0.5357307 -0.30370006 -0.3489394 0.13483998 0.28904593 -0.11815561 0.8544113 -4.3085527 0.82415545 0.07161723 -0.1693164 1.4458497 0.8364489 3.3607686 2.5930157 -1.0991863]
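For scale: the reported load time of 276625.80 ms works out to about 276.6 s, roughly 4.6 minutes, while eval itself runs at only ~46 ms per token. That gap between apparent load time and actual inference speed is what the question above is about.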

pawalt commented 1 year ago

🤦‍♂️ I had to hit return twice to get the model to start inference. Please ignore :)
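In other words, the model was never slow to load: the process was sitting at the `>>>` prompt waiting for input, and the wall-clock "load time" included that idle time. A hypothetical sketch (not the actual example code) of an input loop that would behave this way, where a blank line terminates the prompt and so two returns are needed before inference starts:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// readMultiLine collects lines until the first blank one, so the user
// must press return twice: once to end the line, once to submit it.
func readMultiLine(r *bufio.Scanner) string {
	var lines []string
	for r.Scan() {
		line := r.Text()
		if line == "" { // blank line terminates the prompt
			break
		}
		lines = append(lines, line)
	}
	return strings.Join(lines, "\n")
}

func main() {
	in := bufio.NewScanner(os.Stdin)
	fmt.Print(">>> ")
	prompt := readMultiLine(in) // blocks here until the second return
	fmt.Println("Sending", prompt)
	// ... Predict(prompt) would run here; any timer spanning
	// "program start to first token" counts the idle wait above.
}
```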