ggerganov / llama.cpp

LLM inference in C/C++
MIT License

Segmentation fault during inference (Android Google Pixel 8 Pro) #3690

Closed: theoctopusride closed this issue 12 months ago

theoctopusride commented 1 year ago

Prerequisites

Please answer the following questions for yourself before submitting an issue.

Current Behavior

The server example, built and run under Termux on a Google Pixel 8 Pro, loads the model but segfaults as soon as it serves a request. Setup and reproduction:

pkg update
termux-setup-storage
pkg install python3
apt install -y clang ndk-multilib git make
git clone --depth 1 https://github.com/ggerganov/llama.cpp
make -C llama.cpp -j4
gdb llama.cpp/server
(gdb) r -m /sdcard/Download/llama-2-7b-chat.Q3_K_S.gguf       
Starting program: /data/data/com.termux/files/home/llama.cpp/server -m /sdcard/Download/llama-2-7b-chat.Q3_K_S.gguf 
[Thread debugging using libthread_db enabled]                 
Using host libthread_db library "/data/data/com.termux/files/usr/lib/libthread_db.so".                                      
warning: section .note.gnu.build-id not found in .gnu_debugdata for /apex/com.android.runtime/lib64/bionic/libdl.so         
warning: section .dynsym not found in .gnu_debugdata for /apex/com.android.runtime/lib64/bionic/libdl.so                    
warning: section .gnu.version not found in .gnu_debugdata for /apex/com.android.runtime/lib64/bionic/libdl.so               
warning: section .gnu.version_d not found in .gnu_debugdata for /apex/com.android.runtime/lib64/bionic/libdl.so             
warning: section .gnu.hash not found in .gnu_debugdata for /apex/com.android.runtime/lib64/bionic/libdl.so                  
warning: section .dynstr not found in .gnu_debugdata for /apex/com.android.runtime/lib64/bionic/libdl.so                    
warning: section .rela.plt not found in .gnu_debugdata for /apex/com.android.runtime/lib64/bionic/libdl.so                  
warning: section .eh_frame_hdr not found in .gnu_debugdata for /apex/com.android.runtime/lib64/bionic/libdl.so              
warning: section .eh_frame not found in .gnu_debugdata for /apex/com.android.runtime/lib64/bionic/libdl.so                  
warning: section .plt not found in .gnu_debugdata for /apex/com.android.runtime/lib64/bionic/libdl.so                       
warning: section .dynamic not found in .gnu_debugdata for /apex/com.android.runtime/lib64/bionic/libdl.so                   
warning: section .got.plt not found in .gnu_debugdata for /apex/com.android.runtime/lib64/bionic/libdl.so                   
warning: section .bss not found in .gnu_debugdata for /apex/com.android.runtime/lib64/bionic/libdl.so                       
warning: section .note.android.ident not found in .gnu_debugdata for /apex/com.android.runtime/lib64/bionic/libm.so         
warning: section .note.gnu.build-id not found in .gnu_debugdata for /apex/com.android.runtime/lib64/bionic/libm.so          
warning: section .dynsym not found in .gnu_debugdata for /apex/com.android.runtime/lib64/bionic/libm.so                     
warning: section .gnu.version not found in .gnu_debugdata for /apex/com.android.runtime/lib64/bionic/libm.so                
warning: section .gnu.version_d not found in .gnu_debugdata for /apex/com.android.runtime/lib64/bionic/libm.so              
warning: section .gnu.version_r not found in .gnu_debugdata for /apex/com.android.runtime/lib64/bionic/libm.so              
warning: section .gnu.hash not found in .gnu_debugdata for /apex/com.android.runtime/lib64/bionic/libm.so                   
warning: section .dynstr not found in .gnu_debugdata for /apex/com.android.runtime/lib64/bionic/libm.so                     
warning: section .rela.dyn not found in .gnu_debugdata for /apex/com.android.runtime/lib64/bionic/libm.so                   
warning: section .relr.dyn not found in .gnu_debugdata for /apex/com.android.runtime/lib64/bionic/libm.so                   
warning: section .rela.plt not found in .gnu_debugdata for /apex/com.android.runtime/lib64/bionic/libm.so                   
warning: section .rodata not found in .gnu_debugdata for /apex/com.android.runtime/lib64/bionic/libm.so                     
warning: section .eh_frame_hdr not found in .gnu_debugdata for /apex/com.android.runtime/lib64/bionic/libm.so               
warning: section .eh_frame not found in .gnu_debugdata for /apex/com.android.runtime/lib64/bionic/libm.so                   
warning: section .plt not found in .gnu_debugdata for /apex/com.android.runtime/lib64/bionic/libm.so                        
warning: section .data.rel.ro not found in .gnu_debugdata for /apex/com.android.runtime/lib64/bionic/libm.so                
warning: section .fini_array not found in .gnu_debugdata for /apex/com.android.runtime/lib64/bionic/libm.so                 
warning: section .dynamic not found in .gnu_debugdata for /apex/com.android.runtime/lib64/bionic/libm.so                    
warning: section .got not found in .gnu_debugdata for /apex/com.android.runtime/lib64/bionic/libm.so                        
warning: section .got.plt not found in .gnu_debugdata for /apex/com.android.runtime/lib64/bionic/libm.so                    
warning: section .data not found in .gnu_debugdata for /apex/com.android.runtime/lib64/bionic/libm.so                       
warning: section .bss not found in .gnu_debugdata for /apex/com.android.runtime/lib64/bionic/libm.so                        
warning: section .note.android.ident not found in .gnu_debugdata for /system/lib64/libnetd_client.so                        
warning: section .note.gnu.build-id not found in .gnu_debugdata for /system/lib64/libnetd_client.so                         
warning: section .dynsym not found in .gnu_debugdata for /system/lib64/libnetd_client.so                                    
warning: section .gnu.version not found in .gnu_debugdata for /system/lib64/libnetd_client.so                               
warning: section .gnu.version_r not found in .gnu_debugdata for /system/lib64/libnetd_client.so                             
warning: section .gnu.hash not found in .gnu_debugdata for /system/lib64/libnetd_client.so                                  
warning: section .dynstr not found in .gnu_debugdata for /system/lib64/libnetd_client.so                                    
warning: section .rela.dyn not found in .gnu_debugdata for /system/lib64/libnetd_client.so                                  
warning: section .relr.dyn not found in .gnu_debugdata for /system/lib64/libnetd_client.so                                  
warning: section .rela.plt not found in .gnu_debugdata for /system/lib64/libnetd_client.so                                  
warning: section .rodata not found in .gnu_debugdata for /system/lib64/libnetd_client.so                                    
warning: section .eh_frame_hdr not found in .gnu_debugdata for /system/lib64/libnetd_client.so                              
warning: section .eh_frame not found in .gnu_debugdata for /system/lib64/libnetd_client.so                                  
warning: section .plt not found in .gnu_debugdata for /system/lib64/libnetd_client.so                                       
warning: section .data.rel.ro not found in .gnu_debugdata for /system/lib64/libnetd_client.so                               
warning: section .fini_array not found in .gnu_debugdata for /system/lib64/libnetd_client.so                                
warning: section .dynamic not found in .gnu_debugdata for /system/lib64/libnetd_client.so                                   
warning: section .got not found in .gnu_debugdata for /system/lib64/libnetd_client.so                                       
warning: section .got.plt not found in .gnu_debugdata for /system/lib64/libnetd_client.so                                   
warning: section .data not found in .gnu_debugdata for /system/lib64/libnetd_client.so                                      
warning: section .bss not found in .gnu_debugdata for /system/lib64/libnetd_client.so                                       
warning: section .note.android.ident not found in .gnu_debugdata for /system/lib64/libc++.so                                
warning: section .note.gnu.build-id not found in .gnu_debugdata for /system/lib64/libc++.so                                 
warning: section .dynsym not found in .gnu_debugdata for /system/lib64/libc++.so                                            
warning: section .gnu.version not found in .gnu_debugdata for /system/lib64/libc++.so                                       
warning: section .gnu.version_r not found in .gnu_debugdata for /system/lib64/libc++.so                                     
warning: section .gnu.hash not found in .gnu_debugdata for /system/lib64/libc++.so                                          
warning: section .dynstr not found in .gnu_debugdata for /system/lib64/libc++.so                                            
warning: section .rela.dyn not found in .gnu_debugdata for /system/lib64/libc++.so                                          
warning: section .relr.dyn not found in .gnu_debugdata for /system/lib64/libc++.so                                          
warning: section .rela.plt not found in .gnu_debugdata for /system/lib64/libc++.so                                          
warning: section .rodata not found in .gnu_debugdata for /system/lib64/libc++.so                                            
warning: section .gcc_except_table not found in .gnu_debugdata for /system/lib64/libc++.so                                  
warning: section .eh_frame_hdr not found in .gnu_debugdata for /system/lib64/libc++.so                                      
warning: section .eh_frame not found in .gnu_debugdata for /system/lib64/libc++.so                                          
warning: section .plt not found in .gnu_debugdata for /system/lib64/libc++.so                                               
warning: section .data.rel.ro not found in .gnu_debugdata for /system/lib64/libc++.so                                       
warning: section .fini_array not found in .gnu_debugdata for /system/lib64/libc++.so                                        
warning: section .init_array not found in .gnu_debugdata for /system/lib64/libc++.so                                        
warning: section .dynamic not found in .gnu_debugdata for /system/lib64/libc++.so                                           
warning: section .got not found in .gnu_debugdata for /system/lib64/libc++.so                                               
warning: section .got.plt not found in .gnu_debugdata for /system/lib64/libc++.so                                           
warning: section .data not found in .gnu_debugdata for /system/lib64/libc++.so                                              
warning: section .bss not found in .gnu_debugdata for /system/lib64/libc++.so                                               
{"timestamp":1697750270,"level":"INFO","function":"main","line":1326,"message":"build info","build":1,"commit":"f3b25e4"}   
{"timestamp":1697750270,"level":"INFO","function":"main","line":1332,"message":"system info","n_threads":4,"n_threads_batch":-1,"total_threads":9,
"system_info":"AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 0 | SSSE3 = 0 | VSX = 0 | "}                   
llama_model_loader: loaded meta data with 19 key-value pairs and 291 tensors from /sdcard/Download/llama-2-7b-chat.Q3_K_S.gguf (version GGUF V2 (latest)) 
llama_model_loader: - tensor    0:                token_embd.weight q3_K     [  4096, 32000,     1,     1 ] 
llama_model_loader: - tensor    1:           blk.0.attn_norm.weight f32      [  4096,     1,     1,     1 ] 
llama_model_loader: - tensor    2:            blk.0.ffn_down.weight q3_K     [ 11008,  4096,     1,     1 ] 
llama_model_loader: - tensor    3:            blk.0.ffn_gate.weight q3_K     [  4096, 11008,     1,     1 ] 
llama_model_loader: - tensor    4:              blk.0.ffn_up.weight q3_K     [  4096, 11008,     1,     1 ] 
llama_model_loader: - tensor    5:            blk.0.ffn_norm.weight f32      [  4096,     1,     1,     1 ] 
llama_model_loader: - tensor    6:              blk.0.attn_k.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor    7:         blk.0.attn_output.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor    8:              blk.0.attn_q.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor    9:              blk.0.attn_v.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor   10:           blk.1.attn_norm.weight f32      [  4096,     1,     1,     1 ] 
llama_model_loader: - tensor   11:            blk.1.ffn_down.weight q3_K     [ 11008,  4096,     1,     1 ] 
llama_model_loader: - tensor   12:            blk.1.ffn_gate.weight q3_K     [  4096, 11008,     1,     1 ] 
llama_model_loader: - tensor   13:              blk.1.ffn_up.weight q3_K     [  4096, 11008,     1,     1 ] 
llama_model_loader: - tensor   14:            blk.1.ffn_norm.weight f32      [  4096,     1,     1,     1 ] 
llama_model_loader: - tensor   15:              blk.1.attn_k.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor   16:         blk.1.attn_output.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor   17:              blk.1.attn_q.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor   18:              blk.1.attn_v.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor   19:          blk.10.attn_norm.weight f32      [  4096,     1,     1,     1 ] 
llama_model_loader: - tensor   20:           blk.10.ffn_down.weight q3_K     [ 11008,  4096,     1,     1 ] 
llama_model_loader: - tensor   21:           blk.10.ffn_gate.weight q3_K     [  4096, 11008,     1,     1 ] 
llama_model_loader: - tensor   22:             blk.10.ffn_up.weight q3_K     [  4096, 11008,     1,     1 ] 
llama_model_loader: - tensor   23:           blk.10.ffn_norm.weight f32      [  4096,     1,     1,     1 ] 
llama_model_loader: - tensor   24:             blk.10.attn_k.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor   25:        blk.10.attn_output.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor   26:             blk.10.attn_q.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor   27:             blk.10.attn_v.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor   28:          blk.11.attn_norm.weight f32      [  4096,     1,     1,     1 ] 
llama_model_loader: - tensor   29:           blk.11.ffn_down.weight q3_K     [ 11008,  4096,     1,     1 ] 
llama_model_loader: - tensor   30:           blk.11.ffn_gate.weight q3_K     [  4096, 11008,     1,     1 ] 
llama_model_loader: - tensor   31:             blk.11.ffn_up.weight q3_K     [  4096, 11008,     1,     1 ] 
llama_model_loader: - tensor   32:           blk.11.ffn_norm.weight f32      [  4096,     1,     1,     1 ] 
llama_model_loader: - tensor   33:             blk.11.attn_k.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor   34:        blk.11.attn_output.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor   35:             blk.11.attn_q.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor   36:             blk.11.attn_v.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor   37:          blk.12.attn_norm.weight f32      [  4096,     1,     1,     1 ] 
llama_model_loader: - tensor   38:           blk.12.ffn_down.weight q3_K     [ 11008,  4096,     1,     1 ] 
llama_model_loader: - tensor   39:           blk.12.ffn_gate.weight q3_K     [  4096, 11008,     1,     1 ] 
llama_model_loader: - tensor   40:             blk.12.ffn_up.weight q3_K     [  4096, 11008,     1,     1 ] 
llama_model_loader: - tensor   41:           blk.12.ffn_norm.weight f32      [  4096,     1,     1,     1 ] 
llama_model_loader: - tensor   42:             blk.12.attn_k.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor   43:        blk.12.attn_output.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor   44:             blk.12.attn_q.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor   45:             blk.12.attn_v.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor   46:          blk.13.attn_norm.weight f32      [  4096,     1,     1,     1 ] 
llama_model_loader: - tensor   47:           blk.13.ffn_down.weight q3_K     [ 11008,  4096,     1,     1 ] 
llama_model_loader: - tensor   48:           blk.13.ffn_gate.weight q3_K     [  4096, 11008,     1,     1 ] 
llama_model_loader: - tensor   49:             blk.13.ffn_up.weight q3_K     [  4096, 11008,     1,     1 ] 
llama_model_loader: - tensor   50:           blk.13.ffn_norm.weight f32      [  4096,     1,     1,     1 ] 
llama_model_loader: - tensor   51:             blk.13.attn_k.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor   52:        blk.13.attn_output.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor   53:             blk.13.attn_q.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor   54:             blk.13.attn_v.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor   55:          blk.14.attn_norm.weight f32      [  4096,     1,     1,     1 ] 
llama_model_loader: - tensor   56:           blk.14.ffn_down.weight q3_K     [ 11008,  4096,     1,     1 ] 
llama_model_loader: - tensor   57:           blk.14.ffn_gate.weight q3_K     [  4096, 11008,     1,     1 ] 
llama_model_loader: - tensor   58:             blk.14.ffn_up.weight q3_K     [  4096, 11008,     1,     1 ] 
llama_model_loader: - tensor   59:           blk.14.ffn_norm.weight f32      [  4096,     1,     1,     1 ] 
llama_model_loader: - tensor   60:             blk.14.attn_k.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor   61:        blk.14.attn_output.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor   62:             blk.14.attn_q.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor   63:             blk.14.attn_v.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor   64:          blk.15.attn_norm.weight f32      [  4096,     1,     1,     1 ] 
llama_model_loader: - tensor   65:           blk.15.ffn_down.weight q3_K     [ 11008,  4096,     1,     1 ] 
llama_model_loader: - tensor   66:           blk.15.ffn_gate.weight q3_K     [  4096, 11008,     1,     1 ] 
llama_model_loader: - tensor   67:             blk.15.ffn_up.weight q3_K     [  4096, 11008,     1,     1 ] 
llama_model_loader: - tensor   68:           blk.15.ffn_norm.weight f32      [  4096,     1,     1,     1 ] 
llama_model_loader: - tensor   69:             blk.15.attn_k.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor   70:        blk.15.attn_output.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor   71:             blk.15.attn_q.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor   72:             blk.15.attn_v.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor   73:          blk.16.attn_norm.weight f32      [  4096,     1,     1,     1 ] 
llama_model_loader: - tensor   74:           blk.16.ffn_down.weight q3_K     [ 11008,  4096,     1,     1 ] 
llama_model_loader: - tensor   75:           blk.16.ffn_gate.weight q3_K     [  4096, 11008,     1,     1 ] 
llama_model_loader: - tensor   76:             blk.16.ffn_up.weight q3_K     [  4096, 11008,     1,     1 ] 
llama_model_loader: - tensor   77:           blk.16.ffn_norm.weight f32      [  4096,     1,     1,     1 ] 
llama_model_loader: - tensor   78:             blk.16.attn_k.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor   79:        blk.16.attn_output.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor   80:             blk.16.attn_q.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor   81:             blk.16.attn_v.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor   82:          blk.17.attn_norm.weight f32      [  4096,     1,     1,     1 ] 
llama_model_loader: - tensor   83:           blk.17.ffn_down.weight q3_K     [ 11008,  4096,     1,     1 ] 
llama_model_loader: - tensor   84:           blk.17.ffn_gate.weight q3_K     [  4096, 11008,     1,     1 ] 
llama_model_loader: - tensor   85:             blk.17.ffn_up.weight q3_K     [  4096, 11008,     1,     1 ] 
llama_model_loader: - tensor   86:           blk.17.ffn_norm.weight f32      [  4096,     1,     1,     1 ] 
llama_model_loader: - tensor   87:             blk.17.attn_k.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor   88:        blk.17.attn_output.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor   89:             blk.17.attn_q.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor   90:             blk.17.attn_v.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor   91:          blk.18.attn_norm.weight f32      [  4096,     1,     1,     1 ] 
llama_model_loader: - tensor   92:           blk.18.ffn_down.weight q3_K     [ 11008,  4096,     1,     1 ] 
llama_model_loader: - tensor   93:           blk.18.ffn_gate.weight q3_K     [  4096, 11008,     1,     1 ] 
llama_model_loader: - tensor   94:             blk.18.ffn_up.weight q3_K     [  4096, 11008,     1,     1 ] 
llama_model_loader: - tensor   95:           blk.18.ffn_norm.weight f32      [  4096,     1,     1,     1 ] 
llama_model_loader: - tensor   96:             blk.18.attn_k.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor   97:        blk.18.attn_output.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor   98:             blk.18.attn_q.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor   99:             blk.18.attn_v.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor  100:          blk.19.attn_norm.weight f32      [  4096,     1,     1,     1 ] 
llama_model_loader: - tensor  101:           blk.19.ffn_down.weight q3_K     [ 11008,  4096,     1,     1 ] 
llama_model_loader: - tensor  102:           blk.19.ffn_gate.weight q3_K     [  4096, 11008,     1,     1 ] 
llama_model_loader: - tensor  103:             blk.19.ffn_up.weight q3_K     [  4096, 11008,     1,     1 ] 
llama_model_loader: - tensor  104:           blk.19.ffn_norm.weight f32      [  4096,     1,     1,     1 ] 
llama_model_loader: - tensor  105:             blk.19.attn_k.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor  106:        blk.19.attn_output.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor  107:             blk.19.attn_q.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor  108:             blk.19.attn_v.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor  109:           blk.2.attn_norm.weight f32      [  4096,     1,     1,     1 ] 
llama_model_loader: - tensor  110:            blk.2.ffn_down.weight q3_K     [ 11008,  4096,     1,     1 ] 
llama_model_loader: - tensor  111:            blk.2.ffn_gate.weight q3_K     [  4096, 11008,     1,     1 ] 
llama_model_loader: - tensor  112:              blk.2.ffn_up.weight q3_K     [  4096, 11008,     1,     1 ] 
llama_model_loader: - tensor  113:            blk.2.ffn_norm.weight f32      [  4096,     1,     1,     1 ] 
llama_model_loader: - tensor  114:              blk.2.attn_k.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor  115:         blk.2.attn_output.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor  116:              blk.2.attn_q.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor  117:              blk.2.attn_v.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor  118:          blk.20.attn_norm.weight f32      [  4096,     1,     1,     1 ] 
llama_model_loader: - tensor  119:           blk.20.ffn_down.weight q3_K     [ 11008,  4096,     1,     1 ] 
llama_model_loader: - tensor  120:           blk.20.ffn_gate.weight q3_K     [  4096, 11008,     1,     1 ] 
llama_model_loader: - tensor  121:             blk.20.ffn_up.weight q3_K     [  4096, 11008,     1,     1 ] 
llama_model_loader: - tensor  122:           blk.20.ffn_norm.weight f32      [  4096,     1,     1,     1 ] 
llama_model_loader: - tensor  123:             blk.20.attn_k.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor  124:        blk.20.attn_output.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor  125:             blk.20.attn_q.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor  126:             blk.20.attn_v.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor  127:          blk.21.attn_norm.weight f32      [  4096,     1,     1,     1 ] 
llama_model_loader: - tensor  128:           blk.21.ffn_down.weight q3_K     [ 11008,  4096,     1,     1 ] 
llama_model_loader: - tensor  129:           blk.21.ffn_gate.weight q3_K     [  4096, 11008,     1,     1 ] 
llama_model_loader: - tensor  130:             blk.21.ffn_up.weight q3_K     [  4096, 11008,     1,     1 ] 
llama_model_loader: - tensor  131:           blk.21.ffn_norm.weight f32      [  4096,     1,     1,     1 ] 
llama_model_loader: - tensor  132:             blk.21.attn_k.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor  133:        blk.21.attn_output.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor  134:             blk.21.attn_q.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor  135:             blk.21.attn_v.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor  136:          blk.22.attn_norm.weight f32      [  4096,     1,     1,     1 ] 
llama_model_loader: - tensor  137:           blk.22.ffn_down.weight q3_K     [ 11008,  4096,     1,     1 ] 
llama_model_loader: - tensor  138:           blk.22.ffn_gate.weight q3_K     [  4096, 11008,     1,     1 ] 
llama_model_loader: - tensor  139:             blk.22.ffn_up.weight q3_K     [  4096, 11008,     1,     1 ] 
llama_model_loader: - tensor  140:           blk.22.ffn_norm.weight f32      [  4096,     1,     1,     1 ] 
llama_model_loader: - tensor  141:             blk.22.attn_k.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor  142:        blk.22.attn_output.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor  143:             blk.22.attn_q.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor  144:             blk.22.attn_v.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor  145:          blk.23.attn_norm.weight f32      [  4096,     1,     1,     1 ] 
llama_model_loader: - tensor  146:           blk.23.ffn_down.weight q3_K     [ 11008,  4096,     1,     1 ]                 
llama_model_loader: - tensor  147:           blk.23.ffn_gate.weight q3_K     [  4096, 11008,     1,     1 ] 
llama_model_loader: - tensor  148:             blk.23.ffn_up.weight q3_K     [  4096, 11008,     1,     1 ] 
llama_model_loader: - tensor  149:           blk.23.ffn_norm.weight f32      [  4096,     1,     1,     1 ] 
llama_model_loader: - tensor  150:             blk.23.attn_k.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor  151:        blk.23.attn_output.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor  152:             blk.23.attn_q.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor  153:             blk.23.attn_v.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor  154:           blk.3.attn_norm.weight f32      [  4096,     1,     1,     1 ] 
llama_model_loader: - tensor  155:            blk.3.ffn_down.weight q3_K     [ 11008,  4096,     1,     1 ] 
llama_model_loader: - tensor  156:            blk.3.ffn_gate.weight q3_K     [  4096, 11008,     1,     1 ] 
llama_model_loader: - tensor  157:              blk.3.ffn_up.weight q3_K     [  4096, 11008,     1,     1 ] 
llama_model_loader: - tensor  158:            blk.3.ffn_norm.weight f32      [  4096,     1,     1,     1 ] 
llama_model_loader: - tensor  159:              blk.3.attn_k.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor  160:         blk.3.attn_output.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor  161:              blk.3.attn_q.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor  162:              blk.3.attn_v.weight q3_K     [  4096,  4096,     1,     1 ]                 
llama_model_loader: - tensor  163:           blk.4.attn_norm.weight f32      [  4096,     1,     1,     1 ] 
llama_model_loader: - tensor  164:            blk.4.ffn_down.weight q3_K     [ 11008,  4096,     1,     1 ] 
llama_model_loader: - tensor  165:            blk.4.ffn_gate.weight q3_K     [  4096, 11008,     1,     1 ]                 
llama_model_loader: - tensor  166:              blk.4.ffn_up.weight q3_K     [  4096, 11008,     1,     1 ] 
llama_model_loader: - tensor  167:            blk.4.ffn_norm.weight f32      [  4096,     1,     1,     1 ] 
llama_model_loader: - tensor  168:              blk.4.attn_k.weight q3_K     [  4096,  4096,     1,     1 ]                 
llama_model_loader: - tensor  169:         blk.4.attn_output.weight q3_K     [  4096,  4096,     1,     1 ]                 
llama_model_loader: - tensor  170:              blk.4.attn_q.weight q3_K     [  4096,  4096,     1,     1 ]                 
llama_model_loader: - tensor  171:              blk.4.attn_v.weight q3_K     [  4096,  4096,     1,     1 ]                 
llama_model_loader: - tensor  172:           blk.5.attn_norm.weight f32      [  4096,     1,     1,     1 ]                 
llama_model_loader: - tensor  173:            blk.5.ffn_down.weight q3_K     [ 11008,  4096,     1,     1 ]                 
llama_model_loader: - tensor  174:            blk.5.ffn_gate.weight q3_K     [  4096, 11008,     1,     1 ]                 
llama_model_loader: - tensor  175:              blk.5.ffn_up.weight q3_K     [  4096, 11008,     1,     1 ]                 
llama_model_loader: - tensor  176:            blk.5.ffn_norm.weight f32      [  4096,     1,     1,     1 ]                 
llama_model_loader: - tensor  177:              blk.5.attn_k.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor  178:         blk.5.attn_output.weight q3_K     [  4096,  4096,     1,     1 ]                 
llama_model_loader: - tensor  179:              blk.5.attn_q.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor  180:              blk.5.attn_v.weight q3_K     [  4096,  4096,     1,     1 ]                 
llama_model_loader: - tensor  181:           blk.6.attn_norm.weight f32      [  4096,     1,     1,     1 ]                 
llama_model_loader: - tensor  182:            blk.6.ffn_down.weight q3_K     [ 11008,  4096,     1,     1 ] 
llama_model_loader: - tensor  183:            blk.6.ffn_gate.weight q3_K     [  4096, 11008,     1,     1 ] 
llama_model_loader: - tensor  184:              blk.6.ffn_up.weight q3_K     [  4096, 11008,     1,     1 ] 
llama_model_loader: - tensor  185:            blk.6.ffn_norm.weight f32      [  4096,     1,     1,     1 ] 
llama_model_loader: - tensor  186:              blk.6.attn_k.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor  187:         blk.6.attn_output.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor  188:              blk.6.attn_q.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor  189:              blk.6.attn_v.weight q3_K     [  4096,  4096,     1,     1 ]                 
llama_model_loader: - tensor  190:           blk.7.attn_norm.weight f32      [  4096,     1,     1,     1 ] 
llama_model_loader: - tensor  191:            blk.7.ffn_down.weight q3_K     [ 11008,  4096,     1,     1 ] 
llama_model_loader: - tensor  192:            blk.7.ffn_gate.weight q3_K     [  4096, 11008,     1,     1 ] 
llama_model_loader: - tensor  193:              blk.7.ffn_up.weight q3_K     [  4096, 11008,     1,     1 ] 
llama_model_loader: - tensor  194:            blk.7.ffn_norm.weight f32      [  4096,     1,     1,     1 ] 
llama_model_loader: - tensor  195:              blk.7.attn_k.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor  196:         blk.7.attn_output.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor  197:              blk.7.attn_q.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor  198:              blk.7.attn_v.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor  199:           blk.8.attn_norm.weight f32      [  4096,     1,     1,     1 ] 
llama_model_loader: - tensor  200:            blk.8.ffn_down.weight q3_K     [ 11008,  4096,     1,     1 ] 
llama_model_loader: - tensor  201:            blk.8.ffn_gate.weight q3_K     [  4096, 11008,     1,     1 ] 
llama_model_loader: - tensor  202:              blk.8.ffn_up.weight q3_K     [  4096, 11008,     1,     1 ] 
llama_model_loader: - tensor  203:            blk.8.ffn_norm.weight f32      [  4096,     1,     1,     1 ] 
llama_model_loader: - tensor  204:              blk.8.attn_k.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor  205:         blk.8.attn_output.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor  206:              blk.8.attn_q.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor  207:              blk.8.attn_v.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor  208:           blk.9.attn_norm.weight f32      [  4096,     1,     1,     1 ] 
llama_model_loader: - tensor  209:            blk.9.ffn_down.weight q3_K     [ 11008,  4096,     1,     1 ] 
llama_model_loader: - tensor  210:            blk.9.ffn_gate.weight q3_K     [  4096, 11008,     1,     1 ] 
llama_model_loader: - tensor  211:              blk.9.ffn_up.weight q3_K     [  4096, 11008,     1,     1 ] 
llama_model_loader: - tensor  212:            blk.9.ffn_norm.weight f32      [  4096,     1,     1,     1 ] 
llama_model_loader: - tensor  213:              blk.9.attn_k.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor  214:         blk.9.attn_output.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor  215:              blk.9.attn_q.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor  216:              blk.9.attn_v.weight q3_K     [  4096,  4096,     1,     1 ]                 
llama_model_loader: - tensor  217:                    output.weight q6_K     [  4096, 32000,     1,     1 ] 
llama_model_loader: - tensor  218:          blk.24.attn_norm.weight f32      [  4096,     1,     1,     1 ]                 
llama_model_loader: - tensor  219:           blk.24.ffn_down.weight q3_K     [ 11008,  4096,     1,     1 ]                 
llama_model_loader: - tensor  220:           blk.24.ffn_gate.weight q3_K     [  4096, 11008,     1,     1 ] 
llama_model_loader: - tensor  221:             blk.24.ffn_up.weight q3_K     [  4096, 11008,     1,     1 ] 
llama_model_loader: - tensor  222:           blk.24.ffn_norm.weight f32      [  4096,     1,     1,     1 ]                 
llama_model_loader: - tensor  223:             blk.24.attn_k.weight q3_K     [  4096,  4096,     1,     1 ]                 
llama_model_loader: - tensor  224:        blk.24.attn_output.weight q3_K     [  4096,  4096,     1,     1 ] 
llama_model_loader: - tensor  225:             blk.24.attn_q.weight q3_K     [  4096,  4096,     1,     1 ]                 
llama_model_loader: - tensor  226:             blk.24.attn_v.weight q3_K     [  4096,  4096,     1,     1 ]                 
llama_model_loader: - tensor  227:          blk.25.attn_norm.weight f32      [  4096,     1,     1,     1 ]                 
llama_model_loader: - tensor  228:           blk.25.ffn_down.weight q3_K     [ 11008,  4096,     1,     1 ]                 
llama_model_loader: - tensor  229:           blk.25.ffn_gate.weight q3_K     [  4096, 11008,     1,     1 ]                 
llama_model_loader: - tensor  230:             blk.25.ffn_up.weight q3_K     [  4096, 11008,     1,     1 ]                 
llama_model_loader: - tensor  231:           blk.25.ffn_norm.weight f32      [  4096,     1,     1,     1 ]                 
llama_model_loader: - tensor  232:             blk.25.attn_k.weight q3_K     [  4096,  4096,     1,     1 ]                 
llama_model_loader: - tensor  233:        blk.25.attn_output.weight q3_K     [  4096,  4096,     1,     1 ]                 
llama_model_loader: - tensor  234:             blk.25.attn_q.weight q3_K     [  4096,  4096,     1,     1 ]                 
llama_model_loader: - tensor  235:             blk.25.attn_v.weight q3_K     [  4096,  4096,     1,     1 ]                 
llama_model_loader: - tensor  236:          blk.26.attn_norm.weight f32      [  4096,     1,     1,     1 ]                 
llama_model_loader: - tensor  237:           blk.26.ffn_down.weight q3_K     [ 11008,  4096,     1,     1 ]                 
llama_model_loader: - tensor  238:           blk.26.ffn_gate.weight q3_K     [  4096, 11008,     1,     1 ]                 
llama_model_loader: - tensor  239:             blk.26.ffn_up.weight q3_K     [  4096, 11008,     1,     1 ]                 
llama_model_loader: - tensor  240:           blk.26.ffn_norm.weight f32      [  4096,     1,     1,     1 ]                 
llama_model_loader: - tensor  241:             blk.26.attn_k.weight q3_K     [  4096,  4096,     1,     1 ]                 
llama_model_loader: - tensor  242:        blk.26.attn_output.weight q3_K     [  4096,  4096,     1,     1 ]                 
llama_model_loader: - tensor  243:             blk.26.attn_q.weight q3_K     [  4096,  4096,     1,     1 ]                 
llama_model_loader: - tensor  244:             blk.26.attn_v.weight q3_K     [  4096,  4096,     1,     1 ]                 
llama_model_loader: - tensor  245:          blk.27.attn_norm.weight f32      [  4096,     1,     1,     1 ]                 
llama_model_loader: - tensor  246:           blk.27.ffn_down.weight q3_K     [ 11008,  4096,     1,     1 ]                 
llama_model_loader: - tensor  247:           blk.27.ffn_gate.weight q3_K     [  4096, 11008,     1,     1 ]                 
llama_model_loader: - tensor  248:             blk.27.ffn_up.weight q3_K     [  4096, 11008,     1,     1 ]                 
llama_model_loader: - tensor  249:           blk.27.ffn_norm.weight f32      [  4096,     1,     1,     1 ]                 
llama_model_loader: - tensor  250:             blk.27.attn_k.weight q3_K     [  4096,  4096,     1,     1 ]                 
llama_model_loader: - tensor  251:        blk.27.attn_output.weight q3_K     [  4096,  4096,     1,     1 ]                 
llama_model_loader: - tensor  252:             blk.27.attn_q.weight q3_K     [  4096,  4096,     1,     1 ]                 
llama_model_loader: - tensor  253:             blk.27.attn_v.weight q3_K     [  4096,  4096,     1,     1 ]                 
llama_model_loader: - tensor  254:          blk.28.attn_norm.weight f32      [  4096,     1,     1,     1 ]                 
llama_model_loader: - tensor  255:           blk.28.ffn_down.weight q3_K     [ 11008,  4096,     1,     1 ]                 
llama_model_loader: - tensor  256:           blk.28.ffn_gate.weight q3_K     [  4096, 11008,     1,     1 ]                 
llama_model_loader: - tensor  257:             blk.28.ffn_up.weight q3_K     [  4096, 11008,     1,     1 ]                 
llama_model_loader: - tensor  258:           blk.28.ffn_norm.weight f32      [  4096,     1,     1,     1 ]                 
llama_model_loader: - tensor  259:             blk.28.attn_k.weight q3_K     [  4096,  4096,     1,     1 ]                 
llama_model_loader: - tensor  260:        blk.28.attn_output.weight q3_K     [  4096,  4096,     1,     1 ]                 
llama_model_loader: - tensor  261:             blk.28.attn_q.weight q3_K     [  4096,  4096,     1,     1 ]                 
llama_model_loader: - tensor  262:             blk.28.attn_v.weight q3_K     [  4096,  4096,     1,     1 ]                 
llama_model_loader: - tensor  263:          blk.29.attn_norm.weight f32      [  4096,     1,     1,     1 ]                 
llama_model_loader: - tensor  264:           blk.29.ffn_down.weight q3_K     [ 11008,  4096,     1,     1 ]                 
llama_model_loader: - tensor  265:           blk.29.ffn_gate.weight q3_K     [  4096, 11008,     1,     1 ]                 
llama_model_loader: - tensor  266:             blk.29.ffn_up.weight q3_K     [  4096, 11008,     1,     1 ]                 
llama_model_loader: - tensor  267:           blk.29.ffn_norm.weight f32      [  4096,     1,     1,     1 ]                 
llama_model_loader: - tensor  268:             blk.29.attn_k.weight q3_K     [  4096,  4096,     1,     1 ]                 
llama_model_loader: - tensor  269:        blk.29.attn_output.weight q3_K     [  4096,  4096,     1,     1 ]                 
llama_model_loader: - tensor  270:             blk.29.attn_q.weight q3_K     [  4096,  4096,     1,     1 ]                 
llama_model_loader: - tensor  271:             blk.29.attn_v.weight q3_K     [  4096,  4096,     1,     1 ]                 
llama_model_loader: - tensor  272:          blk.30.attn_norm.weight f32      [  4096,     1,     1,     1 ]                 
llama_model_loader: - tensor  273:           blk.30.ffn_down.weight q3_K     [ 11008,  4096,     1,     1 ]                 
llama_model_loader: - tensor  274:           blk.30.ffn_gate.weight q3_K     [  4096, 11008,     1,     1 ]                 
llama_model_loader: - tensor  275:             blk.30.ffn_up.weight q3_K     [  4096, 11008,     1,     1 ]                 
llama_model_loader: - tensor  276:           blk.30.ffn_norm.weight f32      [  4096,     1,     1,     1 ]                 
llama_model_loader: - tensor  277:             blk.30.attn_k.weight q3_K     [  4096,  4096,     1,     1 ]                 
llama_model_loader: - tensor  278:        blk.30.attn_output.weight q3_K     [  4096,  4096,     1,     1 ]                 
llama_model_loader: - tensor  279:             blk.30.attn_q.weight q3_K     [  4096,  4096,     1,     1 ]                 
llama_model_loader: - tensor  280:             blk.30.attn_v.weight q3_K     [  4096,  4096,     1,     1 ]                 
llama_model_loader: - tensor  281:          blk.31.attn_norm.weight f32      [  4096,     1,     1,     1 ]                 
llama_model_loader: - tensor  282:           blk.31.ffn_down.weight q3_K     [ 11008,  4096,     1,     1 ]                 
llama_model_loader: - tensor  283:           blk.31.ffn_gate.weight q3_K     [  4096, 11008,     1,     1 ]                 
llama_model_loader: - tensor  284:             blk.31.ffn_up.weight q3_K     [  4096, 11008,     1,     1 ]                 
llama_model_loader: - tensor  285:           blk.31.ffn_norm.weight f32      [  4096,     1,     1,     1 ]                 
llama_model_loader: - tensor  286:             blk.31.attn_k.weight q3_K     [  4096,  4096,     1,     1 ]                 
llama_model_loader: - tensor  287:        blk.31.attn_output.weight q3_K     [  4096,  4096,     1,     1 ]                 
llama_model_loader: - tensor  288:             blk.31.attn_q.weight q3_K     [  4096,  4096,     1,     1 ]                 
llama_model_loader: - tensor  289:             blk.31.attn_v.weight q3_K     [  4096,  4096,     1,     1 ]                 
llama_model_loader: - tensor  290:               output_norm.weight f32      [  4096,     1,     1,     1 ]                 
llama_model_loader: - kv   0:                       general.architecture str                                                
llama_model_loader: - kv   1:                               general.name str                                                
llama_model_loader: - kv   2:                       llama.context_length u32                                                
llama_model_loader: - kv   3:                     llama.embedding_length u32                                                
llama_model_loader: - kv   4:                          llama.block_count u32                                                
llama_model_loader: - kv   5:                  llama.feed_forward_length u32                                                
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32                                                
llama_model_loader: - kv   7:                 llama.attention.head_count u32                                                
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32                                                
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32                                                
llama_model_loader: - kv  10:                          general.file_type u32                                                
llama_model_loader: - kv  11:                       tokenizer.ggml.model str                                                
llama_model_loader: - kv  12:                      tokenizer.ggml.tokens arr                                                
llama_model_loader: - kv  13:                      tokenizer.ggml.scores arr                                                
llama_model_loader: - kv  14:                  tokenizer.ggml.token_type arr                                                
llama_model_loader: - kv  15:                tokenizer.ggml.bos_token_id u32                                                
llama_model_loader: - kv  16:                tokenizer.ggml.eos_token_id u32                                                
llama_model_loader: - kv  17:            tokenizer.ggml.unknown_token_id u32                                                
llama_model_loader: - kv  18:               general.quantization_version u32                                                
llama_model_loader: - type  f32:   65 tensors 
llama_model_loader: - type q3_K:  225 tensors                 
llama_model_loader: - type q6_K:    1 tensors 
llm_load_vocab: special tokens definition check successful ( 259/32000 ). 
llm_load_print_meta: format           = GGUF V2 (latest)      
llm_load_print_meta: arch             = llama 
llm_load_print_meta: vocab type       = SPM                   
llm_load_print_meta: n_vocab          = 32000 
llm_load_print_meta: n_merges         = 0                     
llm_load_print_meta: n_ctx_train      = 4096 
llm_load_print_meta: n_embd           = 4096                  
llm_load_print_meta: n_head           = 32 
llm_load_print_meta: n_head_kv        = 32                    
llm_load_print_meta: n_layer          = 32 
llm_load_print_meta: n_rot            = 128                   
llm_load_print_meta: n_gqa            = 1 
llm_load_print_meta: f_norm_eps       = 0.0e+00               
llm_load_print_meta: f_norm_rms_eps   = 1.0e-06 
llm_load_print_meta: f_clamp_kqv      = 0.0e+00               
llm_load_print_meta: f_max_alibi_bias = 0.0e+00 
llm_load_print_meta: n_ff             = 11008                 
llm_load_print_meta: freq_base_train  = 10000.0 
llm_load_print_meta: freq_scale_train = 1                     
llm_load_print_meta: model type       = 7B 
llm_load_print_meta: model ftype      = mostly Q3_K - Small   
llm_load_print_meta: model params     = 6.74 B 
llm_load_print_meta: model size       = 2.75 GiB (3.50 BPW)   
llm_load_print_meta: general.name   = LLaMA v2 
llm_load_print_meta: BOS token = 1 '<s>'                      
llm_load_print_meta: EOS token = 2 '</s>' 
llm_load_print_meta: UNK token = 0 '<unk>'                    
llm_load_print_meta: LF token  = 13 '<0x0A>' 
llm_load_tensors: ggml ctx size =    0.10 MB                  
llm_load_tensors: mem required  = 2811.11 MB 
................................................................................................. 
llama_new_context_with_model: n_ctx      = 512                
llama_new_context_with_model: freq_base  = 10000.0 
llama_new_context_with_model: freq_scale = 1                  
llama_new_context_with_model: kv self size  =  256.00 MB 
llama_new_context_with_model: compute buffer total size = 76.63 MB 
[New Thread 0x78be (LWP 30910)]                               
[New Thread 0x78bf (LWP 30911)] 
[New Thread 0x78c1 (LWP 30913)]                               
[Thread 0x78c1 (LWP 30913) exited] 
[Thread 0x78bf (LWP 30911) exited]                            
[Thread 0x78be (LWP 30910) exited] 

llama server listening at http://127.0.0.1:8080 

{"timestamp":1697750271,"level":"INFO","function":"main","line":1758,"message":"HTTP server listening","hostname":"127.0.0.1","port":8080} 
[New Thread 0x78c2 (LWP 30914)]                               
[New Thread 0x78c3 (LWP 30915)] 
[New Thread 0x78c4 (LWP 30916)]                               
[New Thread 0x78c5 (LWP 30917)] 
[New Thread 0x78c6 (LWP 30918)]                               
[New Thread 0x78c7 (LWP 30919)] 
[New Thread 0x78c8 (LWP 30920)]                               
[New Thread 0x78ca (LWP 30922)] 

 Thread 5 "server" received signal SIGSEGV, Segmentation fault. 
[Switching to Thread 0x78c2 (LWP 30914)]                      
0x0000005555685750 in llama_sampling_free(llama_sampling_context*) ()                                                       
(gdb) bt 
#0  0x0000005555685750 in llama_sampling_free(llama_sampling_context*) () 
#1  0x00000055555e47d0 in llama_server_context::rewind() ()   
#2  0x00000055555a8364 in std::__ndk1::__function::__func<main::$_4, 
std::__ndk1::allocator<main::$_4>, void (httplib::Request const&, httplib::Response&)>::operator()(httplib::Request const&, httplib::Response&) ()                                 
#3  0x00000055555dbc98 in httplib::Server::dispatch_request(httplib::Request&, httplib::Response&, 
std::__ndk1::vector<std::__ndk1::pair<std::__ndk1::basic_regex<char, 
std::__ndk1::regex_traits<char> >, 
std::__ndk1::function<void (httplib::Request const&, httplib::Response&)> >, 
std::__ndk1::allocator<std::__ndk1::pair<std::__ndk1::basic_regex<char, 
std::__ndk1::regex_traits<char> >, std::__ndk1::function<void (httplib::Request const&, httplib::Response&)> > > > const&) ()                   
#4  0x00000055555c977c in httplib::Server::routing(httplib::Request&, httplib::Response&, httplib::Stream&) ()              
#5  0x00000055555c7764 in httplib::Server::process_request(httplib::Stream&, bool, bool&, std::__ndk1::function<void (httplib::Request&)> const&) () 
#6  0x00000055555c69dc in httplib::detail::process_server_socket<httplib::Server::process_and_close_socket(int)::{lambda(httplib::Stream&, bool, 
bool&)#1}>(std::__ndk1::atomic<int> const&, int, unsigned long, long, long, long, long, long, 
httplib::Server::process_and_close_socket(int)::{lambda(httplib::Stream&, bool, bool&)#1})::{lambda(bool, bool&)#1}::operator()(bool, bool&) const ()                                              
#7  0x00000055555c6890 in bool httplib::detail::process_server_socket_core<httplib::detail::process_server_socket<httplib::Server::process_and_close_socket(int)::{lambda(httplib::Stream&, bool, 
bool&)#1}>(std::__ndk1::atomic<int> const&, int, unsigned long, long, long, long, long, long, 
httplib::Server::process_and_close_socket(int)::{lambda(httplib::Stream&, bool, bool&)#1})::{lambda(bool, bool&)#1}>(std::__ndk1::atomic<int> const&, int, unsigned long, long, 
httplib::detail::process_server_socket<httplib::Server::process_and_close_socket(int)::{lambda(httplib::Stream&, bool, bool&)#1}>(std::__ndk1::atomic<int> const&, int, unsigned long, long, long, long, long, long, 
httplib::Server::process_and_close_socket(int)::{lambda(httplib::Stream&, bool, bool&)#1})::{lambda(bool, bool&)#1}) () 
#8  0x00000055555b40f4 in httplib::Server::process_and_close_socket(int) () 
#9  0x00000055555bb4b8 in httplib::ThreadPool::worker::operator()() () 
#10 0x00000055555bb224 in void* std::__ndk1::__thread_proxy<std::__ndk1::tuple<std::__ndk1::unique_ptr<std::__ndk1::__thread_struct, 
std::__ndk1::default_delete<std::__ndk1::__thread_struct> >, httplib::ThreadPool::worker> >(void*) () 
#11 0x0000007ff4d5fcd0 in __pthread_start(void*) ()              from /apex/com.android.runtime/lib64/bionic/libc.so 
#12 0x0000007ff4cf3b04 in __start_thread ()                      from /apex/com.android.runtime/lib64/bionic/libc.so 
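What the trace shows: the model loads cleanly and the fault fires only once a request arrives. The frames run from httplib's request dispatch (#2 through #9) into llama_server_context::rewind() (#1), which resets per-request state, and the crash lands inside llama_sampling_free() itself (#0) — consistent with a null or stale llama_sampling_context being freed on the first rewind. The request that triggered it is not captured above; any completion call reaches this path, for example (endpoint and payload are illustrative, not copied from the original session):

# hypothetical trigger: the first request the server dispatches ends up in rewind()
curl http://127.0.0.1:8080/completion -H 'Content-Type: application/json' -d '{"prompt": "Hello", "n_predict": 16}'

The frames carry no file/line information because the default make build is optimized without debug info; rebuilding with make -C llama.cpp clean && make -C llama.cpp -j4 LLAMA_DEBUG=1 (the Makefile's debug switch, assuming it is present in the cloned revision) would make #0 and #1 point at source lines.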

Environment and Context

Please provide detailed information about your computer setup. This is important in case the issue is not reproducible except under certain specific conditions.

$ lscpu

Architecture:                    aarch64
CPU op-mode(s):                  64-bit
Byte Order:                      Little Endian
CPU(s):                          9
On-line CPU(s) list:             0-8
Vendor ID:                       ARM
Model name:                      Cortex-A510
Model:                           1
Thread(s) per core:              1
Core(s) per socket:              4
Socket(s):                       1
Stepping:                        r1p1
CPU(s) scaling MHz:              48%
CPU max MHz:                     1704.0000
CPU min MHz:                     324.0000
BogoMIPS:                        49.15
Flags:                           fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 sm3 sm4 asimddp sha512 sve asimdfhm dit uscat ilrcpc flagm ssbs sb paca pacg dcpodp sve2 sveaes svepmull svebitperm svesha3 svesm4 flagm2 frint svei8mm svebf16 i8mm bti
Model name:                      Cortex-A715
Model:                           0
Thread(s) per core:              1
Core(s) per socket:              4
Socket(s):                       1
Stepping:                        r1p0
CPU(s) scaling MHz:              66%
CPU max MHz:                     2367.0000
CPU min MHz:                     402.0000
BogoMIPS:                        49.15
Flags:                           fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 sm3 sm4 asimddp sha512 sve asimdfhm dit uscat ilrcpc flagm ssbs sb paca pacg dcpodp sve2 sveaes svepmull svebitperm svesha3 svesm4 flagm2 frint svei8mm svebf16 i8mm bti
Model name:                      -
Model:                           0
Thread(s) per core:              1
Core(s) per socket:              1
Socket(s):                       1
CPU(s) scaling MHz:              40%
CPU max MHz:                     2914.0000
CPU min MHz:                     500.0000
Vulnerability Itlb multihit:     Not affected
Vulnerability L1tf:              Not affected
Vulnerability Mds:               Not affected
Vulnerability Meltdown:          Not affected
Vulnerability Mmio stale data:   Not affected
Vulnerability Retbleed:          Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1:        Mitigation; __user pointer sanitization
Vulnerability Spectre v2:        Not affected
Vulnerability Srbds:             Not affected
Vulnerability Tsx async abort:   Not affected
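For reference, lscpu reports a heterogeneous 4+4+1 layout: four Cortex-A510 efficiency cores, four Cortex-A715 performance cores, and one unnamed prime core (the cortex-x3 that llc reports below). The server defaulted to n_threads = 4 of the 9 total; the thread count can be set explicitly if needed (a sketch using the server's -t flag, path and model as in the run above):

llama.cpp/server -m /sdcard/Download/llama-2-7b-chat.Q3_K_S.gguf -t 4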

$ uname -a

Linux localhost 5.15.110-android14-11-ga6d7915820a0-ab10726252 #1 SMP PREEMPT Mon Aug 28 18:42:09 UTC 2023 aarch64 Android
$ python3 --version
Python 3.11.6
$ make --version
GNU Make 4.4.1
$ g++ --version
clang version 17.0.2
Target: aarch64-unknown-linux-android24
Thread model: posix
InstalledDir: /data/data/com.termux/files/usr/bin
$ llc --version
LLVM (http://llvm.org/):
  LLVM version 17.0.2
  Optimized build.
  Default target: aarch64-unknown-linux-android24
  Host CPU: cortex-x3

  Registered Targets:
    aarch64     - AArch64 (little endian)
    aarch64_32  - AArch64 (little endian ILP32)
    aarch64_be  - AArch64 (big endian)
    amdgcn      - AMD GCN GPUs
    arc         - ARC
    arm         - ARM
    arm64       - ARM64 (little endian)
    arm64_32    - ARM64 (little endian ILP32)
    armeb       - ARM (big endian)
    avr         - Atmel AVR Microcontroller
    bpf         - BPF (host endian)
    bpfeb       - BPF (big endian)
    bpfel       - BPF (little endian)
    csky        - C-SKY
    hexagon     - Hexagon
    lanai       - Lanai
    loongarch32 - 32-bit LoongArch
    loongarch64 - 64-bit LoongArch
    m68k        - Motorola 68000 family
    mips        - MIPS (32-bit big endian)
    mips64      - MIPS (64-bit big endian)
    mips64el    - MIPS (64-bit little endian)
    mipsel      - MIPS (32-bit little endian)
    msp430      - MSP430 [experimental]
    nvptx       - NVIDIA PTX 32-bit
    nvptx64     - NVIDIA PTX 64-bit
    ppc32       - PowerPC 32
    ppc32le     - PowerPC 32 LE
    ppc64       - PowerPC 64
    ppc64le     - PowerPC 64 LE
    r600        - AMD GPUs HD2XXX-HD6XXX
    riscv32     - 32-bit RISC-V
    riscv64     - 64-bit RISC-V
    sparc       - Sparc
    sparcel     - Sparc LE
    sparcv9     - Sparc V9
    systemz     - SystemZ
    thumb       - Thumb
    thumbeb     - Thumb (big endian)
    ve          - VE
    wasm32      - WebAssembly 32-bit
    wasm64      - WebAssembly 64-bit
    x86         - 32-bit X86: Pentium-Pro and above
    x86-64      - 64-bit X86: EM64T and AMD64
    xcore       - XCore
shibe2 commented 1 year ago

Try this patch.

shibe2 commented 12 months ago

A fix has been committed. Use current master code and see if your issue is fixed.
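To pick up the fix in the same Termux setup (assuming the original shallow clone; pulling the default branch works with a --depth 1 clone):

cd llama.cpp
git pull                  # fetch current master, which contains the fix
make clean && make -j4    # rebuild the server binary
gdb ./server              # repeat the original run to confirm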

theoctopusride commented 12 months ago

> A fix has been committed. Use current master code and see if your issue is fixed.

Yes, it works! Thank you for your help. I am closing this issue.