Closed — metaskills closed this issue 3 months ago
I'm a Windows user. Based on the advice in github.com/Mozilla-Ocho/llamafile/discussions/418 and the explanations in the README, I successfully packaged gemm2-2b-it-q4_k_m.gguf!
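For context, the packaging flow described in the llamafile README can be sketched roughly as below. The `.args` contents mirror the README's example; the binary names and the commented commands are illustrative, not the exact commands the poster ran:

```shell
# Create the .args file that supplies default CLI flags to the llamafile.
# The literal "..." line tells llamafile to append any extra arguments
# given at runtime after these defaults.
cat <<'EOF' > .args
-m
gemm2-2b-it-q4_k_m.gguf
...
EOF

# Then, per the README, copy the release binary and embed the weights
# with zipalign (paths here are hypothetical):
#   cp llamafile gemm2-2b-it-Q4_k_m.llamafile
#   ./zipalign -j0 gemm2-2b-it-Q4_k_m.llamafile gemm2-2b-it-q4_k_m.gguf .args
# On Windows, rename the result with an .exe suffix before running it.
```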
But the program crashed after running for a while with the following error:
```
GGML_ASSERT: llama.cpp/ggml.c:13480: n_dims <= ne0

error: Uncaught SIGABRT (SI_TKILL) at 0 on DESKTOP-40O76F4 pid 448 tid 8996
  gemm2-2-2b-it-Q4_k_m.llamafile.exe
  No error information
  Windows Cosmopolitan 3.6.2 MODE=x86_64 DESKTOP-40O76F4 10.0

RAX 00000000009de700 RBX 0000000000000006 RDI 000025c199be3930
RCX 000021c73441d090 RDX 0000000000000000 RSI 00000000fffffffa
RBP 000025c199be3c80 RSP 000025c199be3810 RIP 0000000000414c22
 R8 0000000000000000  R9 000025c199be3770 R10 0000000000000000
R11 0000000000000246 R12 0000000000000120 R13 0000000000000003
R14 0000000000000004 R15 00003de44a6b0350 TLS 000021c73441d040

XMM0  00000000000000000000000000000000 XMM8  00000000000000000000000000000000
XMM1  00000000000000000000000000000000 XMM9  00000000000000000000000000000000
XMM2  00000000000000000000000000000000 XMM10 00000000000000000000000000000000
XMM3  00000000000000000000000000000000 XMM11 00000000000000000000000000000000
XMM4  00000000000000000000000000000000 XMM12 00000000000000000000000000000000
XMM5  00000000000000000000000000000000 XMM13 00000000000000000000000000000000
XMM6  00000000000000000000000000000000 XMM14 00000000000000000000000000000000
XMM7  00000000000000000000000000000000 XMM15 00000000000000000000000000000000

cosmoaddr2line /C/Users/pondahai/Downloads/lm-models/lmstudio-community/gemma-2-2b-it-GGUF/gemm2-2-2b-it-Q4_k_m.llamafile.exe 414c22 8e37de 40db6a 545aa6 547116 54a62c 54ad23 8ce257 8eee79

25c199be1310 414c1d __sig_raise+45
25c199be3c80 8e37de raise+78
25c199be3ca0 40db6a abort+40
25c199be3cc0 545aa6 ggml_compute_forward_rope_f16+845
25c199be3e90 547116 ggml_compute_forward_rope+29
25c199be3ea0 54a62c ggml_compute_forward+696
25c199be3ed0 54ad23 ggml_graph_compute_thread+960
25c199be3f60 8ce257 PosixThread+135
25c199be3fb0 8eee79 __stack_call+16

000000260000-000000270000 rw-Pa    64kb hand=328
000000400000-0000009dd1f8 r-x--  6004kb
0000009de000-000000a74000 rw---   600kb
0006fe000000-0006fe010000 rw-pa    64kb hand=332
007329ae0000-00732cef0000 rw-pa    52mb hand=1948
016b4ac20000-016b4af30000 rw-pa  3136kb hand=2152
04f558d40000-04f558d50000 rw-pa    64kb hand=1084
04f558d50000-04f558d60000 rw-pa    64kb hand=1288
04f558d60000-04f558d70000 rw-pa    64kb hand=1292
04f558d70000-04f558d80000 rw-pa    64kb hand=1296
04f558d80000-04f558d90000 rw-pa    64kb hand=1300
04f558d90000-04f558da0000 rw-pa    64kb hand=1304
04f558da0000-04f558db0000 rw-pa    64kb hand=1308
04f558db0000-04f558dc0000 rw-pa    64kb hand=1312
04f558dc0000-04f558dd0000 rw-pa    64kb hand=1316
```
I have packaged the instruct model here: https://huggingface.co/kevinbayes/gemma2-2b_it_v2.llamafile
Awesome @kevinbayes. I also uploaded it here https://huggingface.co/jartine/gemma-2-2b-it-llamafile earlier this morning.
Prerequisites
Feature Description
Gemma 2 2B Released Today
Motivation
Gemma 2 2B was just released today; it seems a perfect fit for a llamafile and would sit nicely alongside the other Gemma 2 llamafiles already released.
Possible Implementation
No response