Mintplex-Labs / anything-llm

The all-in-one Desktop & Docker AI application with built-in RAG, AI agents, and more.
https://anythingllm.com
MIT License

default.metallib not found #478

Closed cracksauce closed 9 months ago

cracksauce commented 9 months ago

After several re-installs and trying out different models, no matter what I do my system can't seem to use local LLMs. It crashes every time and gives the same message each time. Super aggravating, since I'd love to use this tool locally. Help with debugging/troubleshooting would be appreciated; I'm having trouble tinkering with the llama-cpp node module.

Physical (or virtual) hardware you are using: Mac Mini 2018 with 6-core Intel i7, 16 GB RAM

Operating System: macOS Ventura 13.6.1

SDK: Node.js version 10.2.5

Running via the repo, as recommended on Reddit here
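
For reference, the environment details above were gathered with standard macOS commands; a quick sketch, nothing specific to this project:

```bash
node --version                       # Node.js runtime version
npm --version                        # npm version (easy to confuse with the Node version)
sw_vers -productVersion              # macOS version, e.g. 13.6.1
sysctl -n machdep.cpu.brand_string   # CPU model
sysctl -n hw.memsize                 # installed RAM in bytes
```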

```
yarn dev:server

yarn run v1.22.21 $ cd server && yarn dev $ NODE_ENV=development nodemon --ignore documents --ignore vector-cache --ignore storage --ignore swagger --trace-warnings index.js [nodemon] 2.0.22 [nodemon] to restart at any time, enter rs [nodemon] watching path(s): . [nodemon] watching extensions: js,mjs,json [nodemon] starting node --trace-warnings index.js [TELEMETRY STUBBED] Anonymous Telemetry stubbed in development. Primary server listening on port 3001 $ node ./swagger/init.js Swagger-autogen: Success ✔ prisma:info Starting a sqlite pool with 13 connections. llama_model_loader: loaded meta data with 20 key-value pairs and 201 tensors from /Users/path-to-model/tinyllama-2-1b-miniguanaco.Q5_K_M.gguf (version GGUF V2) llama_model_loader: - tensor 0: token_embd.weight q5_K [ 2048, 32003, 1, 1 ] llama_model_loader: - tensor 1: blk.0.attn_q.weight q5_K [ 2048, 2048, 1, 1 ] llama_model_loader: - tensor 2: blk.0.attn_k.weight q5_K [ 2048, 256, 1, 1 ] llama_model_loader: - tensor 3: blk.0.attn_v.weight q6_K [ 2048, 256, 1, 1 ] llama_model_loader: - tensor 4: blk.0.attn_output.weight q5_K [ 2048, 2048, 1, 1 ] llama_model_loader: - tensor 5: blk.0.ffn_gate.weight q5_K [ 2048, 5632, 1, 1 ] llama_model_loader: - tensor 6: blk.0.ffn_up.weight q5_K [ 2048, 5632, 1, 1 ] llama_model_loader: - tensor 7: blk.0.ffn_down.weight q6_K [ 5632, 2048, 1, 1 ] llama_model_loader: - tensor 8: blk.0.attn_norm.weight f32 [ 2048, 1, 1, 1 ] llama_model_loader: - tensor 9: blk.0.ffn_norm.weight f32 [ 2048, 1, 1, 1 ] llama_model_loader: - tensor 10: blk.1.attn_q.weight q5_K [ 2048, 2048, 1, 1 ] llama_model_loader: - tensor 11: blk.1.attn_k.weight q5_K [ 2048, 256, 1, 1 ] llama_model_loader: - tensor 12: blk.1.attn_v.weight q6_K [ 2048, 256, 1, 1 ] llama_model_loader: - tensor 13: blk.1.attn_output.weight q5_K [ 2048, 2048, 1, 1 ] llama_model_loader: - tensor 14: blk.1.ffn_gate.weight q5_K [ 2048, 5632, 1, 1 ] llama_model_loader: - tensor 15: blk.1.ffn_up.weight q5_K [ 2048, 5632, 1, 1 ] llama_model_loader: - tensor 16: blk.1.ffn_down.weight q6_K [ 5632, 2048, 1, 1 ] llama_model_loader: - tensor 17: blk.1.attn_norm.weight f32 [ 2048, 1, 1, 1 ] llama_model_loader: - tensor 18: blk.1.ffn_norm.weight f32 [ 2048, 1, 1, 1 ] llama_model_loader: - tensor 19: blk.2.attn_q.weight q5_K [ 2048, 2048, 1, 1 ] llama_model_loader: - tensor 20: blk.2.attn_k.weight q5_K [ 2048, 256, 1, 1 ] llama_model_loader: - tensor 21: blk.2.attn_v.weight q5_K [ 2048, 256, 1, 1 ] llama_model_loader: - tensor 22: blk.2.attn_output.weight q5_K [ 2048, 2048, 1, 1 ] llama_model_loader: - tensor 23: blk.2.ffn_gate.weight q5_K [ 2048, 5632, 1, 1 ] llama_model_loader: - tensor 24: blk.2.ffn_up.weight q5_K [ 2048, 5632, 1, 1 ] llama_model_loader: - tensor 25: blk.2.ffn_down.weight q5_K [ 5632, 2048, 1, 1 ] llama_model_loader: - tensor 26: blk.2.attn_norm.weight f32 [ 2048, 1, 1, 1 ] llama_model_loader: - tensor 27: blk.2.ffn_norm.weight f32 [ 2048, 1, 1, 1 ] llama_model_loader: - tensor 28: blk.3.attn_q.weight q5_K [ 2048, 2048, 1, 1 ] llama_model_loader: - tensor 29: blk.3.attn_k.weight q5_K [ 2048, 256, 1, 1 ] llama_model_loader: - tensor 30: blk.3.attn_v.weight q5_K [ 2048, 256, 1, 1 ] llama_model_loader: - tensor 31: blk.3.attn_output.weight q5_K [ 2048, 2048, 1, 1 ] llama_model_loader: - tensor 32: blk.3.ffn_gate.weight q5_K [ 2048, 5632, 1, 1 ] llama_model_loader: - tensor 33: blk.3.ffn_up.weight q5_K [ 2048, 5632, 1, 1 ] llama_model_loader: - tensor 34: blk.3.ffn_down.weight q5_K [ 5632, 2048, 1, 1 ] llama_model_loader: - tensor 35: 
blk.3.attn_norm.weight f32 [ 2048, 1, 1, 1 ] llama_model_loader: - tensor 36: blk.3.ffn_norm.weight f32 [ 2048, 1, 1, 1 ] llama_model_loader: - tensor 37: blk.4.attn_q.weight q5_K [ 2048, 2048, 1, 1 ] llama_model_loader: - tensor 38: blk.4.attn_k.weight q5_K [ 2048, 256, 1, 1 ] llama_model_loader: - tensor 39: blk.4.attn_v.weight q6_K [ 2048, 256, 1, 1 ] llama_model_loader: - tensor 40: blk.4.attn_output.weight q5_K [ 2048, 2048, 1, 1 ] llama_model_loader: - tensor 41: blk.4.ffn_gate.weight q5_K [ 2048, 5632, 1, 1 ] llama_model_loader: - tensor 42: blk.4.ffn_up.weight q5_K [ 2048, 5632, 1, 1 ] llama_model_loader: - tensor 43: blk.4.ffn_down.weight q6_K [ 5632, 2048, 1, 1 ] llama_model_loader: - tensor 44: blk.4.attn_norm.weight f32 [ 2048, 1, 1, 1 ] llama_model_loader: - tensor 45: blk.4.ffn_norm.weight f32 [ 2048, 1, 1, 1 ] llama_model_loader: - tensor 46: blk.5.attn_q.weight q5_K [ 2048, 2048, 1, 1 ] llama_model_loader: - tensor 47: blk.5.attn_k.weight q5_K [ 2048, 256, 1, 1 ] llama_model_loader: - tensor 48: blk.5.attn_v.weight q5_K [ 2048, 256, 1, 1 ] llama_model_loader: - tensor 49: blk.5.attn_output.weight q5_K [ 2048, 2048, 1, 1 ] llama_model_loader: - tensor 50: blk.5.ffn_gate.weight q5_K [ 2048, 5632, 1, 1 ] llama_model_loader: - tensor 51: blk.5.ffn_up.weight q5_K [ 2048, 5632, 1, 1 ] llama_model_loader: - tensor 52: blk.5.ffn_down.weight q5_K [ 5632, 2048, 1, 1 ] llama_model_loader: - tensor 53: blk.5.attn_norm.weight f32 [ 2048, 1, 1, 1 ] llama_model_loader: - tensor 54: blk.5.ffn_norm.weight f32 [ 2048, 1, 1, 1 ] llama_model_loader: - tensor 55: blk.6.attn_q.weight q5_K [ 2048, 2048, 1, 1 ] llama_model_loader: - tensor 56: blk.6.attn_k.weight q5_K [ 2048, 256, 1, 1 ] llama_model_loader: - tensor 57: blk.6.attn_v.weight q5_K [ 2048, 256, 1, 1 ] llama_model_loader: - tensor 58: blk.6.attn_output.weight q5_K [ 2048, 2048, 1, 1 ] llama_model_loader: - tensor 59: blk.6.ffn_gate.weight q5_K [ 2048, 5632, 1, 1 ] llama_model_loader: - tensor 60: blk.6.ffn_up.weight q5_K [ 2048, 5632, 1, 1 ] llama_model_loader: - tensor 61: blk.6.ffn_down.weight q5_K [ 5632, 2048, 1, 1 ] llama_model_loader: - tensor 62: blk.6.attn_norm.weight f32 [ 2048, 1, 1, 1 ] llama_model_loader: - tensor 63: blk.6.ffn_norm.weight f32 [ 2048, 1, 1, 1 ] llama_model_loader: - tensor 64: blk.7.attn_q.weight q5_K [ 2048, 2048, 1, 1 ] llama_model_loader: - tensor 65: blk.7.attn_k.weight q5_K [ 2048, 256, 1, 1 ] llama_model_loader: - tensor 66: blk.7.attn_v.weight q6_K [ 2048, 256, 1, 1 ] llama_model_loader: - tensor 67: blk.7.attn_output.weight q5_K [ 2048, 2048, 1, 1 ] llama_model_loader: - tensor 68: blk.7.ffn_gate.weight q5_K [ 2048, 5632, 1, 1 ] llama_model_loader: - tensor 69: blk.7.ffn_up.weight q5_K [ 2048, 5632, 1, 1 ] llama_model_loader: - tensor 70: blk.7.ffn_down.weight q6_K [ 5632, 2048, 1, 1 ] llama_model_loader: - tensor 71: blk.7.attn_norm.weight f32 [ 2048, 1, 1, 1 ] llama_model_loader: - tensor 72: blk.7.ffn_norm.weight f32 [ 2048, 1, 1, 1 ] llama_model_loader: - tensor 73: blk.8.attn_q.weight q5_K [ 2048, 2048, 1, 1 ] llama_model_loader: - tensor 74: blk.8.attn_k.weight q5_K [ 2048, 256, 1, 1 ] llama_model_loader: - tensor 75: blk.8.attn_v.weight q5_K [ 2048, 256, 1, 1 ] llama_model_loader: - tensor 76: blk.8.attn_output.weight q5_K [ 2048, 2048, 1, 1 ] llama_model_loader: - tensor 77: blk.8.ffn_gate.weight q5_K [ 2048, 5632, 1, 1 ] llama_model_loader: - tensor 78: blk.8.ffn_up.weight q5_K [ 2048, 5632, 1, 1 ] llama_model_loader: - tensor 79: blk.8.ffn_down.weight q5_K [ 5632, 2048, 1, 1 ] 
llama_model_loader: - tensor 80: blk.8.attn_norm.weight f32 [ 2048, 1, 1, 1 ] llama_model_loader: - tensor 81: blk.8.ffn_norm.weight f32 [ 2048, 1, 1, 1 ] llama_model_loader: - tensor 82: blk.9.attn_q.weight q5_K [ 2048, 2048, 1, 1 ] llama_model_loader: - tensor 83: blk.9.attn_k.weight q5_K [ 2048, 256, 1, 1 ] llama_model_loader: - tensor 84: blk.9.attn_v.weight q5_K [ 2048, 256, 1, 1 ] llama_model_loader: - tensor 85: blk.9.attn_output.weight q5_K [ 2048, 2048, 1, 1 ] llama_model_loader: - tensor 86: blk.9.ffn_gate.weight q5_K [ 2048, 5632, 1, 1 ] llama_model_loader: - tensor 87: blk.9.ffn_up.weight q5_K [ 2048, 5632, 1, 1 ] llama_model_loader: - tensor 88: blk.9.ffn_down.weight q5_K [ 5632, 2048, 1, 1 ] llama_model_loader: - tensor 89: blk.9.attn_norm.weight f32 [ 2048, 1, 1, 1 ] llama_model_loader: - tensor 90: blk.9.ffn_norm.weight f32 [ 2048, 1, 1, 1 ] llama_model_loader: - tensor 91: blk.10.attn_q.weight q5_K [ 2048, 2048, 1, 1 ] llama_model_loader: - tensor 92: blk.10.attn_k.weight q5_K [ 2048, 256, 1, 1 ] llama_model_loader: - tensor 93: blk.10.attn_v.weight q6_K [ 2048, 256, 1, 1 ] llama_model_loader: - tensor 94: blk.10.attn_output.weight q5_K [ 2048, 2048, 1, 1 ] llama_model_loader: - tensor 95: blk.10.ffn_gate.weight q5_K [ 2048, 5632, 1, 1 ] llama_model_loader: - tensor 96: blk.10.ffn_up.weight q5_K [ 2048, 5632, 1, 1 ] llama_model_loader: - tensor 97: blk.10.ffn_down.weight q6_K [ 5632, 2048, 1, 1 ] llama_model_loader: - tensor 98: blk.10.attn_norm.weight f32 [ 2048, 1, 1, 1 ] llama_model_loader: - tensor 99: blk.10.ffn_norm.weight f32 [ 2048, 1, 1, 1 ] llama_model_loader: - tensor 100: blk.11.attn_q.weight q5_K [ 2048, 2048, 1, 1 ] llama_model_loader: - tensor 101: blk.11.attn_k.weight q5_K [ 2048, 256, 1, 1 ] llama_model_loader: - tensor 102: blk.11.attn_v.weight q5_K [ 2048, 256, 1, 1 ] llama_model_loader: - tensor 103: blk.11.attn_output.weight q5_K [ 2048, 2048, 1, 1 ] llama_model_loader: - tensor 104: blk.11.ffn_gate.weight q5_K [ 2048, 5632, 1, 1 ] llama_model_loader: - tensor 105: blk.11.ffn_up.weight q5_K [ 2048, 5632, 1, 1 ] llama_model_loader: - tensor 106: blk.11.ffn_down.weight q5_K [ 5632, 2048, 1, 1 ] llama_model_loader: - tensor 107: blk.11.attn_norm.weight f32 [ 2048, 1, 1, 1 ] llama_model_loader: - tensor 108: blk.11.ffn_norm.weight f32 [ 2048, 1, 1, 1 ] llama_model_loader: - tensor 109: blk.12.attn_q.weight q5_K [ 2048, 2048, 1, 1 ] llama_model_loader: - tensor 110: blk.12.attn_k.weight q5_K [ 2048, 256, 1, 1 ] llama_model_loader: - tensor 111: blk.12.attn_v.weight q5_K [ 2048, 256, 1, 1 ] llama_model_loader: - tensor 112: blk.12.attn_output.weight q5_K [ 2048, 2048, 1, 1 ] llama_model_loader: - tensor 113: blk.12.ffn_gate.weight q5_K [ 2048, 5632, 1, 1 ] llama_model_loader: - tensor 114: blk.12.ffn_up.weight q5_K [ 2048, 5632, 1, 1 ] llama_model_loader: - tensor 115: blk.12.ffn_down.weight q5_K [ 5632, 2048, 1, 1 ] llama_model_loader: - tensor 116: blk.12.attn_norm.weight f32 [ 2048, 1, 1, 1 ] llama_model_loader: - tensor 117: blk.12.ffn_norm.weight f32 [ 2048, 1, 1, 1 ] llama_model_loader: - tensor 118: blk.13.attn_q.weight q5_K [ 2048, 2048, 1, 1 ] llama_model_loader: - tensor 119: blk.13.attn_k.weight q5_K [ 2048, 256, 1, 1 ] llama_model_loader: - tensor 120: blk.13.attn_v.weight q6_K [ 2048, 256, 1, 1 ] llama_model_loader: - tensor 121: blk.13.attn_output.weight q5_K [ 2048, 2048, 1, 1 ] llama_model_loader: - tensor 122: blk.13.ffn_gate.weight q5_K [ 2048, 5632, 1, 1 ] llama_model_loader: - tensor 123: blk.13.ffn_up.weight q5_K [ 2048, 5632, 1, 1 ] 
llama_model_loader: - tensor 124: blk.13.ffn_down.weight q6_K [ 5632, 2048, 1, 1 ] llama_model_loader: - tensor 125: blk.13.attn_norm.weight f32 [ 2048, 1, 1, 1 ] llama_model_loader: - tensor 126: blk.13.ffn_norm.weight f32 [ 2048, 1, 1, 1 ] llama_model_loader: - tensor 127: blk.14.attn_q.weight q5_K [ 2048, 2048, 1, 1 ] llama_model_loader: - tensor 128: blk.14.attn_k.weight q5_K [ 2048, 256, 1, 1 ] llama_model_loader: - tensor 129: blk.14.attn_v.weight q5_K [ 2048, 256, 1, 1 ] llama_model_loader: - tensor 130: blk.14.attn_output.weight q5_K [ 2048, 2048, 1, 1 ] llama_model_loader: - tensor 131: blk.14.ffn_gate.weight q5_K [ 2048, 5632, 1, 1 ] llama_model_loader: - tensor 132: blk.14.ffn_up.weight q5_K [ 2048, 5632, 1, 1 ] llama_model_loader: - tensor 133: blk.14.ffn_down.weight q5_K [ 5632, 2048, 1, 1 ] llama_model_loader: - tensor 134: blk.14.attn_norm.weight f32 [ 2048, 1, 1, 1 ] llama_model_loader: - tensor 135: blk.14.ffn_norm.weight f32 [ 2048, 1, 1, 1 ] llama_model_loader: - tensor 136: blk.15.attn_q.weight q5_K [ 2048, 2048, 1, 1 ] llama_model_loader: - tensor 137: blk.15.attn_k.weight q5_K [ 2048, 256, 1, 1 ] llama_model_loader: - tensor 138: blk.15.attn_v.weight q5_K [ 2048, 256, 1, 1 ] llama_model_loader: - tensor 139: blk.15.attn_output.weight q5_K [ 2048, 2048, 1, 1 ] llama_model_loader: - tensor 140: blk.15.ffn_gate.weight q5_K [ 2048, 5632, 1, 1 ] llama_model_loader: - tensor 141: blk.15.ffn_up.weight q5_K [ 2048, 5632, 1, 1 ] llama_model_loader: - tensor 142: blk.15.ffn_down.weight q5_K [ 5632, 2048, 1, 1 ] llama_model_loader: - tensor 143: blk.15.attn_norm.weight f32 [ 2048, 1, 1, 1 ] llama_model_loader: - tensor 144: blk.15.ffn_norm.weight f32 [ 2048, 1, 1, 1 ] llama_model_loader: - tensor 145: blk.16.attn_q.weight q5_K [ 2048, 2048, 1, 1 ] llama_model_loader: - tensor 146: blk.16.attn_k.weight q5_K [ 2048, 256, 1, 1 ] llama_model_loader: - tensor 147: blk.16.attn_v.weight q6_K [ 2048, 256, 1, 1 ] llama_model_loader: - tensor 148: blk.16.attn_output.weight q5_K [ 2048, 2048, 1, 1 ] llama_model_loader: - tensor 149: blk.16.ffn_gate.weight q5_K [ 2048, 5632, 1, 1 ] llama_model_loader: - tensor 150: blk.16.ffn_up.weight q5_K [ 2048, 5632, 1, 1 ] llama_model_loader: - tensor 151: blk.16.ffn_down.weight q6_K [ 5632, 2048, 1, 1 ] llama_model_loader: - tensor 152: blk.16.attn_norm.weight f32 [ 2048, 1, 1, 1 ] llama_model_loader: - tensor 153: blk.16.ffn_norm.weight f32 [ 2048, 1, 1, 1 ] llama_model_loader: - tensor 154: blk.17.attn_q.weight q5_K [ 2048, 2048, 1, 1 ] llama_model_loader: - tensor 155: blk.17.attn_k.weight q5_K [ 2048, 256, 1, 1 ] llama_model_loader: - tensor 156: blk.17.attn_v.weight q5_K [ 2048, 256, 1, 1 ] llama_model_loader: - tensor 157: blk.17.attn_output.weight q5_K [ 2048, 2048, 1, 1 ] llama_model_loader: - tensor 158: blk.17.ffn_gate.weight q5_K [ 2048, 5632, 1, 1 ] llama_model_loader: - tensor 159: blk.17.ffn_up.weight q5_K [ 2048, 5632, 1, 1 ] llama_model_loader: - tensor 160: blk.17.ffn_down.weight q5_K [ 5632, 2048, 1, 1 ] llama_model_loader: - tensor 161: blk.17.attn_norm.weight f32 [ 2048, 1, 1, 1 ] llama_model_loader: - tensor 162: blk.17.ffn_norm.weight f32 [ 2048, 1, 1, 1 ] llama_model_loader: - tensor 163: blk.18.attn_q.weight q5_K [ 2048, 2048, 1, 1 ] llama_model_loader: - tensor 164: blk.18.attn_k.weight q5_K [ 2048, 256, 1, 1 ] llama_model_loader: - tensor 165: blk.18.attn_v.weight q5_K [ 2048, 256, 1, 1 ] llama_model_loader: - tensor 166: blk.18.attn_output.weight q5_K [ 2048, 2048, 1, 1 ] llama_model_loader: - tensor 167: 
blk.18.ffn_gate.weight q5_K [ 2048, 5632, 1, 1 ] llama_model_loader: - tensor 168: blk.18.ffn_up.weight q5_K [ 2048, 5632, 1, 1 ] llama_model_loader: - tensor 169: blk.18.ffn_down.weight q5_K [ 5632, 2048, 1, 1 ] llama_model_loader: - tensor 170: blk.18.attn_norm.weight f32 [ 2048, 1, 1, 1 ] llama_model_loader: - tensor 171: blk.18.ffn_norm.weight f32 [ 2048, 1, 1, 1 ] llama_model_loader: - tensor 172: blk.19.attn_q.weight q5_K [ 2048, 2048, 1, 1 ] llama_model_loader: - tensor 173: blk.19.attn_k.weight q5_K [ 2048, 256, 1, 1 ] llama_model_loader: - tensor 174: blk.19.attn_v.weight q6_K [ 2048, 256, 1, 1 ] llama_model_loader: - tensor 175: blk.19.attn_output.weight q5_K [ 2048, 2048, 1, 1 ] llama_model_loader: - tensor 176: blk.19.ffn_gate.weight q5_K [ 2048, 5632, 1, 1 ] llama_model_loader: - tensor 177: blk.19.ffn_up.weight q5_K [ 2048, 5632, 1, 1 ] llama_model_loader: - tensor 178: blk.19.ffn_down.weight q6_K [ 5632, 2048, 1, 1 ] llama_model_loader: - tensor 179: blk.19.attn_norm.weight f32 [ 2048, 1, 1, 1 ] llama_model_loader: - tensor 180: blk.19.ffn_norm.weight f32 [ 2048, 1, 1, 1 ] llama_model_loader: - tensor 181: blk.20.attn_q.weight q5_K [ 2048, 2048, 1, 1 ] llama_model_loader: - tensor 182: blk.20.attn_k.weight q5_K [ 2048, 256, 1, 1 ] llama_model_loader: - tensor 183: blk.20.attn_v.weight q6_K [ 2048, 256, 1, 1 ] llama_model_loader: - tensor 184: blk.20.attn_output.weight q5_K [ 2048, 2048, 1, 1 ] llama_model_loader: - tensor 185: blk.20.ffn_gate.weight q5_K [ 2048, 5632, 1, 1 ] llama_model_loader: - tensor 186: blk.20.ffn_up.weight q5_K [ 2048, 5632, 1, 1 ] llama_model_loader: - tensor 187: blk.20.ffn_down.weight q6_K [ 5632, 2048, 1, 1 ] llama_model_loader: - tensor 188: blk.20.attn_norm.weight f32 [ 2048, 1, 1, 1 ] llama_model_loader: - tensor 189: blk.20.ffn_norm.weight f32 [ 2048, 1, 1, 1 ] llama_model_loader: - tensor 190: blk.21.attn_q.weight q5_K [ 2048, 2048, 1, 1 ] llama_model_loader: - tensor 191: blk.21.attn_k.weight q5_K [ 2048, 256, 1, 1 ] llama_model_loader: - tensor 192: blk.21.attn_v.weight q6_K [ 2048, 256, 1, 1 ] llama_model_loader: - tensor 193: blk.21.attn_output.weight q5_K [ 2048, 2048, 1, 1 ] llama_model_loader: - tensor 194: blk.21.ffn_gate.weight q5_K [ 2048, 5632, 1, 1 ] llama_model_loader: - tensor 195: blk.21.ffn_up.weight q5_K [ 2048, 5632, 1, 1 ] llama_model_loader: - tensor 196: blk.21.ffn_down.weight q6_K [ 5632, 2048, 1, 1 ] llama_model_loader: - tensor 197: blk.21.attn_norm.weight f32 [ 2048, 1, 1, 1 ] llama_model_loader: - tensor 198: blk.21.ffn_norm.weight f32 [ 2048, 1, 1, 1 ] llama_model_loader: - tensor 199: output_norm.weight f32 [ 2048, 1, 1, 1 ] llama_model_loader: - tensor 200: output.weight q6_K [ 2048, 32003, 1, 1 ] llama_model_loader: - kv 0: general.architecture str
llama_model_loader: - kv 1: general.name str
llama_model_loader: - kv 2: llama.context_length u32
llama_model_loader: - kv 3: llama.embedding_length u32
llama_model_loader: - kv 4: llama.block_count u32
llama_model_loader: - kv 5: llama.feed_forward_length u32
llama_model_loader: - kv 6: llama.rope.dimension_count u32
llama_model_loader: - kv 7: llama.attention.head_count u32
llama_model_loader: - kv 8: llama.attention.head_count_kv u32
llama_model_loader: - kv 9: llama.attention.layer_norm_rms_epsilon f32
llama_model_loader: - kv 10: llama.rope.freq_base f32
llama_model_loader: - kv 11: general.file_type u32
llama_model_loader: - kv 12: tokenizer.ggml.model str
llama_model_loader: - kv 13: tokenizer.ggml.tokens arr
llama_model_loader: - kv 14: tokenizer.ggml.scores arr
llama_model_loader: - kv 15: tokenizer.ggml.token_type arr
llama_model_loader: - kv 16: tokenizer.ggml.bos_token_id u32
llama_model_loader: - kv 17: tokenizer.ggml.eos_token_id u32
llama_model_loader: - kv 18: tokenizer.ggml.unknown_token_id u32
llama_model_loader: - kv 19: general.quantization_version u32
llama_model_loader: - type f32: 45 tensors
llama_model_loader: - type q5_K: 135 tensors
llama_model_loader: - type q6_K: 21 tensors
llm_load_vocab: special tokens definition check successful ( 262/32003 ).
llm_load_print_meta: format = GGUF V2
llm_load_print_meta: arch = llama
llm_load_print_meta: vocab type = SPM
llm_load_print_meta: n_vocab = 32003
llm_load_print_meta: n_merges = 0
llm_load_print_meta: n_ctx_train = 2048
llm_load_print_meta: n_embd = 2048
llm_load_print_meta: n_head = 32
llm_load_print_meta: n_head_kv = 4
llm_load_print_meta: n_layer = 22
llm_load_print_meta: n_rot = 64
llm_load_print_meta: n_gqa = 8
llm_load_print_meta: f_norm_eps = 0.0e+00
llm_load_print_meta: f_norm_rms_eps = 1.0e-05
llm_load_print_meta: f_clamp_kqv = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: n_ff = 5632
llm_load_print_meta: rope scaling = linear
llm_load_print_meta: freq_base_train = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_yarn_orig_ctx = 2048
llm_load_print_meta: rope_finetuned = unknown
llm_load_print_meta: model type = ?B
llm_load_print_meta: model ftype = mostly Q5_K - Medium
llm_load_print_meta: model params = 1.10 B
llm_load_print_meta: model size = 745.12 MiB (5.68 BPW)
llm_load_print_meta: general.name = abdgrt_tinyllama-2-1b-miniguanaco
llm_load_print_meta: BOS token = 1 ''
llm_load_print_meta: EOS token = 2 ''
llm_load_print_meta: UNK token = 0 ''
llm_load_print_meta: LF token = 13 '<0x0A>'
llm_load_tensors: ggml ctx size = 0.07 MB
llm_load_tensors: mem required = 745.20 MB
......................................................................................
llama_new_context_with_model: n_ctx = 4096
llama_new_context_with_model: freq_base = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_new_context_with_model: kv self size = 88.00 MB
llama_build_graph: non-view tensors processed: 510/510
ggml_metal_init: allocating
ggml_metal_init: found device: AMD Radeon RX 6800
ggml_metal_init: found device: Intel(R) UHD Graphics 630
ggml_metal_init: picking default device: AMD Radeon RX 6800
ggml_metal_init: default.metallib not found, loading from source
[nodemon] app crashed - waiting for file changes before starting...
```

timothycarambat commented 9 months ago

This is why local LLM support is experimental. This error is related to node-llama-cpp and not AnythingLLM; however, because we cannot force the build at runtime, you should do the following to build the binaries needed for macOS on Intel: `cd server && npx --no node-llama-cpp build --no-metal`.
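
For anyone hitting the same thing, the full sequence looks roughly like this (a sketch based on the command above; it assumes you are running from a checkout of this repo and that your Mac reports an Intel CPU):

```bash
# Confirm this is an Intel Mac (Apple Silicon reports "arm64" and should keep Metal).
uname -m                              # expect "x86_64" on an Intel Mac mini
sysctl -n machdep.cpu.brand_string    # expect an "Intel(R) Core(TM) i7..." string

# Rebuild the node-llama-cpp binaries without Metal support.
cd server
npx --no node-llama-cpp build --no-metal

# Restart the dev server from the repo root and try the local model again.
cd ..
yarn dev:server
```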

node-llama-cpp builds with Apple Metal support by default, and this error seems related to that.
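
One way to sanity-check that the rebuild took effect (a sketch; the log file path is arbitrary): restart the dev server, capture its output, and confirm the Metal error no longer appears.

```bash
# From the repo root: restart the dev server and keep a copy of its output.
yarn dev:server 2>&1 | tee /tmp/anythingllm-server.log

# In another terminal, check whether the Metal code path is still being taken.
grep -iE "ggml_metal_init|default\.metallib" /tmp/anythingllm-server.log \
  || echo "no Metal init lines - CPU build in use"
```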