cocktailpeanut / dalai

The simplest way to run LLaMA on your local machine
https://cocktailpeanut.github.io/dalai

Can't install models #396

Open · yann2-0 opened this issue 1 year ago

yann2-0 commented 1 year ago

Good morning. When I run the command npx dalai llama install 7B, everything goes fine until the make step, which fails. I think the problem is the condition around line 68 of the Makefile, but I'm not sure. As a result, the models do not install. This is an Ubuntu 22.04 VirtualBox VM with node 18.16.0 and python3 3.10.6. Does anyone have an idea?

yann@yann-VirtualBox:~$ exit
exit
2 [Error: ENOENT: no such file or directory, rename '/home/yann/dalai/llama/models' -> '/home/yann/dalai/tmp/models'] {
  errno: -2,
  code: 'ENOENT',
  syscall: 'rename',
  path: '/home/yann/dalai/llama/models',
  dest: '/home/yann/dalai/tmp/models'
}
3 [Error: ENOENT: no such file or directory, lstat '/home/yann/dalai/llama'] {
  errno: -2,
  code: 'ENOENT',
  syscall: 'lstat',
  path: '/home/yann/dalai/llama'
}
mkdir /home/yann/dalai/llama
try fetching /home/yann/dalai/llama https://github.com/candywrap/llama.cpp.git
[E] Pull TypeError: Cannot read properties of null (reading 'split')
    at new GitConfig (/usr/lib/node_modules/dalai/node_modules/isomorphic-git/index.cjs:1604:30)
    at GitConfig.from (/usr/lib/node_modules/dalai/node_modules/isomorphic-git/index.cjs:1627:12)
    at GitConfigManager.get (/usr/lib/node_modules/dalai/node_modules/isomorphic-git/index.cjs:1750:22)
    at async _getConfig (/usr/lib/node_modules/dalai/node_modules/isomorphic-git/index.cjs:5397:18)
    at async normalizeAuthorObject (/usr/lib/node_modules/dalai/node_modules/isomorphic-git/index.cjs:5407:19)
    at async Object.pull (/usr/lib/node_modules/dalai/node_modules/isomorphic-git/index.cjs:11682:20)
    at async Dalai.add (/usr/lib/node_modules/dalai/index.js:394:7)
    at async Dalai.install (/usr/lib/node_modules/dalai/index.js:346:5) { caller: 'git.pull' }
try cloning /home/yann/dalai/llama https://github.com/candywrap/llama.cpp.git
next llama [AsyncFunction: make]
make
exec: make in /home/yann/dalai/llama
make exit
yann@yann-VirtualBox:~/dalai/llama$ make
I llama.cpp build info:
I UNAME_S:  Linux
I UNAME_P:  x86_64
I UNAME_M:  x86_64
I CFLAGS:   -I. -O3 -DNDEBUG -std=c11 -fPIC -pthread -mavx -mavx2 -msse3
I CXXFLAGS: -I. -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -pthread
I LDFLAGS:
I CC:       cc (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
I CXX:      g++ (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0

cc -I. -O3 -DNDEBUG -std=c11 -fPIC -pthread -mavx -mavx2 -msse3 -c ggml.c -o ggml.o
In file included from /usr/lib/gcc/x86_64-linux-gnu/11/include/immintrin.h:101,
                 from ggml.c:155:
ggml.c: In function ‘ggml_vec_dot_f16’:
/usr/lib/gcc/x86_64-linux-gnu/11/include/f16cintrin.h:52:1: error: inlining failed in call to ‘always_inline’ ‘_mm256_cvtph_ps’: target specific option mismatch
   52 | _mm256_cvtph_ps (__m128i __A)
      | ^~~~~~~~~~~~~~~
ggml.c:915:33: note: called from here
  915 | #define GGML_F32Cx8_LOAD(x) _mm256_cvtph_ps(_mm_loadu_si128((__m128i *)(x)))
      |                             ^~~~~~~~~~~~~~~
ggml.c:925:37: note: in expansion of macro ‘GGML_F32Cx8_LOAD’
  925 | #define GGML_F16_VEC_LOAD(p, i) GGML_F32Cx8_LOAD(p)
      |                                 ^~~~~~~~~~~~~~~~
ggml.c:1319:21: note: in expansion of macro ‘GGML_F16_VEC_LOAD’
 1319 |             ay[j] = GGML_F16_VEC_LOAD(y + i + j*GGML_F16_EPR, j);
      |                     ^~~~~~~~~~~~~~~~~
[the same ‘_mm256_cvtph_ps’ / ‘target specific option mismatch’ error is reported several more times, for ggml.c:1318 (ax[j]) and ggml.c:1319 (ay[j])]
make: *** [Makefile:221 : ggml.o] Erreur 1
yann@yann-VirtualBox:~/dalai/llama$ exit
exit
ERROR Error: running 'make' failed
    at LLaMA.make (/usr/lib/node_modules/dalai/llama.js:50:15)
    at async Dalai.add (/usr/lib/node_modules/dalai/index.js:412:5)
    at async Dalai.install (/usr/lib/node_modules/dalai/index.js:346:5)
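A side note on the compile error (not verified on this exact VM, so treat it as a hypothesis): "target specific option mismatch" on _mm256_cvtph_ps is what gcc prints when ggml.c is built with -mavx/-mavx2 while the F16C extension is not enabled, and the CFLAGS line above indeed has no -mf16c. VirtualBox guests often don't advertise the same instruction sets as the host, so the first thing worth checking is what the VM actually exposes. The make invocation at the end is only a guess at a manual workaround (it assumes the Makefile honours CFLAGS passed on the command line), not the official dalai fix.

```bash
# Check which SIMD extensions the VirtualBox guest actually advertises
# (the host CPU may support f16c/avx2 even when the VM does not expose them).
grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | grep -E '^(sse3|avx|avx2|f16c|fma)$' | sort -u

# Guess at a manual workaround: rebuild with flags that match what the guest
# reports instead of the hard-coded -mavx -mavx2 -msse3.
cd ~/dalai/llama
make clean
make CFLAGS="-I. -O3 -DNDEBUG -std=c11 -fPIC -pthread -march=native"
```

If the flags really are missing inside the guest, exposing them through the VM settings (or building on a machine that actually has AVX2 and F16C) is the other route.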

MrAnayDongre commented 1 year ago

For the alpaca 7B model you can follow the link. It is the only direct download I could find.
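If you do grab the weights from a mirror like that, a quick sanity check on the downloaded file is cheap insurance before pointing dalai at it. The path and file name below are only placeholders; compare the hash against whatever the mirror publishes.

```bash
# Placeholder path: adjust to wherever the file was actually downloaded.
ls -lh ~/Downloads/ggml-model-q4_0.bin     # 7B q4_0 weights should be on the order of 4 GB
sha256sum ~/Downloads/ggml-model-q4_0.bin  # compare against the checksum published by the mirror
```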

Nocturna22 commented 1 year ago

Download the files manually and place them like this (the original comment included a screenshot of the directory layout; a sketch of that layout follows the links below):

Try these: alpaca7B, alpaca13B
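Since the screenshot itself isn't preserved here, this is a rough sketch of the layout dalai expects, assuming the usual ~/dalai/&lt;engine&gt;/models/&lt;SIZE&gt;/ structure (the llama path shows up in the error log above; the alpaca path and the ggml-model-q4_0.bin file name are the conventional ones, so adjust them to whatever you actually downloaded):

```bash
# Hypothetical placement of manually downloaded alpaca weights.
mkdir -p ~/dalai/alpaca/models/7B
mv ~/Downloads/ggml-model-q4_0.bin ~/dalai/alpaca/models/7B/ggml-model-q4_0.bin

# Same idea for 13B, or for llama weights under ~/dalai/llama/models/<SIZE>/.
mkdir -p ~/dalai/alpaca/models/13B
```

The folder names (7B, 13B) are what dalai uses to identify the model size, so they need to match the weights you actually put there.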