keldenl / gpt-llama.cpp

A llama.cpp drop-in replacement for OpenAI's GPT endpoints, allowing GPT-powered apps to run off local llama.cpp models instead of OpenAI.

npm error on gpt-llama.cpp #44

Open C0deXG opened 1 year ago

C0deXG commented 1 year ago

npm install

> gpt-llama.cpp@0.2.4 postinstall
> npm run updateengines && cd InferenceEngine/embeddings/all-mpnet-base-v2 && python -m pip install -r requirements.txt

> gpt-llama.cpp@0.2.4 updateengines
> git submodule foreach git pull

sh: python: command not found
npm ERR! code 127
npm ERR! path /Users/khederyusuf/Desktop/llama.cpp/gpt-llama.cpp
npm ERR! command failed
npm ERR! command sh -c npm run updateengines && cd InferenceEngine/embeddings/all-mpnet-base-v2 && python -m pip install -r requirements.txt

npm ERR! A complete log of this run can be found in:
npm ERR!     /Users/khederyusuf/.npm/_logs/2023-05-12T10_55_36_676Z-debug-0.log

OfficiallyMelon commented 1 year ago

install Python
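The postinstall script calls `python` (not `python3`), so this error can appear even on a machine that has Python 3 installed but only exposes a `python3` binary. Below is a minimal sketch of two common ways to make `python` resolvable before re-running the install; the exact commands depend on how Python is managed on your system, and the environment name `gpt-llama` is just an example:

```sh
# Option 1: Homebrew on macOS (may only link `python3`; check what the shell sees)
brew install python
which python python3

# Option 2: a conda environment, which puts a `python` binary on PATH
conda create -n gpt-llama python=3.10 -y
conda activate gpt-llama

# then re-run the install from the gpt-llama.cpp repo root
npm install
```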

alexl83 commented 1 year ago

I have installed a dedicated conda environment, but npm install still doesn't work. I think it's spawning an uninitialized shell for the install step.

Log attached: npm.log

(llama.cpp) ┌──(alex㉿moppio)-[~/AI/llama.cpp/gpt-llama.cpp]
└─$ npm clean-install

> gpt-llama.cpp@0.2.6 postinstall
> npm run update-engines && cd InferenceEngine/embeddings/all-mpnet-base-v2 && python -m pip install -r requirements.txt

> gpt-llama.cpp@0.2.6 update-engines
> git submodule foreach git pull &&  npm run ggml-build

> gpt-llama.cpp@0.2.6 ggml-build
> cd InferenceEngine/completion/ggml && mkdir -p build && cd build && cmake .. && make clean && make

CMake Warning:
  Ignoring extra path from command line:

   ".."

CMake Error: The source directory "/home/alex/AI/llama.cpp/gpt-llama.cpp/InferenceEngine/completion/ggml" does not appear to contain CMakeLists.txt.
Specify --help for usage, or press the help button on the CMake GUI.
npm ERR! code 1
npm ERR! path /home/alex/AI/llama.cpp/gpt-llama.cpp
npm ERR! command failed
npm ERR! command sh -c -- npm run update-engines && cd InferenceEngine/embeddings/all-mpnet-base-v2 && python -m pip install -r requirements.txt

npm ERR! A complete log of this run can be found in:
npm ERR!     /home/alex/.npm/_logs/2023-05-13T16_42_02_551Z-debug-0.log

(llama.cpp) ┌──(alex㉿moppio)-[~/AI/llama.cpp/gpt-llama.cpp]

It worked in conda until the last update; could you please have a look at it?
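For context (not confirmed as the cause of this particular failure), a CMake complaint that a directory such as InferenceEngine/completion/ggml contains no CMakeLists.txt is what typically shows up when a git submodule was never initialized and the directory is empty. A hedged sketch of the usual remedy, assuming the ggml engine is tracked as a submodule:

```sh
# from the gpt-llama.cpp repo root: fetch and check out all submodule contents
git submodule update --init --recursive

# confirm the ggml sources are present before re-running the build
ls InferenceEngine/completion/ggml/CMakeLists.txt

npm clean-install
```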

keldenl commented 1 year ago

Sorry, lots of changes that I probably should've decoupled -- please pull again; all the new inference stuff should now be decoupled from npm install.

alexl83 commented 1 year ago

It's working now, thanks! :)