Closed brianjking closed 11 months ago
Alright, it seems Conda was the problem (or there was some other issue with that environment). Either way, following the instructions exactly proved effective: switching back to the pyenv option, even though both use Python 3.11, fixed it.
Thanks!
I used pyenv to install and set Python 3.11 and I get the same error when running locally. Any ideas? (iMac, Intel)
Same error on two Macs (Intel and M1).
Same here, Mac Intel; couldn't figure out what happened. Looking forward to a solution.
same here, Mac Intel as well...
same here (mac intel)
Same here (Mac intel)
Also the same here (Mac intel)
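Since the reports here span both Intel and M1 machines, a quick sanity check of which architecture your shell is actually running under (a terminal under Rosetta on Apple Silicon will report x86_64) is:

```shell
# Print the machine architecture; x86_64 = Intel (or Rosetta), arm64 = Apple Silicon.
arch="$(uname -m)"
case "$arch" in
  x86_64) echo "intel-or-rosetta" ;;
  arm64)  echo "apple-silicon" ;;
  *)      echo "other: $arch" ;;
esac
```

On an Intel Mac, the `-DLLAMA_METAL=off` workaround mentioned below is the one that reportedly helps.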
It works with CPU:
CMAKE_ARGS="-DLLAMA_METAL=off" pip install --force-reinstall --no-cache-dir llama-cpp-python
but not with GPU.
Have there been any updates on this? I am having the same issue (Mac Intel)
Hello,
Great work, thank you! To reiterate:
Machine Details
Steps to Reproduce
1. git clone https://github.com/imartinez/privateGPT
2. poetry install --with ui,local
3. poetry run python scripts/setup
4. CMAKE_ARGS="-DLLAMA_METAL=on" pip install --force-reinstall --no-cache-dir llama-cpp-python
5. PGPT_PROFILES=local make run
--> This is where the errors come from.
I'm able to use the OpenAI version by running
PGPT_PROFILES=openai make run
I use both Llama 2 and Mistral 7B (and other variants) via LM Studio and via Simon's llm tool, so I'm not sure why the Metal failure is occurring.
I installed via https://docs.privategpt.dev/#section/Introduction
Once I run
PGPT_PROFILES=local make run
I get this error: