Open jeremymcmullin opened 2 months ago
This link is blocked by bitly. Why not just use the original URL?
Seems like I got lucky: bit.ly's system flagged that last link as suspicious and I never reached the destination. Clever social engineering: the comment referenced "changing the compiler", making it seem relevant (there's a compile step I went through in Step 3). The comment is now gone, possibly removed by GitHub (?), and the profile is also returning a 404. Reminder to NOT click links. Any real people here who can help?
According to this GH issue on the gpt4all project: https://github.com/nomic-ai/gpt4all/issues/2744#issuecomment-2256296921
> You are using an Intel x86_64 build of Python, which runs in Rosetta and does not support AVX instructions. ... This is not a supported use of the GPT4All Python binding. Please install in an environment that uses an arm64 build of Python.
So I guess this is me screwed. Running `file "$(which python)"` at the terminal reports "Mach-O 64-bit executable x86_64" rather than the arm64 version.
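To double-check that diagnosis, you can also ask Python itself which architecture it was built for (a sketch; it assumes `python3` on your PATH is the same interpreter `llm` is using):

```shell
# What kind of binary is the python3 on PATH?
file "$(command -v python3)"

# Python's own view of its build architecture: "arm64" vs "x86_64".
py_arch="$(python3 -c 'import platform; print(platform.machine())')"
echo "python build architecture: $py_arch"
```

If the second line prints `x86_64` on an Apple Silicon machine, you are in the Rosetta situation the gpt4all issue describes.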
Same thing happened to me. `brew uninstall llm` + `brew install llm` reset things for me, because even undoing the gpt4all install with `llm uninstall llm-gpt4all -y` crashes.
Yeah, same here. Had to go to `brew uninstall` as no other `llm` commands, including `uninstall`, would work. Worth including in the docs as a pre-req and warning for users on i386/Intel builds @simonw
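For anyone landing here later, the recovery the commenters above describe is roughly this (a sketch; it assumes `llm` was installed via Homebrew, and uses the built-in `llm plugins` command to confirm the plugin list is empty again):

```shell
# Recovery sketch: Homebrew uninstall/install resets llm and its
# plugin environment, since `llm uninstall llm-gpt4all -y` itself crashes.
if command -v brew >/dev/null 2>&1; then
  brew uninstall llm
  brew install llm
  llm plugins          # should now report no installed plugins
  recovered="reinstalled via Homebrew"
else
  recovered="brew not found on this machine; steps skipped"
fi
echo "$recovered"
```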
Background / context
What I did

1. `brew install llm`
2. `llm install llm-llama-cpp`
3. `llm install llama-cpp-python`
4. `llm models` (success: got 11 listed, all OpenAI)
5. `llm install llm-gpt4all` (I didn't do this in a virtual env btw, just at the command line in Terminal). Among other things I got this from the terminal:
6. `llm models` (but this time got the error below)

No `llm` related commands seem to work, e.g. `llm --help` (I always get some traceback error). Next will be to try and uninstall llm via Homebrew... but I've not gone there yet. No idea if that will work anyway. And I wanted to see if the community here could help first. :)
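Until the docs carry the warning suggested above, a pre-flight check like this before running `llm install llm-gpt4all` would have caught the problem (a sketch; the `arm64`/`x86_64` values are the build architectures macOS reports):

```shell
# Refuse early if this is an x86_64 (Rosetta-translated) Python build on macOS,
# since the GPT4All binding needs an arm64 build of Python there.
py_arch="$(python3 -c 'import platform; print(platform.machine())')"
if [ "$(uname -s)" = "Darwin" ] && [ "$py_arch" = "x86_64" ]; then
  echo "x86_64 Python build: llm-gpt4all needs an arm64 Python" >&2
else
  echo "python build arch is $py_arch; ok to proceed"
fi
```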