Closed ChenZhao44 closed 1 year ago
Hi @ChenZhao44, do you mind sharing what version of Khoj you're on? You can check by running `khoj --version`. If you're using a pre-release version, you might encounter this error, as we've upgraded the GPT4All dependency to v2.0.0, which isn't compatible with the older llama v2 binary.

If you're on the v0.13.0 release, I would recommend deleting the existing binary with `rm /Users/chenzhao/.cache/gpt4all/llama-2-7b-chat.ggmlv3.q4_0.bin` and restarting Khoj so it downloads the model again. If you're not on the latest Khoj release, try upgrading with `pip install --upgrade khoj-assistant`. That should make the downloads a little more stable.
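The recovery steps above can be sketched as a small shell script. This is a hedged sketch, not an official Khoj script: the cache path is the one quoted in this thread and may differ on your machine, and each step is guarded so it is safe to run even if the file or the `khoj` command is missing.

```shell
#!/bin/sh
# Sketch of the recovery steps from this thread (path is an example).
BINARY="$HOME/.cache/gpt4all/llama-2-7b-chat.ggmlv3.q4_0.bin"

# Delete the stale llama binary, if present, so Khoj re-downloads it on restart.
if [ -f "$BINARY" ]; then
    rm -v "$BINARY"
fi

# If Khoj is installed, print its version and upgrade to the latest release.
if command -v khoj >/dev/null 2>&1; then
    khoj --version
    pip install --upgrade khoj-assistant
fi
```

After running this, restart Khoj so it fetches a binary compatible with the upgraded GPT4All dependency.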
@sabaimran Thanks for your reply. My Khoj version is v0.13.0, but the default version of `gpt4all` installed on my Mac via `pip install khoj-assistant` is v1.0.12. I tried to fix it by manually installing the latest version, but it still didn't work.
I found this issue, which might be related. The problem may be with my conda installation, which runs in Rosetta mode. It was fixed by installing the ARM-native conda on my Mac.
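A quick way to tell whether your Python/conda is running under Rosetta on an M1 Mac is to inspect the reported machine type. This is a hedged sketch (the `running_under_rosetta` helper is my own, not from the thread): on Apple Silicon, an x86_64-reporting interpreter is being translated by Rosetta, while an ARM-native install reports `arm64`. Note the heuristic only applies on Apple Silicon hardware; an Intel Mac legitimately reports `x86_64`.

```python
import platform

def running_under_rosetta() -> bool:
    """Heuristic (assumes Apple Silicon hardware): a macOS Python that
    reports an x86_64 machine type is running under Rosetta translation
    rather than as a native arm64 process."""
    return platform.system() == "Darwin" and platform.machine() == "x86_64"

# An ARM-native conda on an M1 Mac prints 'arm64' here; a Rosetta
# (x86_64) conda prints 'x86_64'.
print(platform.machine())
print(running_under_rosetta())
```

If this prints `x86_64` on an M1 machine, reinstalling the ARM-native conda, as described above, is the fix that worked here.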
Thanks for clarifying that using the ARM-native Conda fixed the issue for you! It'd be a useful reference for other folks who run into the same issue.
I am not able to load local models on my M1 MacBook Air.
Error messages are as follows.