PromtEngineer / localGPT

Chat with your documents on your local device using GPT models. No data leaves your device, and it is 100% private.
Apache License 2.0
20.06k stars · 2.24k forks

Error on run_localGPT.py --device_type mps #608

Open Fluxkom opened 1 year ago

Fluxkom commented 1 year ago

```
Enter a query: Tell me about Orcas
ggml_metal_graph_compute: command buffer 0 failed with status 5
GGML_ASSERT: /private/var/folders/qs/yhqls5xn3c3fdg5tl00tzfnm0000gn/T/pip-install-ulcd5sl7/llama-cpp-python_3c33c62f26e243acb2c2fa6c61e5606d/vendor/llama.cpp/ggml-metal.m:1185: false
zsh: abort      python run_localGPT.py --device_type mps
stefan@MBFVFFG677Q05G localGPT % /Users/stefan/.pyenv/versions/3.10.13/lib/python3.10/multiprocessing/resource_tracker.py:224: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '
```

It seems to be trying to run the GGUF model with GGML settings.
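For anyone debugging the suspected GGUF-vs-GGML mismatch, one quick sanity check is to look at the model file's magic bytes: GGUF files begin with the four ASCII bytes `GGUF`, while older GGML-era files do not. This is a minimal sketch, not part of localGPT itself; the function name and paths are illustrative only:

```python
# Minimal sketch (not part of localGPT): check a model file's magic bytes.
# GGUF files start with the 4-byte magic b"GGUF"; older GGML-family files do not.

def is_gguf(path: str) -> bool:
    """Return True if the file at `path` starts with the GGUF magic header."""
    with open(path, "rb") as f:
        return f.read(4) == b"GGUF"
```

If this returns `False` for the model file you pass in, the file is not actually a GGUF model, and loading it with GGUF-oriented settings would be expected to fail.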

TechInnovate01 commented 11 months ago

Same error after updating to the latest API and code.

CanCanNeed-pro commented 10 months ago

I have the same problem; I'm using an M1 chip.

Advik29 commented 10 months ago

I'm facing the same issue. The exact cause of the error is unclear.

gregorioosorio commented 9 months ago

Same problem here.

Has anyone been able to work this out? Thanks!

arioboo commented 8 months ago

Same here, with the same M1 setup.

machineska commented 7 months ago

Same issue here with M1 setup