netease-youdao / QAnything

Question and Answer based on Anything.
https://qanything.ai
Apache License 2.0

[BUG] llama_new_context_with_model: failed to initialize Metal backend #256

Open liuchunming033 opened 3 months ago

liuchunming033 commented 3 months ago

Is there an existing issue / discussion for this?

Is there an existing answer for this in the FAQ?

Current Behavior

Running `bash scripts/run_for_3B_in_M1_mac.sh` fails with the following error:

```
program_source:9022:9: error: invalid type 'const constant int64_t &' (aka 'const constant long &') for buffer declaration
        constant int64_t & ne1,
        ^~~~~~
program_source:9022:22: note: type 'int64_t' (aka 'long') cannot be used in buffer pointee type
        constant int64_t & ne1,
                 ^
program_source:9023:9: error: invalid type 'const constant uint64_t &' (aka 'const constant unsigned long &') for buffer declaration
        constant uint64_t & nb1,
        ^~~~~~
program_source:9023:21: note: type 'uint64_t' (aka 'unsigned long') cannot be used in buffer pointee type
        constant uint64_t & nb1,
                 ^
}
llama_new_context_with_model: failed to initialize Metal backend
[2024-04-15 20:20:44 +0800] [95102] [ERROR] Experienced exception while trying to serve ...
[2024-04-15 20:20:44 +0800] [95102] [INFO] Server Stopped
Traceback (most recent call last):
  File "/Users/chunming.liu/opt/anaconda3/envs/qanything-python/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/Users/chunming.liu/opt/anaconda3/envs/qanything-python/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/Users/chunming.liu/work/QAnything/qanything_kernel/qanything_server/sanic_api.py", line 207, in <module>
    app.run(host=args.host, port=args.port, single_process=True, access_log=False)
  File "/Users/chunming.liu/opt/anaconda3/envs/qanything-python/lib/python3.10/site-packages/sanic/mixins/startup.py", line 215, in run
    serve(primary=self)  # type: ignore
  File "/Users/chunming.liu/opt/anaconda3/envs/qanything-python/lib/python3.10/site-packages/sanic/mixins/startup.py", line 958, in serve_single
    worker_serve(monitor_publisher=None, **kwargs)
  File "/Users/chunming.liu/opt/anaconda3/envs/qanything-python/lib/python3.10/site-packages/sanic/worker/serve.py", line 143, in worker_serve
    raise e
  File "/Users/chunming.liu/opt/anaconda3/envs/qanything-python/lib/python3.10/site-packages/sanic/worker/serve.py", line 117, in worker_serve
    return _serve_http_1(
  File "/Users/chunming.liu/opt/anaconda3/envs/qanything-python/lib/python3.10/site-packages/sanic/server/runners.py", line 223, in _serve_http_1
    loop.run_until_complete(app._server_event("init", "before"))
  File "uvloop/loop.pyx", line 1517, in uvloop.loop.Loop.run_until_complete
  File "/Users/chunming.liu/opt/anaconda3/envs/qanything-python/lib/python3.10/site-packages/sanic/app.py", line 1764, in _server_event
    await self.dispatch(
  File "/Users/chunming.liu/opt/anaconda3/envs/qanything-python/lib/python3.10/site-packages/sanic/signals.py", line 208, in dispatch
    return await dispatch
  File "/Users/chunming.liu/opt/anaconda3/envs/qanything-python/lib/python3.10/site-packages/sanic/signals.py", line 183, in _dispatch
    raise e
  File "/Users/chunming.liu/opt/anaconda3/envs/qanything-python/lib/python3.10/site-packages/sanic/signals.py", line 167, in _dispatch
    retval = await maybe_coroutine
  File "/Users/chunming.liu/opt/anaconda3/envs/qanything-python/lib/python3.10/site-packages/sanic/app.py", line 1315, in _listener
    await maybe_coro
  File "/Users/chunming.liu/work/QAnything/qanything_kernel/qanything_server/sanic_api.py", line 177, in init_local_doc_qa
    local_doc_qa.init_cfg(args=args)
  File "/Users/chunming.liu/work/QAnything/qanything_kernel/core/local_doc_qa.py", line 64, in init_cfg
    self.llm: LlamaCPPCustomLLM = LlamaCPPCustomLLM(args)
  File "/Users/chunming.liu/work/QAnything/qanything_kernel/connector/llm/llm_for_llamacpp.py", line 25, in __init__
    self.llm = Llama(
  File "/Users/chunming.liu/opt/anaconda3/envs/qanything-python/lib/python3.10/site-packages/llama_cpp/llama.py", line 337, in __init__
    self._ctx = _LlamaContext(
  File "/Users/chunming.liu/opt/anaconda3/envs/qanything-python/lib/python3.10/site-packages/llama_cpp/_internals.py", line 265, in __init__
    raise ValueError("Failed to create llama_context")
ValueError: Failed to create llama_context
```
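The traceback shows the failure originates when `llm_for_llamacpp.py` constructs `llama_cpp.Llama`, which in turn fails while initializing the Metal backend. As a possible workaround sketch (an assumption, not verified on this setup): llama-cpp-python's `n_gpu_layers` parameter controls GPU offload, and setting it to `0` keeps all layers on the CPU so the Metal backend is never initialized. The helper name and model path below are hypothetical, for illustration only.

```python
# Hypothetical workaround sketch: build constructor arguments for
# llama_cpp.Llama with GPU offload disabled. n_gpu_layers and n_ctx are
# real llama-cpp-python parameters; the model path is a placeholder.
def cpu_only_llama_kwargs(model_path: str, n_ctx: int = 4096) -> dict:
    return {
        "model_path": model_path,
        "n_gpu_layers": 0,  # 0 layers offloaded -> Metal backend is skipped
        "n_ctx": n_ctx,
    }

kwargs = cpu_only_llama_kwargs("/path/to/model.gguf")
# In llm_for_llamacpp.py one would then call: self.llm = Llama(**kwargs)
```

This trades GPU acceleration for a working CPU-only context, which may be acceptable for a 3B model.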

Expected Behavior

No response

Environment

- OS:
- NVIDIA Driver:
- CUDA:
- docker:
- docker-compose:
- NVIDIA GPU:
- NVIDIA GPU Memory:

QAnything logs

No response

Steps To Reproduce

No response

Anything else?

No response

xixihahaliu commented 3 months ago

The key error message is on this line: `llama_new_context_with_model: failed to initialize Metal backend`. May I ask what device you are using? Is it an Intel Mac? Currently only Macs with M1-series chips are supported.
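One quick way to check is the machine architecture: `platform.machine()` reports `arm64` on Apple Silicon and `x86_64` on Intel Macs. A minimal sketch (the helper name is illustrative, not part of QAnything):

```python
import platform
from typing import Optional

def is_apple_silicon(machine: Optional[str] = None) -> bool:
    """Return True when the machine string indicates Apple Silicon (arm64).

    Intel Macs report x86_64; if no string is given, query this host
    via platform.machine().
    """
    machine = machine or platform.machine()
    return machine == "arm64"
```

Note that a Python interpreter running under Rosetta 2 can report `x86_64` even on an M1, so an arm64-native environment is also required.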