hiyouga / LLaMA-Factory

Efficiently Fine-Tune 100+ LLMs in WebUI (ACL 2024)
https://arxiv.org/abs/2403.13372
Apache License 2.0

NPU glm-4-9b-chat API inference error #4165

Closed · msqp closed this issue 3 months ago

msqp commented 3 months ago

Reminder

System Info

torch-npu=2.1.0.post3
torch=2.1.0
Ascend-cann-toolkit_8.0.RC1_linux-aarch64
Ascend-cann-kernels-910b_8.0.RC1_linux
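
A quick way to confirm this stack is visible to PyTorch (a hedged sketch, not part of the original report):

```python
import torch
import torch_npu  # Ascend backend; importing it registers the "npu" device

print(torch.__version__)         # expect 2.1.0
print(torch_npu.__version__)     # expect 2.1.0.post3
print(torch.npu.is_available())  # True once the CANN toolkit/kernels and driver are installed
```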

Reproduction

INFO: "POST /v1/chat/completions HTTP/1.1" 500 Internal Server Error ERROR: Exception in ASGI application Traceback (most recent call last): File "miniconda3/envs/new_llama_factory_py3.10/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py", line 399, in run_asgi result = await app( # type: ignore[func-returns-value] File "miniconda3/envs/new_llama_factory_py3.10/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 70, in call return await self.app(scope, receive, send) File "miniconda3/envs/new_llama_factory_py3.10/lib/python3.10/site-packages/fastapi/applications.py", line 1054, in call await super().call(scope, receive, send) File "miniconda3/envs/new_llama_factory_py3.10/lib/python3.10/site-packages/starlette/applications.py", line 123, in call await self.middleware_stack(scope, receive, send) File "miniconda3/envs/new_llama_factory_py3.10/lib/python3.10/site-packages/starlette/middleware/errors.py", line 186, in call raise exc File "miniconda3/envs/new_llama_factory_py3.10/lib/python3.10/site-packages/starlette/middleware/errors.py", line 164, in call await self.app(scope, receive, _send) File "miniconda3/envs/new_llama_factory_py3.10/lib/python3.10/site-packages/starlette/middleware/cors.py", line 85, in call await self.app(scope, receive, send) File "miniconda3/envs/new_llama_factory_py3.10/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 65, in call await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send) File "miniconda3/envs/new_llama_factory_py3.10/lib/python3.10/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app raise exc File "miniconda3/envs/new_llama_factory_py3.10/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app await app(scope, receive, sender) File "miniconda3/envs/new_llama_factory_py3.10/lib/python3.10/site-packages/starlette/routing.py", line 756, in call await self.middleware_stack(scope, receive, send) File "miniconda3/envs/new_llama_factory_py3.10/lib/python3.10/site-packages/starlette/routing.py", line 776, in app await route.handle(scope, receive, send) File "miniconda3/envs/new_llama_factory_py3.10/lib/python3.10/site-packages/starlette/routing.py", line 297, in handle await self.app(scope, receive, send) File "miniconda3/envs/new_llama_factory_py3.10/lib/python3.10/site-packages/starlette/routing.py", line 77, in app await wrap_app_handling_exceptions(app, request)(scope, receive, send) File "miniconda3/envs/new_llama_factory_py3.10/lib/python3.10/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app raise exc File "miniconda3/envs/new_llama_factory_py3.10/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app await app(scope, receive, sender) File "miniconda3/envs/new_llama_factory_py3.10/lib/python3.10/site-packages/starlette/routing.py", line 72, in app response = await func(request) File "miniconda3/envs/new_llama_factory_py3.10/lib/python3.10/site-packages/fastapi/routing.py", line 278, in app raw_response = await run_endpoint_function( File "miniconda3/envs/new_llama_factory_py3.10/lib/python3.10/site-packages/fastapi/routing.py", line 191, in run_endpoint_function return await dependant.call(values) File "LLaMA-Factory/src/llamafactory/api/app.py", line 85, in create_chat_completion return await create_chat_completion_response(request, chat_model) File "LLaMA-Factory/src/llamafactory/api/chat.py", line 132, in create_chat_completion_response responses = await 
chat_model.achat( File "LLaMA-Factory/src/llamafactory/chat/chat_model.py", line 56, in achat return await self.engine.chat(messages, system, tools, image, input_kwargs) File "LLaMA-Factory/src/llamafactory/chat/hf_engine.py", line 277, in chat return await loop.run_in_executor(pool, self._chat, input_args) File "miniconda3/envs/new_llama_factory_py3.10/lib/python3.10/concurrent/futures/thread.py", line 58, in run result = self.fn(self.args, self.kwargs) File "miniconda3/envs/new_llama_factory_py3.10/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context return func(args, kwargs) File "LLaMA-Factory/src/llamafactory/chat/hf_engine.py", line 167, in _chat generate_output = model.generate(gen_kwargs) File "miniconda3/envs/new_llama_factory_py3.10/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context return func(args, kwargs) File "miniconda3/envs/new_llama_factory_py3.10/lib/python3.10/site-packages/transformers/generation/utils.py", line 1758, in generate result = self._sample( File "miniconda3/envs/new_llama_factory_py3.10/lib/python3.10/site-packages/transformers/generation/utils.py", line 2397, in _sample outputs = self( File "miniconda3/envs/new_llama_factory_py3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, kwargs) File "miniconda3/envs/new_llama_factory_py3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, *kwargs) File "hf_cache/modules/transformers_modules/glm4/modeling_chatglm.py", line 878, in forward transformer_outputs = self.transformer( File "miniconda3/envs/new_llama_factory_py3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(args, kwargs) File "miniconda3/envs/new_llama_factory_py3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(*args, kwargs) File "hf_cache/modules/transformers_modules/glm4/modeling_chatglm.py", line 760, in forward inputs_embeds = self.embedding(input_ids) File "miniconda3/envs/new_llama_factory_py3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, *kwargs) File "miniconda3/envs/new_llama_factory_py3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(args, kwargs) File "hf_cache/modules/transformers_modules/glm4/modeling_chatglm.py", line 698, in forward words_embeddings = self.word_embeddings(input_ids) File "miniconda3/envs/new_llama_factory_py3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, *kwargs) File "miniconda3/envs/new_llama_factory_py3.10/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl return forward_call(args, **kwargs) File "miniconda3/envs/new_llama_factory_py3.10/lib/python3.10/site-packages/torch/nn/modules/sparse.py", line 162, in forward return F.embedding( File "miniconda3/envs/new_llama_factory_py3.10/lib/python3.10/site-packages/torch/nn/functional.py", line 2233, in embedding return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) RuntimeError: call aclnnEmbedding failed, detail:EZ1001: 2024-06-08-21:30:37.466.137 weight not implemented for DT_BFLOAT16, should be in dtype support list 
[DT_DOUBLE,DT_FLOAT,DT_FLOAT16,DT_INT64,DT_INT32,DT_INT16,DT_INT8,DT_UINT8,DT_BOOL,DT_COMPLEX128,DT_COMPLEX64,].
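
For anyone hitting this: since the aclnnEmbedding kernel on CANN 8.0.RC1 rejects bfloat16 weights while float16 is in the supported list, loading the model in fp16 avoids the failing op. A minimal plain-transformers sketch (not LLaMA-Factory's own code; the model path is an assumption):

```python
import torch
import torch_npu  # registers the "npu" device with PyTorch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "THUDM/glm-4-9b-chat"  # assumed path; substitute your local checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,  # fp16 instead of bf16: aclnnEmbedding has no bf16 path here
    trust_remote_code=True,
).to("npu").eval()

inputs = tokenizer("你好", return_tensors="pt").to("npu")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```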

Expected behavior

No response

Others

No response

1737686924 commented 3 months ago

+1, same error when training on Ascend: RuntimeError: call aclnnCast failed, detail:EZ1001: 2024-06-10-00:01:59.325.061 self not implemented for DT_BFLOAT16, should be in dtype support list [DT_FLOAT16,DT_FLOAT,DT_DOUBLE,DT_INT8,DT_UINT8,DT_INT16,DT_INT32,DT_INT64,DT_UINT16,DT_UINT32,DT_UINT64,DT_BOOL,DT_COMPLEX64,DT_COMPLEX128,].
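
A possible workaround sketch for the training case (an assumption, not confirmed in this thread): switch mixed precision from bf16 to fp16 in the Hugging Face TrainingArguments that LLaMA-Factory builds on, since fp16 is in the kernel's supported dtype list. The output directory is a placeholder:

```python
from transformers import TrainingArguments

# fp16 mixed precision instead of bf16; DT_BFLOAT16 is missing from the
# aclnnCast support list on CANN 8.0.RC1
training_args = TrainingArguments(
    output_dir="./glm4_sft_output",  # placeholder
    bf16=False,
    fp16=True,
)
```

In a LLaMA-Factory config this should correspond to setting fp16: true instead of bf16: true.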

hiyouga commented 3 months ago

cc @statelesshz

zhoushaoxiang commented 3 months ago

Hit the same problem, looking for a fix:

  File "/home/ma-user/.cache/huggingface/modules/transformers_modules/glm-4-9b-chat/modeling_chatglm.py", line 86, in __init__
    inv_freq = 1.0 / (10000 ** (torch.arange(0, dim, 2, device=device).to(dtype=dtype) / dim))
RuntimeError: call aclnnCast failed, detail:EZ1001: The param dtype not implemented for DT_BFLOAT16, should be in dtype support list [DT_FLOAT16,DT_FLOAT,DT_DOUBLE,DT_INT8,DT_UINT8,DT_INT16,DT_INT32,DT_INT64,DT_BOOL,DT_COMPLEX64,DT_COMPLEX128,].
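
For context, a minimal sketch of this failing cast and an fp32 fallback (the dim value and device handling are assumptions; this is a reproduction, not the model's actual fix):

```python
import torch
import torch_npu  # registers the "npu" device

dim, dtype = 128, torch.bfloat16  # dim value is an assumption
base = torch.arange(0, dim, 2, device="npu")
try:
    # the line from modeling_chatglm.py; aclnnCast raises EZ1001 when dtype is bf16
    inv_freq = 1.0 / (10000 ** (base.to(dtype=dtype) / dim))
except RuntimeError:
    # fall back to float32 for the cast; fp32 is in the supported dtype list
    inv_freq = 1.0 / (10000 ** (base.to(dtype=torch.float32) / dim))
print(inv_freq.dtype)
```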

zhoushaoxiang commented 3 months ago

> +1, same error when training on Ascend: RuntimeError: call aclnnCast failed, detail:EZ1001: 2024-06-10-00:01:59.325.061 self not implemented for DT_BFLOAT16, should be in dtype support list [DT_FLOAT16,DT_FLOAT,DT_DOUBLE,DT_INT8,DT_UINT8,DT_INT16,DT_INT32,DT_INT64,DT_UINT16,DT_UINT32,DT_UINT64,DT_BOOL,DT_COMPLEX64,DT_COMPLEX128,].

Have you solved this yet?

hiyouga commented 3 months ago

see https://github.com/hiyouga/LLaMA-Factory/issues/4339