eosphoros-ai / DB-GPT

AI Native Data App Development framework with AWEL(Agentic Workflow Expression Language) and Agents
http://docs.dbgpt.cn
MIT License

[Bug] [ChatExcel] NotImplementedError: The operator 'aten::isin.Tensor_Tensor_out' is not currently implemented for the MPS device. #1838

Open guojie0701 opened 3 months ago

guojie0701 commented 3 months ago

Search before asking

Operating system information

MacOS(M1, M2...)

Python version information

3.10

DB-GPT version

main

Related scenes

Installation Information

Device information

macbookpro m1 RAM 64G

Models information

LLM: glm-4-9b-chat embeddingmodel: text2vec-large-chinese

What happened

2024-08-17 21:08:57 guojie dbgpt.model.llm_out.hf_chat_llm[4218] INFO Predict with parameters: {'max_length': 128000, 'temperature': 0.8, 'streamer': &lt;transformers.generation.streamers.TextIteratorStreamer object at 0x17642d540&gt;, 'top_p': 1.0, 'do_sample': True} custom_stop_words: []
Exception in thread Thread-7 (generate):
Traceback (most recent call last):
  File "/Users/guojie/miniconda3/envs/dbgpt0510/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/Users/guojie/miniconda3/envs/dbgpt0510/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/Users/guojie/miniconda3/envs/dbgpt0510/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/Users/guojie/miniconda3/envs/dbgpt0510/lib/python3.10/site-packages/transformers/generation/utils.py", line 1713, in generate
    self._prepare_special_tokens(generation_config, kwargs_has_attention_mask, device=device)
  File "/Users/guojie/miniconda3/envs/dbgpt0510/lib/python3.10/site-packages/transformers/generation/utils.py", line 1562, in _prepare_special_tokens
    and torch.isin(elements=eos_token_tensor, test_elements=pad_token_tensor).any()
NotImplementedError: The operator 'aten::isin.Tensor_Tensor_out' is not currently implemented for the MPS device. If you want this op to be added in priority during the prototype phase of this feature, please comment on https://github.com/pytorch/pytorch/issues/77764. As a temporary fix, you can set the environment variable PYTORCH_ENABLE_MPS_FALLBACK=1 to use the CPU as a fallback for this op. WARNING: this will be slower than running natively on MPS.
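As the error message itself suggests, a temporary workaround is to enable PyTorch's CPU fallback for unsupported MPS ops before starting DB-GPT. A minimal sketch (run in the same shell session you launch the webserver from):

```shell
# Temporary workaround from the PyTorch error message: route unsupported
# MPS ops (such as aten::isin) through the CPU. Slower, but avoids the crash.
export PYTORCH_ENABLE_MPS_FALLBACK=1
```

After exporting the variable, start DB-GPT from that same shell. Upgrading `torch` may also help, since newer PyTorch releases implement more ops natively on MPS, but that is not guaranteed for this specific op.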

What you expected to happen

What should I do to fix this problem if I want to chat with an Excel file? Thanks very much.
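If setting the variable in the shell is inconvenient, an in-process alternative (a sketch, assuming you can edit whatever entry script starts the model worker) is to set it before `torch` is imported anywhere:

```python
import os

# Must run before any `import torch` in the process; otherwise the MPS
# backend may already be initialized without the CPU fallback enabled.
os.environ.setdefault("PYTORCH_ENABLE_MPS_FALLBACK", "1")
```

Place this at the very top of the entry script, above all other imports, so no module pulls in `torch` first.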

How to reproduce

Just install DB-GPT on a MacBook Pro M1, then use Chat Excel.

Additional context

No response

Are you willing to submit PR?

github-actions[bot] commented 2 months ago

This issue has been marked as stale, because it has been over 30 days without any activity.