Running demo.py fails with: "RuntimeError: (NotFound) The kernel with key (GPU, NCHW, float16) of kernel `multinomial` is not registered and fail to fallback to CPU one." #75
The error occurs when running in Baidu's cloud environment (BML CoderLab):
(Comments were added to demo.py locally, so the line numbers shown below do not match; the call path is get_knowledge_based_answer -> knowledge_chain)
Traceback (most recent call last):
File "/home/aistudio/LangChain-ChatGLM-Webui/paddlepaddle/demo.py", line 123, in <module>
resp = get_knowledge_based_answer(
File "/home/aistudio/LangChain-ChatGLM-Webui/paddlepaddle/demo.py", line 77, in get_knowledge_based_answer
result = knowledge_chain({"query": query})
File "/opt/conda/envs/python35-paddle120-env/lib/python3.9/site-packages/langchain/chains/base.py", line 116, in __call__
raise e
File "/opt/conda/envs/python35-paddle120-env/lib/python3.9/site-packages/langchain/chains/base.py", line 113, in __call__
outputs = self._call(inputs)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.9/site-packages/langchain/chains/retrieval_qa/base.py", line 110, in _call
answer = self.combine_documents_chain.run(
File "/opt/conda/envs/python35-paddle120-env/lib/python3.9/site-packages/langchain/chains/base.py", line 216, in run
return self(kwargs)[self.output_keys[0]]
File "/opt/conda/envs/python35-paddle120-env/lib/python3.9/site-packages/langchain/chains/base.py", line 116, in __call__
raise e
File "/opt/conda/envs/python35-paddle120-env/lib/python3.9/site-packages/langchain/chains/base.py", line 113, in __call__
outputs = self._call(inputs)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.9/site-packages/langchain/chains/combine_documents/base.py", line 75, in _call
output, extra_return_dict = self.combine_docs(docs, other_keys)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.9/site-packages/langchain/chains/combine_documents/stuff.py", line 83, in combine_docs
return self.llm_chain.predict(inputs), {}
File "/opt/conda/envs/python35-paddle120-env/lib/python3.9/site-packages/langchain/chains/llm.py", line 151, in predict
return self(kwargs)[self.output_key]
File "/opt/conda/envs/python35-paddle120-env/lib/python3.9/site-packages/langchain/chains/base.py", line 116, in __call__
raise e
File "/opt/conda/envs/python35-paddle120-env/lib/python3.9/site-packages/langchain/chains/base.py", line 113, in __call__
outputs = self._call(inputs)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.9/site-packages/langchain/chains/llm.py", line 57, in _call
return self.apply([inputs])[0]
File "/opt/conda/envs/python35-paddle120-env/lib/python3.9/site-packages/langchain/chains/llm.py", line 118, in apply
response = self.generate(input_list)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.9/site-packages/langchain/chains/llm.py", line 62, in generate
return self.llm.generate_prompt(prompts, stop)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.9/site-packages/langchain/llms/base.py", line 107, in generate_prompt
return self.generate(prompt_strings, stop=stop)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.9/site-packages/langchain/llms/base.py", line 140, in generate
raise e
File "/opt/conda/envs/python35-paddle120-env/lib/python3.9/site-packages/langchain/llms/base.py", line 137, in generate
output = self._generate(prompts, stop=stop)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.9/site-packages/langchain/llms/base.py", line 324, in _generate
text = self._call(prompt, stop=stop)
File "/home/aistudio/LangChain-ChatGLM-Webui/paddlepaddle/chatllm.py", line 31, in _call
results = chatbot(prompt_list)
File "/home/aistudio/.data/webide/pip/lib/python3.9/site-packages/paddlenlp/taskflow/taskflow.py", line 802, in __call__
results = self.task_instance(inputs)
File "/home/aistudio/.data/webide/pip/lib/python3.9/site-packages/paddlenlp/taskflow/task.py", line 522, in __call__
outputs = self._run_model(inputs)
File "/home/aistudio/.data/webide/pip/lib/python3.9/site-packages/paddlenlp/taskflow/text2text_generation.py", line 196, in _run_model
result = self._model.generate(
File "", line 2, in generate
File "/opt/conda/envs/python35-paddle120-env/lib/python3.9/site-packages/paddle/fluid/dygraph/base.py", line 375, in _decorate_function
return func(*args, **kwargs)
File "/home/aistudio/.data/webide/pip/lib/python3.9/site-packages/paddlenlp/transformers/generation_utils.py", line 947, in generate
return self.sample(
File "/home/aistudio/.data/webide/pip/lib/python3.9/site-packages/paddlenlp/transformers/generation_utils.py", line 1136, in sample
next_tokens = paddle.multinomial(probs)
File "/opt/conda/envs/python35-paddle120-env/lib/python3.9/site-packages/paddle/tensor/random.py", line 186, in multinomial
return _C_ops.multinomial(x, num_samples, replacement)
RuntimeError: (NotFound) The kernel with key (GPU, NCHW, float16) of kernel multinomial is not registered and fail to fallback to CPU one.
[Hint: Expected kernel_iter != iter->second.end(), but received kernel_iter == iter->second.end().] (at /paddle/paddle/phi/core/kernel_factory.cc:168)
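The traceback shows that PaddleNLP's `sample()` (generation_utils.py:1136) calls `paddle.multinomial(probs)` on a float16 tensor, and Paddle ships no float16 GPU kernel for `multinomial`. One possible workaround (an assumption, not a verified fix) is to upcast the probabilities to float32 right before sampling, or to avoid loading the model in float16 in the first place. A minimal sketch of the cast-then-sample idea, with NumPy standing in for paddle (the paddle equivalent would be `paddle.multinomial(probs.astype("float32"))`):

```python
import numpy as np

def sample_next_token(probs, rng=None):
    """Sample one index from a (possibly float16) probability vector.

    paddle.multinomial has no float16 GPU kernel, so the key step is the
    upcast to float32 before the categorical draw. NumPy stands in for
    paddle here; the function name is hypothetical.
    """
    rng = rng or np.random.default_rng(0)
    p = np.asarray(probs, dtype=np.float32)  # fp16 -> fp32 upcast
    p = p / p.sum()                          # renormalize after the cast
    return int(rng.choice(len(p), p=p))

token = sample_next_token(np.array([0.1, 0.7, 0.2], dtype=np.float16))
```

If patching `generation_utils.py` directly, the analogous change would be replacing `paddle.multinomial(probs)` with `paddle.multinomial(probs.astype("float32"))` at the line shown in the traceback.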