Closed liuruit closed 4 months ago
Why is it that when I select the Tongyi model, it fails to return the final evaluation result? Does this only work with OpenAI? Below is the error from the code I ran on Colab.
```
evaluating with [faithfulness]   0%|          | 0/1 [00:00<?, ?it/s]
ERROR:dashscope:Unsupported atom data type: <class 'langchain_core.prompts.chat.ChatPromptTemplate'>
  0%|          | 0/1 [00:00<?, ?it/s]
---------------------------------------------------------------------------
UnsupportedDataType                       Traceback (most recent call last)
<ipython-input-81-5fa2ac208107> in <cell line: 3>()
      1 from ragas.llama_index import evaluate
      2
----> 3 result = evaluate(query_engine, metrics, eval_questions, eval_answers)

25 frames
/usr/local/lib/python3.10/dist-packages/dashscope/io/input_output.py in resolve_input(input, is_encode_binary, custom_type_resolver)
    109         return custom_type_resolver[type(input)](input)
    110     else:
--> 111         raise UnsupportedDataType('Unsupported atom data type: %s' %
    112                                   type(input))

UnsupportedDataType: Unsupported atom data type: <class 'langchain_core.prompts.chat.ChatPromptTemplate'>
```
Which version of Ragas are you using, @liuruit? Sadly, llama-index is not supported in v0.1 at the moment - you can track #557 for updates.
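Since the answer depends on the installed Ragas version, a quick way to check it in the same Colab notebook is the sketch below. It uses only the standard library, so it works whether or not `ragas` exposes a `__version__` attribute; the package name `ragas` is taken from the traceback above.

```python
# Print the installed Ragas version so the maintainers can tell whether
# this is the v0.1 line (where the llama_index integration is missing).
from importlib.metadata import version, PackageNotFoundError

try:
    ragas_version = version("ragas")
    print(f"ragas {ragas_version}")
except PackageNotFoundError:
    # ragas is not installed in this environment at all
    print("ragas is not installed")
```

Including this output in the issue report makes it easier to see whether the failure matches the known v0.1 limitation.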
We'll get this fixed for you as fast as we can, @liuruit :)