modelscope / evalscope

A streamlined and customizable framework for efficient large model evaluation and performance benchmarking
https://evalscope.readthedocs.io/en/latest/
Apache License 2.0

RAGAS evaluation - No module named 'evalscope.backend.rag_eval.utils' #170

Open jackqdldd opened 2 days ago

jackqdldd commented 2 days ago

File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/evalscope/backend/rag_eval/init.py", line 1, in from evalscope.backend.rag_eval.utils.embedding import EmbeddingModel ModuleNotFoundError: No module named 'evalscope.backend.rag_eval.utils'

Yunnglin commented 2 days ago

How did you install evalscope? If you used the pip package, please try installing from the main branch source instead; the pip package may lag behind the repository.

jackqdldd commented 2 days ago

Installing from source fixed the issue above, but now I get an error at runtime:

```
testset_generation.py:132: LangChainDeprecationWarning: The class `UnstructuredFileLoader` was deprecated in LangChain 0.2.8 and will be removed in 1.0. An updated version of the class exists in the `langchain-unstructured` package and should be used instead. To use it run `pip install -U langchain-unstructured` and import as `from langchain_unstructured import UnstructuredLoader`.
```

After upgrading as the warning suggests and rerunning, it still fails:

```
File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/pydantic/main.py", line 212, in __init__
    validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
pydantic_core._pydantic_core.ValidationError: 2 validation errors for LocalLLM
model_name_or_path
  Field required [type=missing, input_value={'model_name': '/home/alg...': {'temperature': 0.2}}, input_type=dict]
    For further information visit https://errors.pydantic.dev/2.9/v/missing
model
  Field required [type=missing, input_value={'model_name': '/home/alg...': {'temperature': 0.2}}, input_type=dict]
    For further information visit https://errors.pydantic.dev/2.9/v/missing
```

Here is the code I'm running:

```python
generate_testset_task_cfg = {
    "eval_backend": "RAGEval",
    "eval_config": {
        "tool": "RAGAS",
        "testset_generation": {
            "docs": [
                "/home/alg/qdl/rags/docs/zhidu_1.txt",
                "/home/alg/qdl/rags/docs/zhidu_2.txt",
                "/home/alg/qdl/rags/docs/zhidu_3.txt",
            ],
            "test_size": 10,
            "output_file": "outputs/testset.json",
            "distribution": {"simple": 0.1, "multi_context": 0.4, "reasoning": 0.5},
            "generator_llm": {
                "model_name_or_path": "/home/alg/qdl/model/Qwen2_5-7B-Instruct",
                "template_type": "qwen",
                "generation_config": {"temperature": 0.2},
            },
            "embeddings": {
                "model_name_or_path": "/home/alg/qdl/model/CompassJudger-1-7B-Instruct",
            },
            "language": "chinese",
        }
    },
}

from evalscope.run import run_task
from evalscope.utils.logger import get_logger

logger = get_logger()

# Run task
run_task(task_cfg=generate_testset_task_cfg)
```

Yunnglin commented 1 day ago
1. The passage below is a warning, not an error, and does not affect usage:

   ```
   testset_generation.py:132: LangChainDeprecationWarning: The class `UnstructuredFileLoader` was deprecated in LangChain 0.2.8 and will be removed in 1.0. An updated version of the class exists in the `langchain-unstructured` package and should be used instead. To use it run `pip install -U langchain-unstructured` and import as `from langchain_unstructured import UnstructuredLoader`.
   ```

   Upgrading it can make the environment incompatible: ragas currently requires a langchain version below 0.3. The reference langchain versions are listed below; run `pip install -e '.[rag]'` in the evalscope directory that contains setup.py.

   ```
   langchain                                0.2.16
   langchain-chroma                         0.1.4
   langchain-community                      0.2.16
   langchain-core                           0.2.40
   langchain-openai                         0.1.23
   langchain-text-splitters                 0.2.4
   langchain-unstructured                   0.1.4
   ```

2. The embeddings section should use an embedding model rather than CompassJudger-1-7B-Instruct, for example AI-ModelScope/bge-large-zh (see the sketch below this list).
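
A minimal sketch of what item 2 means in practice, using the embedding model ID suggested above (a local path to already downloaded weights works the same way):

```python
# Use a dedicated embedding model for the `embeddings` section, not a chat/judge model.
embeddings_cfg = {
    "model_name_or_path": "AI-ModelScope/bge-large-zh",  # or a local path to the downloaded weights
}
```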

jackqdldd commented 1 day ago

After updating it runs, but now I get this error:

```
File "/home/alg/qdl/evalscope/package/evalscope/models/model_adapter.py", line 422, in __init__
    self.generation_config.update(**custom_generation_config.to_dict())
AttributeError: 'dict' object has no attribute 'to_dict'
```

Yunnglin commented 1 day ago

That looks like a bug on our side; we'll fix it.
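
For reference, a minimal sketch of the kind of guard that avoids the `to_dict` error, written as a hypothetical standalone helper rather than the actual patch in model_adapter.py:

```python
from typing import Any, Dict, Optional, Union

from transformers import GenerationConfig


def merge_generation_config(base: GenerationConfig,
                            custom: Optional[Union[Dict[str, Any], GenerationConfig]]) -> GenerationConfig:
    """Merge a user-supplied generation config (plain dict or GenerationConfig) into `base`."""
    if custom is None:
        return base
    overrides = custom if isinstance(custom, dict) else custom.to_dict()
    base.update(**overrides)  # GenerationConfig.update accepts keyword overrides
    return base
```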

jackqdldd commented 1 day ago

Continuing, I hit this error:

```
File "/home/alg/qdl/evalscope/package/evalscope/backend/rag_eval/ragas/tasks/testset_generation.py", line 162, in generate_testset
    generator = TestsetGenerator.from_langchain(generator_llm)
TypeError: TestsetGenerator.from_langchain() missing 1 required positional argument: 'embedding_model'
```

After updating, it runs further, but an error occurs during execution: [screenshot]

Yunnglin commented 1 day ago

> Continuing, I hit this error: `TypeError: TestsetGenerator.from_langchain() missing 1 required positional argument: 'embedding_model'`
>
> After updating, it runs further, but an error occurs during execution: [screenshot]

This looks like it was caused by the new ragas version released yesterday; I've fixed it along with the other issue.

Yunnglin commented 1 day ago

Please pull the latest code and try again.

jackqdldd commented 1 day ago

[screenshot of the error]

jackqdldd commented 1 day ago

[screenshot of the error]

Yunnglin commented 1 day ago

Try setting `"generation_config": {"do_sample": True, "temperature": 0.1, "max_new_tokens": 2048}` as a reference, or remove generation_config entirely (see the sketch below).
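
A minimal sketch of the suggested `generator_llm` block with those settings, reusing the model path from the earlier config (adjust to your environment):

```python
generator_llm_cfg = {
    "model_name_or_path": "/home/alg/qdl/model/Qwen2_5-7B-Instruct",  # path taken from the config above
    "template_type": "qwen",
    # Sampling settings suggested above; alternatively drop "generation_config" entirely.
    "generation_config": {"do_sample": True, "temperature": 0.1, "max_new_tokens": 2048},
}
```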

jackqdldd commented 1 day ago

```
Applying [SummaryExtractor, HeadlinesExtractor]:   0%| | 0/2 [00:00<?, ?it/s]
2024-10-31 13:48:43,560 - ragas.testset.transforms.engine - ERROR - unable to apply transformation: 'Generation' object has no attribute 'message'
Applying [SummaryExtractor, HeadlinesExtractor]:  50%|█████ | 1/2 [00:08<00:08, 8.96s/it]
2024-10-31 13:48:43,560 - ragas.testset.transforms.engine - ERROR - unable to apply transformation: 'Generation' object has no attribute 'message'
2024-10-31 13:49:41,834 - ragas.testset.transforms.engine - ERROR - unable to apply transformation: 'Generation' object has no attribute 'message'
Applying EmbeddingExtractor:   0%| | 0/1 [00:00<?, ?it/s]
2024-10-31 13:49:41,837 - ragas.testset.transforms.engine - ERROR - unable to apply transformation: node.property('summary') must be a string, found '<class 'NoneType'>'
Applying HeadlineSplitter:   0%| | 0/1 [00:00<?, ?it/s]
2024-10-31 13:49:41,837 - ragas.testset.transforms.engine - ERROR - unable to apply transformation: 'headlines' property not found in this node
Batches: 100%|██████████| 1/1 [00:00<00:00, 12.90it/s]
Applying [EmbeddingExtractor, KeyphrasesExtractor, TitleExtractor]:   0%| | 0/3 [00:00<?, ?it/s]
2024-10-31 13:49:49,124 - ragas.testset.transforms.engine - ERROR - unable to apply transformation: 'Generation' object has no attribute 'message'
Applying [EmbeddingExtractor, KeyphrasesExtractor, TitleExtractor]:  67%|██████▋ | 2/3 [00:07<00:03, 3.64s/it]
2024-10-31 13:49:53,289 - ragas.testset.transforms.engine - ERROR - unable to apply transformation: 'Generation' object has no attribute 'message'
Traceback (most recent call last):
  File "/home/alg/qdl/rags/evalscope_ragas_generate.py", line 32, in <module>
    run_task(task_cfg=generate_testset_task_cfg)
  File "/home/alg/qdl/evalscope/package/evalscope/run.py", line 213, in run_task
    rag_eval_backend_manager.run()
  File "/home/alg/qdl/evalscope/package/evalscope/backend/rag_eval/backend_manager.py", line 75, in run
    self.run_ragas(testset_args, eval_args)
  File "/home/alg/qdl/evalscope/package/evalscope/backend/rag_eval/backend_manager.py", line 54, in run_ragas
    generate_testset(TestsetGenerationArguments(**testset_args))
  File "/home/alg/qdl/evalscope/package/evalscope/backend/rag_eval/ragas/tasks/testset_generation.py", line 167, in generate_testset
    testset = generator.generate_with_langchain_docs(
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/ragas/testset/synthesizers/generate.py", line 126, in generate_with_langchain_docs
    apply_transforms(kg, transforms)
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/ragas/testset/transforms/engine.py", line 106, in apply_transforms
    transform.generate_execution_plan(kg),
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/ragas/testset/transforms/base.py", line 325, in generate_execution_plan
    filtered_kg = self.filter(kg)
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/ragas/testset/transforms/relationship_builders/cosine.py", line 122, in filter
    raise ValueError(f"Node {node.id} has no {self.property_name}")
ValueError: Node a004b0db-60ab-4346-8b05-306d5dc01cec has no summary_embedding
```

Yunnglin commented 1 day ago

Please switch the generator_llm to a model with stronger instruction-following ability, for example a 72B int4-quantized model. Smaller models may produce malformed output on these complex prompts, which leads to the errors.

jackqdldd commented 1 day ago

Doesn't Qwen2_5-7B-Instruct work? That's the model used in the docs.

Yunnglin commented 1 day ago

I tried it and it doesn't really work; I get the same error 😭. But switching to a larger model does work, and closed-source models like gpt4o are also an option.

I'll update the documentation accordingly.

Yunnglin commented 1 day ago

The environment is missing a package that GPTQ needs; try installing it as the error message suggests: `pip install optimum`.

jackqdldd commented 1 day ago

Can GenerationConfig not be empty?

```
Traceback (most recent call last):
  File "/home/alg/qdl/rags/evalscope_ragas_generate.py", line 32, in <module>
    run_task(task_cfg=generate_testset_task_cfg)
  File "/home/alg/qdl/evalscope/package/evalscope/run.py", line 213, in run_task
    rag_eval_backend_manager.run()
  File "/home/alg/qdl/evalscope/package/evalscope/backend/rag_eval/backend_manager.py", line 75, in run
    self.run_ragas(testset_args, eval_args)
  File "/home/alg/qdl/evalscope/package/evalscope/backend/rag_eval/backend_manager.py", line 54, in run_ragas
    generate_testset(TestsetGenerationArguments(**testset_args))
  File "/home/alg/qdl/evalscope/package/evalscope/backend/rag_eval/ragas/tasks/testset_generation.py", line 147, in generate_testset
    generator_llm = LLM.load(args.generator_llm)
  File "/home/alg/qdl/evalscope/package/evalscope/backend/rag_eval/utils/llm.py", line 21, in load
    return LocalLLM(**kw)
  File "/home/alg/qdl/evalscope/package/evalscope/backend/rag_eval/utils/llm.py", line 43, in __init__
    generation_config=GenerationConfig(**self.generation_config),
TypeError: modelscope.utils.hf_util.GenerationConfig() argument after ** must be a mapping, not NoneType
```

```python
generate_testset_task_cfg = {
    "eval_backend": "RAGEval",
    "eval_config": {
        "tool": "RAGAS",
        "testset_generation": {
            "docs": [
                "/home/alg/qdl/rags/docs/zhidu_1.txt",
                "/home/alg/qdl/rags/docs/zhidu_2.txt",
                "/home/alg/qdl/rags/docs/zhidu_3.txt",
            ],
            "test_size": 10,
            "output_file": "outputs/testset.json",
            "distribution": {"simple": 0.1, "multi_context": 0.4, "reasoning": 0.5},
            "generator_llm": {
                "model_name_or_path": "/home/alg/qdl/model/Qwen2.5-72B-Instruct-GPTQ-Int4",
                "template_type": "qwen",
            },
            "embeddings": {
                "model_name_or_path": "/home/alg/qdl/model/BAAI/bge-large-zh-v1.5",
            },
            "language": "chinese",
        }
    }
}
```

Yunnglin commented 1 day ago

Please pull the code again and give it another try.
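
The traceback above points at `GenerationConfig(**self.generation_config)` being called while `generation_config` is unset; a minimal sketch of a tolerant wrapper (hypothetical, not necessarily the fix that was shipped):

```python
from typing import Any, Dict, Optional

from transformers import GenerationConfig


def build_generation_config(user_cfg: Optional[Dict[str, Any]]) -> GenerationConfig:
    """Build a GenerationConfig even when the user supplied no generation settings."""
    # Fall back to an empty dict so GenerationConfig(**...) always receives a mapping.
    return GenerationConfig(**(user_cfg or {}))
```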

jackqdldd commented 1 day ago

It keeps getting stuck here: [screenshots]

Yunnglin commented 1 day ago

Inference is fairly slow, so please wait a bit longer, or try trimming the content of the txt documents to see whether the run completes.

jackqdldd commented 1 day ago

```
Generating Scenarios:   0%| | 0/3 [00:00<?, ?it/s]
2024-10-31 16:49:32,896 - ragas.testset.synthesizers.abstract_query - INFO - found 0 clusters
2024-10-31 16:49:32,896 - ragas.testset.synthesizers.abstract_query - INFO - generating 4 common_themes
2024-10-31 16:49:32,901 - ragas.testset.synthesizers.abstract_query - INFO - found 3 clusters
2024-10-31 16:49:32,901 - ragas.testset.synthesizers.abstract_query - INFO - generating 2 themes
Traceback (most recent call last):
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/openai/_base_client.py", line 1572, in _request response = await self._client.send(
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/httpx/_client.py", line 1674, in send response = await self._send_handling_auth(
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/httpx/_client.py", line 1702, in _send_handling_auth response = await self._send_handling_redirects(
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/httpx/_client.py", line 1739, in _send_handling_redirects response = await self._send_single_request(request)
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/httpx/_client.py", line 1776, in _send_single_request response = await transport.handle_async_request(request)
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/httpx/_transports/default.py", line 377, in handle_async_request resp = await self._pool.handle_async_request(req)
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/httpcore/_async/connection_pool.py", line 216, in handle_async_request raise exc from None
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/httpcore/_async/connection_pool.py", line 196, in handle_async_request response = await connection.handle_async_request(
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/httpcore/_async/connection.py", line 101, in handle_async_request return await self._connection.handle_async_request(request)
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/httpcore/_async/http11.py", line 142, in handle_async_request await self._response_closed()
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/httpcore/_async/http11.py", line 257, in _response_closed await self.aclose()
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/httpcore/_async/http11.py", line 265, in aclose await self._network_stream.aclose()
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/httpcore/_backends/anyio.py", line 55, in aclose await self._stream.aclose()
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 1258, in aclose self._transport.close()
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/asyncio/selector_events.py", line 706, in close self._loop.call_soon(self._call_connection_lost, None)
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/asyncio/base_events.py", line 753, in call_soon self._check_closed()
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/asyncio/base_events.py", line 515, in _check_closed raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/alg/qdl/rags/evalscope_ragas_generate.py", line 56, in <module> run_task(task_cfg=generate_testset_task_cfg)
  File "/home/alg/qdl/evalscope/package/evalscope/run.py", line 213, in run_task rag_eval_backend_manager.run()
  File "/home/alg/qdl/evalscope/package/evalscope/backend/rag_eval/backend_manager.py", line 75, in run self.run_ragas(testset_args, eval_args)
  File "/home/alg/qdl/evalscope/package/evalscope/backend/rag_eval/backend_manager.py", line 54, in run_ragas generate_testset(TestsetGenerationArguments(**testset_args))
  File "/home/alg/qdl/evalscope/package/evalscope/backend/rag_eval/ragas/tasks/testset_generation.py", line 167, in generate_testset testset = generator.generate_with_langchain_docs(
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/ragas/testset/synthesizers/generate.py", line 129, in generate_with_langchain_docs return self.generate(
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/ragas/testset/synthesizers/generate.py", line 254, in generate raise e
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/ragas/testset/synthesizers/generate.py", line 251, in generate scenario_sample_list: t.List[t.List[BaseScenario]] = exec.results()
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/ragas/executor.py", line 146, in results results = asyncio.run(_aresults())
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/asyncio/runners.py", line 44, in run return loop.run_until_complete(main)
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete return future.result()
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/ragas/executor.py", line 141, in _aresults r = await future
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/asyncio/tasks.py", line 571, in _wait_for_one return f.result()  # May raise f.exception().
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/ragas/executor.py", line 36, in sema_coro return await coro
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/ragas/executor.py", line 81, in wrapped_callable_async raise e
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/ragas/executor.py", line 78, in wrapped_callable_async result = await callable(*args, **kwargs)
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/ragas/testset/synthesizers/base.py", line 89, in generate_scenarios scenarios = await self._generate_scenarios(
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/ragas/testset/synthesizers/abstract_query.py", line 263, in _generate_scenarios common_concepts: t.List[Concepts] = run_async_batch(
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/ragas/executor.py", line 167, in run_async_batch return executor.results()
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/ragas/executor.py", line 146, in results results = asyncio.run(_aresults())
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/nest_asyncio.py", line 30, in run return loop.run_until_complete(task)
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/nest_asyncio.py", line 98, in run_until_complete return f.result()
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/asyncio/futures.py", line 201, in result raise self._exception.with_traceback(self._exception_tb)
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/asyncio/tasks.py", line 232, in __step result = coro.send(None)
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/ragas/executor.py", line 141, in _aresults r = await future
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/asyncio/tasks.py", line 571, in _wait_for_one return f.result()  # May raise f.exception().
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/asyncio/futures.py", line 201, in result raise self._exception.with_traceback(self._exception_tb)
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/asyncio/tasks.py", line 232, in __step result = coro.send(None)
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/ragas/executor.py", line 36, in sema_coro return await coro
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/ragas/executor.py", line 81, in wrapped_callable_async raise e
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/ragas/executor.py", line 78, in wrapped_callable_async result = await callable(*args, **kwargs)
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/ragas/prompt/pydantic_prompt.py", line 130, in generate output_single = await self.generate_multiple(
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/ragas/prompt/pydantic_prompt.py", line 190, in generate_multiple resp = await llm.generate(
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/ragas/llms/base.py", line 100, in generate result = await agenerate_text_with_retry(
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/tenacity/asyncio/__init__.py", line 189, in async_wrapped return await copy(fn, *args, **kwargs)
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/tenacity/asyncio/__init__.py", line 111, in __call__ do = await self.iter(retry_state=retry_state)
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/tenacity/asyncio/__init__.py", line 153, in iter result = await action(retry_state)
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/tenacity/_utils.py", line 99, in inner return call(*args, **kwargs)
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/tenacity/__init__.py", line 398, in <lambda> self._add_action_func(lambda rs: rs.outcome.result())
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/concurrent/futures/_base.py", line 451, in result return self.__get_result()
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result raise self._exception
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/tenacity/asyncio/__init__.py", line 114, in __call__ result = await fn(*args, **kwargs)
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/ragas/llms/base.py", line 220, in agenerate_text return await self.langchain_llm.agenerate_prompt(
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 787, in agenerate_prompt return await self.agenerate(
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 747, in agenerate raise exceptions[0]
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/asyncio/tasks.py", line 232, in __step result = coro.send(None)
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 923, in _agenerate_with_cache result = await self._agenerate(
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/langchain_openai/chat_models/base.py", line 815, in _agenerate response = await self.async_client.create(**payload)
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/openai/resources/chat/completions.py", line 1490, in create return await self._post(
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/openai/_base_client.py", line 1839, in post return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/openai/_base_client.py", line 1533, in request return await self._request(
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/openai/_base_client.py", line 1606, in _request raise APIConnectionError(request=request) from err
openai.APIConnectionError: Connection error.
2024-10-31 16:49:33,464 - asyncio - ERROR - Task exception was never retrieved
future: <Task finished name='Task-334' coro=<as_completed.<locals>.sema_coro() done, defined at /home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/ragas/executor.py:34> exception=RuntimeError('Event loop stopped before Future completed.')>
Traceback (most recent call last):
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/ragas/executor.py", line 36, in sema_coro return await coro
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/ragas/executor.py", line 81, in wrapped_callable_async raise e
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/ragas/executor.py", line 78, in wrapped_callable_async result = await callable(*args, **kwargs)
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/ragas/testset/synthesizers/base.py", line 89, in generate_scenarios scenarios = await self._generate_scenarios(
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/ragas/testset/synthesizers/abstract_query.py", line 119, in _generate_scenarios themes: t.List[Themes] = run_async_batch(
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/ragas/executor.py", line 167, in run_async_batch return executor.results()
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/ragas/executor.py", line 146, in results results = asyncio.run(_aresults())
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/nest_asyncio.py", line 30, in run return loop.run_until_complete(task)
  File "/home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/nest_asyncio.py", line 96, in run_until_complete raise RuntimeError(
RuntimeError: Event loop stopped before Future completed.
```

jackqdldd commented 1 day ago

After switching the LLM to an API model, the `Applying [EmbeddingExtractor, KeyphrasesExtractor, TitleExtractor]` stage completes, but `Generating Scenarios` errors out.

Yunnglin commented 1 day ago

Which API model are you using, and could there be a network issue?

jackqdldd commented 1 day ago

The earlier calls all go through fine. [screenshot]

Yunnglin commented 1 day ago

Roughly how long does it run before the error appears? Try increasing the timeout and max_wait values here: https://github.com/modelscope/evalscope/blob/da0d9447aaf8da52231d7a702dfef63163f12510/evalscope/backend/rag_eval/ragas/tasks/testset_generation.py#L164 (see the sketch below).
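
Those two knobs usually live in a ragas `RunConfig`; a minimal sketch of more generous values, assuming the evalscope task builds a `RunConfig` at the linked line (worth verifying in the source):

```python
from ragas.run_config import RunConfig

# Generous limits for slow local inference; tune to your hardware and model size.
run_config = RunConfig(
    timeout=600,    # seconds allowed per LLM call
    max_wait=120,   # maximum backoff between retries
    max_retries=10,
)
```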

jackqdldd commented 1 day ago

No luck; it fails immediately every time it reaches the `Generating Scenarios: 0%|` step, while everything before that works fine.

jackqdldd commented 1 day ago

```
Task exception was never retrieved
future: <Task finished name='Task-334' coro=<as_completed.<locals>.sema_coro() done, defined at /home/alg/anaconda3/envs/evalscope/lib/python3.10/site-packages/ragas/executor.py:34> exception=RuntimeError('Event loop stopped before Future completed.')>
```

Yunnglin commented 1 day ago

I can't reproduce this on my side; it may be a ragas issue, so you could report it to them: https://github.com/explodinggradients/ragas/issues

jackqdldd commented 1 day ago

Which ragas version are you using?

Yunnglin commented 1 day ago

The latest version, 0.2.3.
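
A quick way to confirm the version installed in your own environment:

```python
from importlib.metadata import version

# Print the installed ragas package version, e.g. "0.2.3".
print(version("ragas"))
```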

jackqdldd commented 22 hours ago

I have two 46 GB L20 GPUs. Is it enough to set `os.environ['CUDA_VISIBLE_DEVICES'] = '6,7'` in the script?

jackqdldd commented 22 hours ago

[screenshot] It's stuck.

Yunnglin commented 22 hours ago

I'd suggest serving the model with vLLM and pointing the config at the corresponding URL; that should be faster (see the illustration below).
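
As a generic illustration of what 'use the corresponding URL' means (the field names in the evalscope config may differ, so treat this as an assumption and check the RAGEval docs): a vLLM OpenAI-compatible endpoint can be consumed through any OpenAI-style client, e.g. langchain's `ChatOpenAI`:

```python
from langchain_openai import ChatOpenAI

# Hypothetical local vLLM OpenAI-compatible server; model name and port depend on how it was launched.
llm = ChatOpenAI(
    model="Qwen2.5-72B-Instruct-GPTQ-Int4",
    base_url="http://localhost:8000/v1",
    api_key="EMPTY",  # vLLM's server accepts a placeholder key by default
)
print(llm.invoke("你好").content)
```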

jackqdldd commented 22 hours ago

So the LLM via a URL, and the embeddings loaded locally?

Yunnglin commented 21 hours ago

Yes, loading the embedding model locally is fine; embedding computation is relatively fast.