Open KylinMountain opened 1 day ago
@KylinMountain Hello there! I'm here to help you with any bugs, questions, or contributions you may have. Let's get to the bottom of this together!
I found some similar issues that might help you resolve the "Event loop is closed" error and connection issues:
Open Issue: How can I set config parameters for test data generation
The suggested solution is to configure max_workers, rate_limits, and timeouts in a RunConfig to avoid connection timeout errors. Example:
from ragas.testset.generator import TestsetGenerator
from ragas.testset.evolutions import simple, reasoning, multi_context
from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from ragas.run_config import RunConfig
generator_llm = ChatOpenAI(model="gpt-3.5-turbo-16k")
critic_llm = ChatOpenAI(model="gpt-4")
embeddings = OpenAIEmbeddings()

run_config = RunConfig(
    max_workers=5,
    rate_limits={"requests_per_minute": 60},
    timeouts={"connect_timeout": 10, "read_timeout": 30},
)

generator = TestsetGenerator.from_langchain(
    generator_llm=generator_llm,
    critic_llm=critic_llm,
    embeddings=embeddings,
    run_config=run_config,
)

testset = generator.generate_with_langchain_docs(
    documents=documents,
    test_size=10,
    distributions={simple: 0.5, reasoning: 0.3, multi_context: 0.2},
    with_debugging_logs=False,
    is_async=True,
    raise_exceptions=False,
    run_config=run_config,
)
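For context on what max_workers buys you: bounding the number of concurrent requests is what keeps a rate-limited provider from timing out under load. Here is a minimal stdlib sketch of the same idea (the limited_gather helper and job coroutine are illustrative names, not part of ragas):

```python
import asyncio

async def limited_gather(coros, max_workers=5):
    # Bound concurrency with a semaphore, similar in spirit to
    # RunConfig(max_workers=...) capping parallel LLM calls.
    sem = asyncio.Semaphore(max_workers)

    async def run(coro):
        async with sem:
            return await coro

    # gather preserves input order regardless of completion order.
    return await asyncio.gather(*(run(c) for c in coros))

async def job(i):
    await asyncio.sleep(0)  # stand-in for a network call
    return i * 2

results = asyncio.run(limited_gather([job(i) for i in range(5)], max_workers=2))
print(results)  # [0, 2, 4, 6, 8]
```

With max_workers=2, at most two jobs are in flight at once; the rest wait on the semaphore.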
Closed Issue: Given code @ https://github.com/explodinggradients/ragas is not working and raises an exception
Use the nest_asyncio library to allow nested calls to asyncio.run(). Example:
import nest_asyncio
nest_asyncio.apply()
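The reason nest_asyncio is needed: plain asyncio refuses to start a second loop from inside a running one, which is exactly the situation in Jupyter notebooks (where a loop is already running). A stdlib-only reproduction of that refusal, without nest_asyncio applied:

```python
import asyncio

async def inner():
    pass

async def outer():
    # Inside a running loop, a nested asyncio.run() is rejected outright.
    coro = inner()
    try:
        asyncio.run(coro)
    except RuntimeError as e:
        coro.close()  # silence the "never awaited" warning
        return str(e)

msg = asyncio.run(outer())
print(msg)  # asyncio.run() cannot be called from a running event loop
```

nest_asyncio.apply() patches the running loop so that such nested calls re-enter it instead of raising.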
Closed Issue: Error coming in example code for Answer Correctness
Set the is_async parameter to False when calling the evaluate function to avoid conflicts with the existing event loop. Example:
score = evaluate(dataset, metrics=[answer_correctness], is_async=False)
To resolve the "Event loop is closed" error in your specific case, you can modify your code to ensure the event loop is properly managed. Here is an example:
import asyncio
from ragas.testset.generator import TestsetGenerator
from ragas.testset.evolutions import simple, reasoning, multi_context
from ragas.run_config import RunConfig
from langchain_openai import ChatOpenAI
from langchain_community.embeddings import HuggingFaceEmbeddings

async def main():
    generator_llm = ChatOpenAI(model_name="phi3.5:latest", openai_api_base="http://localhost:11434/v1", temperature=0.7)
    critic_llm = ChatOpenAI(model_name="phi3.5:latest", openai_api_base="http://localhost:11434/v1", temperature=0.7)
    hf_embeddings = HuggingFaceEmbeddings(model_name="BAAI/bge-base-zh-v1.5", model_kwargs={"device": "mps"})

    generator = TestsetGenerator.from_langchain(
        generator_llm,
        critic_llm,
        hf_embeddings,
        run_config=RunConfig(max_workers=1, seed=42, max_retries=3, log_tenacity=True, timeout=180),
    )

    # `documents` is assumed to have been loaded earlier, e.g. via a LangChain loader.
    testset = await generator.generate_with_langchain_docs(
        documents=documents,
        test_size=10,
        distributions={simple: 0.5, reasoning: 0.25, multi_context: 0.25},
        run_config=RunConfig(max_workers=1, seed=42, max_retries=1, log_tenacity=True),
        with_debugging_logs=True,
        is_async=True
    )

    ds = testset.to_dataset()
    ds.save_to_disk("./activity_testset")

if __name__ == "__main__":
    asyncio.run(main())
This should help manage the event loop correctly and avoid the "Event loop is closed" error [1][2][3][4].
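For background on the error message itself: asyncio.run() creates a fresh event loop and closes it on exit, and the error appears when something later tries to drive that closed loop. A stdlib-only reproduction, with no ragas involved:

```python
import asyncio

async def work():
    return 42

# asyncio.run() creates a fresh event loop, runs the coroutine,
# and closes the loop when it is done.
result = asyncio.run(work())

# Driving a loop after it has been closed raises the familiar error.
loop = asyncio.new_event_loop()
loop.close()
coro = work()
err = None
try:
    loop.run_until_complete(coro)
except RuntimeError as e:
    coro.close()  # silence the "never awaited" warning
    err = str(e)

print(result, "/", err)  # 42 / Event loop is closed
```

Wrapping all the generation work in one coroutine and making a single asyncio.run(main()) call, as above, avoids touching a loop after it has been torn down.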
To continue talking to Dosu, mention @dosu.
@shahules786 Do you know why? I am always getting this error; it's upsetting.
hey @KylinMountain that is a bummer. We have a new version of testset generation that is out - will you consider trying that out?
you will have to install it from source though
@jjmachan I would like to try. Are there any instructions on how to use the new version? Is the API the same as the current one? I've already installed it from source.
[x] I have checked the documentation and related resources and couldn't resolve my bug.
Describe the bug The LLM is served by Ollama, so there is no connection issue and it is able to extract key phrases, but it always fails during generation.
Ragas version: latest source code
Python version: 3.10
Code to Reproduce
Error trace
Expected behavior Should be able to generate the test dataset.
Additional context Add any other context about the problem here.