pytest-dev / pytest-xdist

pytest plugin for distributed testing and loop-on-failures testing modes.
https://pytest-xdist.readthedocs.io
MIT License

pytest-asyncio Hangs near finish using GPT-3.5 inference fixture #1084

Open junuMoon opened 6 months ago

junuMoon commented 6 months ago

Issue Description

When running pytest-asyncio together with pytest-xdist, my tests consistently hang somewhere between 90% and 100% completion. I suspect that resources acquired during the run are not being properly released at the end of the test suite.

I am using a fixture for running GPT-3.5 inference.

The suite accepts a --models command-line option and uses it to parametrize the tests.

def pytest_generate_tests(metafunc):
    """Generate tests based on custom command-line options."""
    if "model" in metafunc.fixturenames and metafunc.config.option.models:
        models = metafunc.config.option.models
        # Expand the shorthand values into the project's model enums.
        if models == "all":
            models = list(ModelType)
        elif models == "served":
            models = list(ServedModelType)

        model_fixtures = []
        ids = []
        for model in models:
            if model not in ModelType:
                raise ValueError(f"Unknown model: {model}")
            model_fixtures.append(model)
            ids.append(model)

        # Parametrize the session-scoped "model" fixture once per selected model.
        metafunc.parametrize("model", model_fixtures, ids=ids, scope="session")
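
For completeness, the --models option itself is registered in conftest.py roughly as below; the default value and help text here are illustrative, not the exact code.

def pytest_addoption(parser):
    # Assumed registration of the custom --models option used above;
    # the real default/help text may differ.
    parser.addoption(
        "--models",
        action="store",
        default=None,
        help="Model selection: 'all', 'served', or a specific model name",
    )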

@pytest.mark.asyncio
async def test(model, agenerate_message):
    response = await agenerate_message(model=model)
    passed = "passed" in response
    assert passed
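
agenerate_message is not shown above; a minimal sketch of how such a fixture might look, assuming it wraps an async OpenAI chat-completion call (the client setup, prompt, and response parsing are placeholders, not my actual code):

import pytest
from openai import AsyncOpenAI

@pytest.fixture(scope="session")
def agenerate_message():
    # Hypothetical implementation: one async client shared across the session.
    client = AsyncOpenAI()

    async def _agenerate(model):
        # Send a single chat-completion request and return the text content.
        completion = await client.chat.completions.create(
            model=str(model),
            messages=[{"role": "user", "content": "Reply with 'passed' if you can read this."}],
        )
        return completion.choices[0].message.content

    return _agenerate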

I added a session-scoped fixture to manage the event loop and ensure clean-up, but the problem persists.

@pytest.fixture(scope="session")
def event_loop():
    loop = asyncio.get_event_loop()
    yield loop

    pending = asyncio.tasks.all_tasks(loop)
    loop.run_until_complete(asyncio.gather(*pending))
    loop.run_until_complete(asyncio.sleep(1))
    loop.close()
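
One detail about that cleanup: gather() waits for every pending task to finish, so a single task that never completes (e.g. an inference request stuck on a dead connection) would hang the worker at teardown. A sketch of an alternative I considered, which cancels the pending tasks and bounds the shutdown wait (the 10-second timeout and new_event_loop() call are my own assumptions, not required by pytest-asyncio):

@pytest.fixture(scope="session")
def event_loop():
    # Create a fresh loop for the session instead of relying on get_event_loop().
    loop = asyncio.new_event_loop()
    yield loop

    # Cancel whatever is still pending rather than waiting on it indefinitely,
    # and bound the shutdown so teardown cannot hang the xdist worker.
    pending = asyncio.all_tasks(loop)
    for task in pending:
        task.cancel()
    if pending:
        loop.run_until_complete(asyncio.wait(pending, timeout=10))
    loop.close()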