HypothesisWorks / hypothesis

Hypothesis is a powerful, flexible, and easy to use library for property-based testing.
https://hypothesis.works

Error while patching time.perf_counter #4051

Closed galbino closed 1 month ago

galbino commented 1 month ago

Hi all. I have a function that uses time.perf_counter() to calculate latency, so I wrote a unit test that patches that function and gave the mock a side_effect list containing the start and end times. This worked fine for a while, but after I updated hypothesis the error below started occurring. From my investigation, the regression coincides with changes to the same file where the error is raised. As far as I can tell, Hypothesis itself now calls time.perf_counter() while my mock is still active, so the test crashes because there are no items left in my side_effect list.

Here's a portion of code that I was able to reproduce the issue with:

async def latency_calc():
    import time
    start_time = time.perf_counter()
    print("doing something here")
    a = 1-1
    b = 1+1
    c = a+b
    assert c == 2
    latency = time.perf_counter() - start_time
    return latency

@pytest.mark.asyncio
@hypothesis.given(
    start_time=st.floats(min_value=0, allow_infinity=False, allow_nan=False),
    end_time=st.floats(min_value=0, allow_infinity=False, allow_nan=False),
    http_info=st.fixed_dictionaries(
        {
            "url": st.shared(
                hypothesis.provisional.urls()
                .map(str.lower)
                .map(lambda x: x.partition("#")[0]),
                key="url",
            ),
        }
    ),
    route=st.builds(
        starlette.routing.Route,
        path=st.text().map(lambda x: f"/{x}"),
        endpoint=st.none(),
    ),
)
async def test_patching_issue(
    start_time: float,
    end_time: float,
    http_info: dict[str, typing.Any],
    route: (
        starlette.routing.Route
        | starlette.routing.WebSocketRoute
        | starlette.routing.Mount
    ),
):
    with unittest.mock.patch("time.perf_counter") as mock:
        mock.side_effect = [start_time, end_time]
        latency = await latency_calc()
    assert latency == end_time-start_time
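The failure mode can be shown in isolation, outside Hypothesis: once a mock's side_effect list is exhausted, any extra call (here standing in for Hypothesis's own internal time.perf_counter() call) raises StopIteration. A minimal sketch:

```python
import time
import unittest.mock

with unittest.mock.patch("time.perf_counter") as mock:
    mock.side_effect = [1.0, 2.0]
    start = time.perf_counter()  # consumes 1.0
    end = time.perf_counter()    # consumes 2.0
    exhausted = False
    try:
        time.perf_counter()      # an extra, unexpected caller
    except StopIteration:
        exhausted = True         # the list has run out
```

Inside a test, that StopIteration surfaces as the error in the traceback below.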

Some info that might be helpful:

- Version with this problem: any version after 6.103.0
- Python version: 3.11.1
- File where it happens: hypothesis/internal/conjecture/junkdrawer.py:436
- Error raised:

Falsifying example: test_patching_issue(
    start_time=2.225073858507203e-309,
    end_time=1.8836429400948063e+273,
    http_info={'url': 'https://jld.z.jld.is/u%26o'},
    route=Route(path='/', endpoint=None),
)
Failed to reproduce exception. Expected: 
async def latency_calc():
        import time
>       start_time = time.perf_counter()

tests/unit/test_logging_middleware.py:215: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
../../../../.pyenv/versions/3.11.1/lib/python3.11/unittest/mock.py:1118: in __call__
    return self._mock_call(*args, **kwargs)
../../../../.pyenv/versions/3.11.1/lib/python3.11/unittest/mock.py:1122: in _mock_call
    return self._execute_mock_call(*args, **kwargs)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <MagicMock name='perf_counter' id='4495232464'>, args = (), kwargs = {}
effect = <list_iterator object at 0x10bee2e00>

    def _execute_mock_call(self, /, *args, **kwargs):
        # separate from _increment_mock_call so that awaited functions are
        # executed separately from their call, also AsyncMock overrides this method

        effect = self.side_effect
        if effect is not None:
            if _is_exception(effect):
                raise effect
            elif not _callable(effect):
>               result = next(effect)
E               StopIteration

../../../../.pyenv/versions/3.11.1/lib/python3.11/unittest/mock.py:1179: StopIteration

The above exception was the direct cause of the following exception:

start_time = 2.225073858507203e-309, end_time = 1.8836429400948063e+273
http_info = {'url': 'https://jld.z.jld.is/u%26o'}
route = Route(path='/', name='NoneType', methods=[])
jobh commented 1 month ago

Thanks for reporting @galbino!

This seems to be caused by our measuring of time spent in GC. A quick workaround may be to gc.disable()/gc.enable() in the test context.

Maybe we should store a reference early to the real perf_counter and use that.

Zac-HD commented 1 month ago

I think it'd be reasonable to assign _perf_counter = time.perf_counter in here:

https://github.com/HypothesisWorks/hypothesis/blob/6e7921e19f9bbf87cd7cf3c4c2b0409aad0a2ee9/hypothesis-python/src/hypothesis/internal/conjecture/junkdrawer.py#L420-L433

...it might be annoying to deal with our own monkeypatching, though maybe just doing from time import perf_counter would let us skip the whole dance and just apply a different patch in that module?
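The first proposal amounts to binding the real function once at import time, so later monkeypatching of time.perf_counter does not affect Hypothesis's internal timing. A minimal sketch (names here are illustrative, not Hypothesis's actual implementation):

```python
import time
import unittest.mock

# Captured at import time, before any test patches time.perf_counter.
_perf_counter = time.perf_counter


def gc_time_delta() -> float:
    """Measure a tiny interval using the saved reference."""
    start = _perf_counter()
    end = _perf_counter()
    return end - start


with unittest.mock.patch("time.perf_counter") as mock:
    mock.side_effect = [1.0, 2.0]
    delta = gc_time_delta()  # uses the saved reference, never touches the mock
```

Because gc_time_delta() goes through the saved reference, the mock's side_effect list is left untouched for the code under test.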

jobh commented 1 month ago

I think I like your first proposal best, because even if it's more annoying it makes the why clear.

Let me throw up a patch to see just how annoying.

jobh commented 1 month ago

Hm, very annoying: how do we then test the gc accounting itself? (ref test_should_not_fail_deadline_due_to_gc)
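One hedged way around that testing problem: a test for the GC accounting could patch the saved module-level alias rather than time.perf_counter itself. The class below is a hypothetical stand-in for the module holding the alias, not Hypothesis's real junkdrawer code.

```python
import time
import unittest.mock


class _junkdrawer:
    """Stand-in for a module that saved a reference to the real perf_counter."""

    _perf_counter = staticmethod(time.perf_counter)

    @classmethod
    def measure(cls) -> float:
        return cls._perf_counter()


# Patch the alias directly, so the accounting code sees the fake clock.
with unittest.mock.patch.object(_junkdrawer, "_perf_counter") as mock:
    mock.return_value = 42.0
    value = _junkdrawer.measure()
```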