pytest-dev / pytest-timeout

MIT License

README.rst: The timeout is not limited to integers #54

Closed davesteele closed 4 years ago

davesteele commented 4 years ago

The integer requirement was apparently removed in version 1.2.0, but the README does not reflect the change.

flub commented 4 years ago

It's still my opinion that if you need fractional seconds you might not be using pytest-timeout correctly. But anyway, it's enforced in the unittests so sure.

flub commented 4 years ago

Thanks for caring though! :)

davesteele commented 4 years ago

I wanted:

```python
@pytest.mark.timeout(0.1, method="signal")
def test_performance():
    for _ in range(100):
        foo()
```

flub commented 4 years ago

On Thu 30 Jan 2020 at 14:23 -0800, David Steele wrote:

I wanted:

```python
@pytest.mark.timeout(0.1, method="signal")
def test_performance():
    for _ in range(100):
        foo()
```

The time source pytest-timeout uses is not reliable for performance testing though. To me this looks inherently flaky, as it depends on the machine you execute on, how many other things are being scheduled, etc. What do you do when a test like this fails?

I'm honestly asking because things in this area come up often and I'd like to understand this usecase better. Because if this provides long-term benefit to people perhaps there's ways in which the pytest-timeout API could be more helpful for those usecases.

davesteele commented 4 years ago

I wanted a timeout function or context manager for things like the performance test above, to replace some ugly datetime/timedelta code. All the proposed solutions I found used the same strategy you do, with few packaged options.

I use it for TDD. If it fails a lot, I'll start looking at performance issues. I'm OK with something like a 20% error.

Maybe what I need is a context manager that wraps the datetime timedelta logic, for fine grained accuracy. An actual signal-based timeout on top of that would be useful for runaway tasks.
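A minimal sketch of what such a context manager could look like, using `time.monotonic()` rather than datetime arithmetic (the name `assert_duration` and the assertion-based failure mode are assumptions for illustration, not part of pytest-timeout):

```python
import time
from contextlib import contextmanager


@contextmanager
def assert_duration(max_seconds):
    """Fail if the wrapped block takes longer than max_seconds.

    This only measures elapsed wall-clock time and checks it after
    the block finishes; it does not interrupt a runaway block, so a
    separate signal-based timeout would still be needed for that.
    """
    start = time.monotonic()
    yield
    elapsed = time.monotonic() - start
    assert elapsed <= max_seconds, (
        f"block took {elapsed:.3f}s, budget was {max_seconds}s"
    )
```

Used in a test, it would replace the manual timedelta bookkeeping:

```python
def test_performance():
    with assert_duration(0.1):
        for _ in range(100):
            foo()
```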

flub commented 4 years ago

On Sat 01 Feb 2020 at 16:27 -0800, David Steele wrote:

I'm OK with something like a 20% error.

Wow, you have very different expectations from your tests than me it seems.

Maybe what I need is a context manager that wraps the datetime timedelta logic, for fine grained accuracy.

Yes, I also think a context manager is much better for performance oriented problems.

An actual signal-based timeout on top of that would be useful for runaway tasks.

Terminating things is messy, error prone and far from always safe. I'd leave the whole termination part out of something checking for performance. If you're running something in a loop and still expect sub-second wall-clock time, then why care if your test ends up taking dozens of seconds and then fails?