Closed: giampaolo closed this issue 2 years ago
```python
# bench.py
import sched, time

events = []
scheduler = sched.scheduler(time.time, time.sleep)
for x in range(4000):
    scheduler.enter(1, 1, lambda: None, ())

t = time.time()
for x in scheduler._queue:
    scheduler.cancel(x)
print(time.time() - t)
```
Before the patch: 9.433167934417725
After the patch: 1.3120810985565186
I have another approach in mind, which avoids removing the element from the queue immediately, and which should be an order of magnitude faster, but I'll provide that as a separate patch since it poses questions about API and backward compatibility.
Can you post your other patch too? I would like to review both at the same time.
It's in the attachment.
Before the patch: 9.433167934417725
After the patch: 0.0016150474548339844
scheduler.queue and scheduler.empty should be modified accordingly (which I haven't done; this is just to give you an idea).
The new patch in the attachment takes care of modifying empty() and the queue property in accordance with the new implementation. With this, the API behaves the same as before (this was my main concern). It's also smarter about cleaning up when too many cancelled items are pending:
if self._cancellations > 50 \
and self._cancellations > (len(self._queue) >> 1):
...
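To make the idea concrete, here is a minimal, self-contained sketch of a lazily-cancelling scheduler along these lines (illustrative only; the names and details are mine, not the actual patch):

```python
import heapq
import time


class Event:
    """Queue entry; ordered by (time, priority), like sched's events."""
    __slots__ = ('time', 'priority', 'action', 'argument', 'cancelled')

    def __init__(self, when, priority, action, argument):
        self.time, self.priority = when, priority
        self.action, self.argument = action, argument
        self.cancelled = False

    def __lt__(self, other):
        return (self.time, self.priority) < (other.time, other.priority)


class LazyScheduler:
    """Sketch: cancel() marks events instead of removing them right away."""

    def __init__(self, timefunc=time.monotonic, delayfunc=time.sleep):
        self.timefunc, self.delayfunc = timefunc, delayfunc
        self._queue = []
        self._cancellations = 0

    def enter(self, delay, priority, action, argument=()):
        event = Event(self.timefunc() + delay, priority, action, argument)
        heapq.heappush(self._queue, event)
        return event

    def cancel(self, event):
        # O(1): no list.remove() + heapify per cancellation.
        event.cancelled = True
        self._cancellations += 1
        # Compact the heap only once cancelled entries dominate it.
        if (self._cancellations > 50
                and self._cancellations > (len(self._queue) >> 1)):
            self._queue = [e for e in self._queue if not e.cancelled]
            heapq.heapify(self._queue)
            self._cancellations = 0

    def empty(self):
        return not any(not e.cancelled for e in self._queue)

    @property
    def queue(self):
        # Hide cancelled entries so the public view looks unchanged.
        return sorted(e for e in self._queue if not e.cancelled)

    def run(self):
        q = self._queue
        while q:
            event = q[0]
            if event.cancelled:          # skip lazily-cancelled entries
                heapq.heappop(q)
                self._cancellations -= 1
                continue
            now = self.timefunc()
            if event.time > now:
                self.delayfunc(event.time - now)
            else:
                heapq.heappop(q)
                event.action(*event.argument)
```

The point is that cancel() no longer touches the heap on every call: cancelled entries are skipped by run() and compacted in bulk only once they make up more than half the queue.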
Also, I made a little benchmark script (attached) to make sure that the speed of the rest of the API isn't significantly affected by this change:
BEFORE THE PATCH
test_cancel : time=0.66648 : calls=1 : stdev=0.00000
test_empty  : time=0.00026 : calls=1 : stdev=0.00000
test_enter  : time=0.00309 : calls=1 : stdev=0.00000
test_queue  : time=6.20777 : calls=1 : stdev=0.00000
test_run    : time=0.00746 : calls=1 : stdev=0.00000
AFTER THE PATCH
test_cancel : time=0.00054 : calls=1 : stdev=0.00000
test_empty  : time=0.00031 : calls=1 : stdev=0.00000
test_enter  : time=0.00375 : calls=1 : stdev=0.00000
test_queue  : time=6.30314 : calls=1 : stdev=0.00000
test_run    : time=0.00716 : calls=1 : stdev=0.00000
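(The benchmark attachment itself isn't shown here; a minimal harness in the same spirit might look like the following, with event counts and labels of my own choosing:)

```python
import timeit

# Setup shared by the tests that need a pre-populated scheduler.
COMMON = (
    "import sched, time\n"
    "s = sched.scheduler(time.time, time.sleep)\n"
    "events = [s.enter(3600, 1, (lambda: None), ()) for _ in range(4000)]\n"
)

def bench(label, stmt, setup="import sched, time\n"
                             "s = sched.scheduler(time.time, time.sleep)"):
    # number=1: each test is a single pass over its statement.
    print("%-12s: time=%.5f : calls=1"
          % (label, timeit.timeit(stmt, setup=setup, number=1)))

bench("test_enter",  "for _ in range(4000): s.enter(3600, 1, (lambda: None), ())")
bench("test_cancel", "for e in list(s.queue): s.cancel(e)", COMMON)
bench("test_queue",  "for _ in range(100): s.queue", COMMON)
bench("test_empty",  "for _ in range(100): s.empty()", COMMON)
```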
Thread locks introduced in bpo-8684 should make this change more robust. If this patch is reasonable, I'd like to commit it before the one in bpo-8684 for simplicity. Raymond?
In principle this is the right approach, but it increases the time of enter() by 20%. Here is an updated and slightly optimized patch that restores enter() performance (at the cost of a slightly slower cancel()). Because enter() is executed for every event while cancel() is not, and enter()'s gain is greater than cancel()'s loss, I think it is a very profitable trade-off.
Giampaolo, Raymond? What are your opinions?
Serhiy, perhaps it would be useful to see if such optimizations can apply to Tulip's (or asyncio's) event loop, since it will probably be the new standard in 3.4.
Yes, I will look into it.
I don't have time to look into Serhiy's changes right now, but here's a brief summary:

- there's a (I think) *minor* downside in terms of backward compatibility, because scheduler._queue won't be updated after cancel() (basically this is the reason why this issue was waiting for Raymond's approval);
- the upside (other than the great speedup) is that I doubt anyone relies on that, and scheduler._queue is not supposed to be used in the first place;
- tulip's scheduler already provides something very similar to what this patch proposes, so no action should be taken on that front.
I personally think this should go in but I'd like to hear an OK from Raymond first.
Sorry, I failed to notice that there's a scheduler.queue property which exposes the underlying _queue attribute, so the patch should take that into account and return the updated list.
The patch in the attachment applies cleanly against the current 3.4 code (the last one didn't) and returns an updated list from scheduler.queue.
I rebased my work starting from my original patch (cancel.patch) rather than Serhiy's, because it wasn't clear to me *where* exactly the enter() speedup was introduced (the enter() method is apparently untouched by cancel_2.patch).
> it wasn't clear to me *where* exactly the enter() speedup was introduced
In constructing the Event object: you introduced __init__().
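(To illustrate the cost: adding a Python-level __init__ to the Event type means one extra function call for every event that enter() creates. A hypothetical micro-benchmark, not taken from either patch:)

```python
import timeit
from collections import namedtuple

Event = namedtuple('Event', 'time priority action argument')

class EventWithInit(Event):
    # One extra Python-level call for every event created by enter().
    def __init__(self, *args):
        super().__init__()

print(timeit.timeit(lambda: Event(1.0, 1, None, ()), number=100_000))
print(timeit.timeit(lambda: EventWithInit(1.0, 1, None, ()), number=100_000))
# The second timing is noticeably higher, purely from the added __init__.
```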
Here is a patch based on my patch and Giampaolo's new patch. In addition, it fixes the performance of the queue property (the regression was perhaps introduced in bpo-18432).
Unpatched:
test_cancel : time=4.05863 : calls=1 : stdev=0.00000
test_empty  : time=0.00499 : calls=1 : stdev=0.00000
test_enter  : time=0.03537 : calls=1 : stdev=0.00000
test_queue  : time=37.82003 : calls=1 : stdev=0.00000
test_run    : time=0.05289 : calls=1 : stdev=0.00000
cancel3.patch:
test_cancel : time=0.00649 : calls=1 : stdev=0.00000
test_empty  : time=0.00704 : calls=1 : stdev=0.00000
test_enter  : time=0.03959 : calls=1 : stdev=0.00000
test_queue  : time=45.34278 : calls=1 : stdev=0.00000
test_run    : time=0.05477 : calls=1 : stdev=0.00000
cancel_4.patch:
test_cancel : time=0.00889 : calls=1 : stdev=0.00000
test_empty  : time=0.00636 : calls=1 : stdev=0.00000
test_enter  : time=0.03092 : calls=1 : stdev=0.00000
test_queue  : time=3.93284 : calls=1 : stdev=0.00000
test_run    : time=0.05294 : calls=1 : stdev=0.00000
All the patches have a problem with stable ordering: re-heapifying changes it. But there are even more serious problems with the current code (see bpo-19270).
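(To illustrate the ordering issue: a binary heap does not preserve insertion order among equal keys, so each re-heapify may shuffle events that share the same time and priority. A standalone demo, not taken from any of the patches:)

```python
import heapq

class Item:
    def __init__(self, key, label):
        self.key, self.label = key, label
    def __lt__(self, other):
        # Ties (equal keys) have no defined relative order in a heap.
        return self.key < other.key

items = [Item(1, 'first'), Item(1, 'second'), Item(1, 'third'), Item(0, 'x')]
heapq.heapify(items)
print([heapq.heappop(items).label for _ in range(len(items))])
# 'x' always comes out first, but the three tied items are not guaranteed
# to come out in the order they were inserted.
```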
In the updated patch I have reverted the queue optimization (that should be a separate issue) and made some minor changes.
Fixed a bug in the previous patch.
> Serhiy, perhaps it would be useful to see if such optimizations can apply to Tulip's (or asyncio's) event loop, since it will probably be the new standard in 3.4.
asyncio was designed differently. Cancelling a task doesn't remove it from a list of pending tasks. Cancelled tasks are just skipped when the event loop executes tasks.
If you look more closely, a "task" can be a Handle, Future or Task object. A Handle object has a _cancelled attribute, its cancel() method just sets this attribute to True. It's almost the same for a Future object. In the context of a Task object, cancel() is very different because it sends a CancelledError exception into the running code.
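For example, with the public asyncio API (a small illustration, not code from this issue):

```python
import asyncio

async def main():
    loop = asyncio.get_running_loop()
    handle = loop.call_later(60, print, "never runs")
    handle.cancel()              # only sets the internal _cancelled flag;
    print(handle.cancelled())    # True -- the entry stays in the loop's
                                 # heap and is skipped/purged later

asyncio.run(main())
```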
I see no possible optimization here.
> I see no possible optimization here.
asyncio was just optimized to handle cancellation of many callbacks; see bpo-22448.
Ping.
I no longer think this should be done. For most applications, cancel() speed is the least important operation and isn't worth adding any extra baggage to the run() loop. The current cancel() code is only slow if the queue is somewhat large (atypical for a scheduling app). Also, to my eyes the patch more than doubles the complexity of the module (which can currently be almost completely understood by examining the short run-loop). Lastly, a lazy cancel() keeps the references around longer (which may be undesirable for some apps).
If you really think this module needs a lazy cancel(), then press ahead. Otherwise, we have no evidence that this is a problem in the real world. The current cancel call is O(n) but runs at C speed, which should be plenty fast enough for most cases.
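(For reference, the current cancel() is essentially a list removal followed by a re-heapify; this is a paraphrase from memory, not a verbatim copy of Lib/sched.py:)

```python
def cancel(self, event):
    # O(n): list.remove() scans the queue and heapify rebuilds it,
    # but both run in C, which is why it is usually fast enough.
    with self._lock:
        self._queue.remove(event)
        heapq.heapify(self._queue)
```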
This issue was created over a decade ago and the discussion indicates there is no consensus about the suggested optimisation. Any objections to closing it?
Is this module even useful? It looks like a basic algorithm I implemented over 3 decades ago. Do we need this in the stdlib? As was noted earlier, asyncio has a similar event queue but it's different enough that it couldn't reuse the code here (note that asyncio also manages its own clock, and doesn't use locks). I imagine that other packages needing something like this probably also mostly wrote their own (heapq is very handy for this!) rather than using this module (dang, the scheduler class name doesn't even start with a capital S, it's so old :-).
> Is this module even useful?
In the top 5000 PyPI projects, 5 projects use the sched module:
Example with the newrelic code:
self._scheduler = sched.scheduler(self._harvest_timer, self._harvest_shutdown.wait)
...
self._scheduler.enter(event_harvest_config.report_period_ms / 1000.0, 1, self._harvest_flexible, ())
...
self._scheduler.run()
...
I used the command: `./search_pypi_top.py PYPI-2020-09-01/ 'import sched *$|from sched import' -q`.
If we would like to remove the module, we should start by deprecating it. https://peps.python.org/pep-0594/ doesn't schedule the removal of the sched module :-)
Okay, never mind then. (Thanks for the research!)
But let’s just close this.
Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.
GitHub fields:
```python
assignee = 'https://github.com/serhiy-storchaka'
closed_at = None
created_at =
labels = ['library', '3.9', 'performance']
title = 'sched.py: speedup cancel() method'
updated_at =
user = 'https://github.com/giampaolo'
```
bugs.python.org fields:
```python
activity =
actor = 'vstinner'
assignee = 'serhiy.storchaka'
closed = False
closed_date = None
closer = None
components = ['Library (Lib)']
creation =
creator = 'giampaolo.rodola'
dependencies = []
files = ['23750', '23753', '23785', '23786', '27489', '32136', '32138', '32281']
hgrepos = []
issue_num = 13451
keywords = ['patch']
message_count = 20.0
messages = ['148103', '148105', '148108', '148403', '149292', '172377', '199978', '199988', '200012', '200015', '200016', '200017', '200023', '200043', '200792', '200802', '219316', '228045', '267780', '350176']
nosy_count = 5.0
nosy_names = ['rhettinger', 'pitrou', 'giampaolo.rodola', 'stutzbach', 'serhiy.storchaka']
pr_nums = ['22759']
priority = 'low'
resolution = None
stage = 'patch review'
status = 'open'
superseder = None
type = 'performance'
url = 'https://bugs.python.org/issue13451'
versions = ['Python 3.9']
```