stas closed this 10 years ago
@malthe I gave up trying to make postgres predict the next timeout.
The reason is that it would require always returning a task, which won't work with the query constraint `schedule_at <= now()`.
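To illustrate the constraint, here is a rough sketch of what such a dequeue query looks like; the table and column names are hypothetical, not pq's actual schema. Because the `WHERE` clause only matches tasks that are already due, the query cannot also report the schedule time of a future task, so there is no row to derive the next timeout from:

```python
# Hypothetical dequeue query (illustrative only). Only tasks that are
# already due can match, so a future task's schedule_at is never
# returned -- predicting the next timeout would require always
# returning a row, which this constraint rules out.
DEQUEUE_SQL = """
SELECT id, data, schedule_at
  FROM queue
 WHERE schedule_at IS NULL OR schedule_at <= now()
 ORDER BY schedule_at NULLS FIRST
 LIMIT 1
"""
```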
I found and fixed the issue with the leaking connection (I hope, at least by testing it locally).
Sorry, nope, it's not.
@malthe please take a look at the latest changes I've made.
There are a bunch of changes I've made to the way the timeout was handled before. The reason for the connection leakage was an exception that apparently was not handled at the same level where the connection was supposed to be closed, leaving it behind. (I might be wrong, but dropping the exception-based approach fixed the issue.)
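The failure mode can be reproduced in miniature. This is a sketch with a toy stand-in pool, not pq's actual code: if the exception unwinds past the frame that was supposed to release the connection, the connection leaks; wrapping the acquire/release in a `try`/`finally` (here via a context manager) guarantees the release runs regardless.

```python
import contextlib

class Pool:
    """Toy stand-in for a connection pool (hypothetical, for illustration)."""
    def __init__(self):
        self.in_use = 0
    def getconn(self):
        self.in_use += 1
        return object()
    def putconn(self, conn):
        self.in_use -= 1

@contextlib.contextmanager
def connection(pool):
    # Acquire a connection and guarantee it is returned even if the
    # body raises. The leak happened when an exception escaped past
    # the level where the connection was supposed to be closed.
    conn = pool.getconn()
    try:
        yield conn
    finally:
        pool.putconn(conn)

pool = Pool()
try:
    with connection(pool):
        raise TimeoutError("simulated timeout while waiting for a task")
except TimeoutError:
    pass
assert pool.in_use == 0  # nothing leaked despite the exception
```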
There's one caveat though: the dynamic timeout estimation will calculate the number of seconds the queue needs to wait only after the first task is found and pulled. To me this actually makes sense, but I hope I'm not missing anything here.
To explain: if we have 3 scheduled tasks, the queue will have to wait its default timeout until the first task is processed and a new timeout is estimated. The timeout will reset back to the default after all 3 tasks are processed.
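The behavior described above can be sketched as follows. The function name, the default value, and the fallback logic are assumptions for illustration, not pq's actual implementation: with no upcoming task known, the queue falls back to the default timeout; once a task has been pulled and the next schedule time is known, it waits exactly until that task is due.

```python
from datetime import datetime, timedelta

DEFAULT_TIMEOUT = 60.0  # hypothetical default, in seconds

def estimate_timeout(next_schedule_at, now, default=DEFAULT_TIMEOUT):
    """Return how many seconds to wait before the next poll.

    If no upcoming scheduled task is known (e.g. before the first
    task has been pulled, or after the last one), fall back to the
    default; otherwise wait until the next task is due, clamped to
    [0, default].
    """
    if next_schedule_at is None:
        return default
    remaining = (next_schedule_at - now).total_seconds()
    return max(0.0, min(remaining, default))

now = datetime(2015, 1, 1, 12, 0, 0)
# Next task is due in 10 seconds: wait just that long.
print(estimate_timeout(now + timedelta(seconds=10), now))  # 10.0
# No scheduled task known: reset back to the default timeout.
print(estimate_timeout(None, now))  # 60.0
```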
I updated the tests for timers. It looks cleaner now, thanks for the suggestions!
@malthe please take a look at the latest commit. Also, please share any other thoughts on what you would want to see done before this gets merged. Thanks.
@stas yep, let me have a look. I was just a little busy with other things.
Hey @malthe please take a look at the changes below.
This PR includes your work on #1. The changes I've made were required in order to avoid a pq re-connect on every scheduled task, which would exhaust any available database/pool connections.
I also updated `setup.py` to include the required deps so that the tests can be run. The benchmarks show almost no change in performance. Thank you in advance.