The current implementation was leaking SQLite file descriptors on each periodic task run (~3 per synced feed), eventually causing "too many open files" errors and blocking the application.
This is because we were calling `create_huey_app` inside the decorator, once per task run. Each call produced a new SQLAlchemy engine, and with it a new connection pool, which was never released.
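For context, here's a minimal sketch of the leaky pattern, assuming a Flask + MiniHuey setup; `with_app_context`, `sync_feeds`, and the import paths are illustrative, not the exact project code:

```python
import functools

from huey import crontab
from huey.contrib.mini import MiniHuey

from app import create_huey_app  # the factory mentioned above; import path is illustrative

huey = MiniHuey()

def with_app_context(f):
    @functools.wraps(f)
    def wrapper(*args, **kwargs):
        # a new app per run means a new SQLAlchemy engine and a new
        # connection pool per run, none of which is ever disposed,
        # so sqlite file descriptors accumulate until the OS limit
        app = create_huey_app()
        with app.app_context():
            return f(*args, **kwargs)
    return wrapper

@huey.task(crontab(minute='*/30'))  # hypothetical schedule
@with_app_context
def sync_feeds():
    ...  # db access here opens pooled sqlite connections
```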
One way around this was to explicitly call `db.engine.dispose()` as part of the decorator, after the task ran.
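That workaround would have looked roughly like this (same caveat about illustrative names):

```python
import functools

from app import create_huey_app, db  # illustrative import path

def with_app_context(f):
    @functools.wraps(f)
    def wrapper(*args, **kwargs):
        app = create_huey_app()
        with app.app_context():
            try:
                return f(*args, **kwargs)
            finally:
                # close all connections in this run's pool so their
                # file descriptors are returned to the OS
                db.engine.dispose()
    return wrapper
```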
But it seems more reasonable to instantiate the huey application once (and thus have a single engine/connection pool), and manage concurrent connections via the pool settings. This PR does that by setting a limit on the MiniHuey pool size and a matching limit on the SQL engine connection pool size. This way the maximum number of concurrent connections is capped, and the same connections can be reused on subsequent cron task runs.
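A minimal sketch of what this looks like, assuming a Flask-SQLAlchemy app; `POOL_SIZE`, the import paths, and the engine-option plumbing are illustrative:

```python
from huey.contrib.mini import MiniHuey

from app import create_huey_app  # illustrative import path

POOL_SIZE = 4  # hypothetical value: one shared cap for tasks and connections

# instantiate once at module load: a single app, engine, and connection
# pool shared by all task runs
app = create_huey_app()

# cap the SQLAlchemy pool to match; these options must be set before the
# engine is first created
app.config['SQLALCHEMY_ENGINE_OPTIONS'] = {
    'pool_size': POOL_SIZE,  # keep at most POOL_SIZE pooled connections
    'max_overflow': 0,       # never open connections beyond the cap
}

# MiniHuey's pool_size bounds concurrent task execution, so at most
# POOL_SIZE tasks compete for at most POOL_SIZE connections
huey = MiniHuey(pool_size=POOL_SIZE)

def with_app_context(f):
    def wrapper(*args, **kwargs):
        with app.app_context():  # reuse the single app/engine
            return f(*args, **kwargs)
    return wrapper
```

With the two limits matched, a burst of cron tasks can never demand more connections than the pool holds, and idle connections stay open for reuse instead of being torn down and recreated each run.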
I'll merge this and monitor file descriptor behavior over time to verify it works as expected.