timgit / pg-boss

Queueing jobs in Node.js using PostgreSQL like a boss

Job expiration time limit #389

Open iccicci opened 1 year ago

iccicci commented 1 year ago

Hi all,

we were looking for an option to make a job never expire; we didn't find one in the docs, so we decided to make our jobs expire in one year by adding expireInHours to our options:

      await pgBoss.send(QUEUE_NAME, task, {
        expireInHours: 24 * 365,
        retryBackoff: true,
        retryDelay: 120,
        retryLimit: 10
      });

After this change, even though the tasks successfully complete after a few seconds, we see the following in the job table:

select state, output from pgboss.job where name = 'pool-metadata';
 state |                                                                                                                                                                                                                          output
-------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 retry | {"name": "Error", "stack": "Error: handler execution exceeded 31536000000ms\n    at resolveWithinSeconds (/app/node_modules/pg-boss/src/manager.js:29:49)\n    at pMap.concurrency (/app/node_modules/pg-boss/src/manager.js:232:11)\n    at /app/node_modules/p-map/index.js:57:28\n    at runMicrotasks (<anonymous>)\n    at processTicksAndRejections (internal/process/task_queues.js:95:5)", "message": "handler execution exceeded 31536000000ms"}
 retry | {"name": "Error", "stack": "Error: handler execution exceeded 31536000000ms\n    at resolveWithinSeconds (/app/node_modules/pg-boss/src/manager.js:29:49)\n    at pMap.concurrency (/app/node_modules/pg-boss/src/manager.js:232:11)\n    at /app/node_modules/p-map/index.js:57:28\n    at runMicrotasks (<anonymous>)\n    at processTicksAndRejections (internal/process/task_queues.js:95:5)", "message": "handler execution exceeded 31536000000ms"}
 retry | {"name": "Error", "stack": "Error: handler execution exceeded 31536000000ms\n    at resolveWithinSeconds (/app/node_modules/pg-boss/src/manager.js:29:49)\n    at pMap.concurrency (/app/node_modules/pg-boss/src/manager.js:232:11)\n    at /app/node_modules/p-map/index.js:57:28\n    at runMicrotasks (<anonymous>)\n    at processTicksAndRejections (internal/process/task_queues.js:95:5)", "message": "handler execution exceeded 31536000000ms"}
 retry | {"name": "Error", "stack": "Error: handler execution exceeded 31536000000ms\n    at resolveWithinSeconds (/app/node_modules/pg-boss/src/manager.js:29:49)\n    at pMap.concurrency (/app/node_modules/pg-boss/src/manager.js:232:11)\n    at /app/node_modules/p-map/index.js:57:28\n    at runMicrotasks (<anonymous>)\n    at processTicksAndRejections (internal/process/task_queues.js:95:5)", "message": "handler execution exceeded 31536000000ms"}
 retry | {"name": "Error", "stack": "Error: handler execution exceeded 31536000000ms\n    at resolveWithinSeconds (/app/node_modules/pg-boss/src/manager.js:29:49)\n    at pMap.concurrency (/app/node_modules/pg-boss/src/manager.js:232:11)\n    at /app/node_modules/p-map/index.js:57:28\n    at runMicrotasks (<anonymous>)\n    at processTicksAndRejections (internal/process/task_queues.js:95:5)", "message": "handler execution exceeded 31536000000ms"}
 retry | {"name": "Error", "stack": "Error: handler execution exceeded 31536000000ms\n    at resolveWithinSeconds (/app/node_modules/pg-boss/src/manager.js:29:49)\n    at pMap.concurrency (/app/node_modules/pg-boss/src/manager.js:232:11)\n    at /app/node_modules/p-map/index.js:57:28\n    at runMicrotasks (<anonymous>)\n    at processTicksAndRejections (internal/process/task_queues.js:95:5)", "message": "handler execution exceeded 31536000000ms"}
 retry | {"name": "Error", "stack": "Error: handler execution exceeded 31536000000ms\n    at resolveWithinSeconds (/app/node_modules/pg-boss/src/manager.js:29:49)\n    at pMap.concurrency (/app/node_modules/pg-boss/src/manager.js:232:11)\n    at /app/node_modules/p-map/index.js:57:28\n    at runMicrotasks (<anonymous>)\n    at processTicksAndRejections (internal/process/task_queues.js:95:5)", "message": "handler execution exceeded 31536000000ms"}
 retry | {"name": "Error", "stack": "Error: handler execution exceeded 31536000000ms\n    at resolveWithinSeconds (/app/node_modules/pg-boss/src/manager.js:29:49)\n    at pMap.concurrency (/app/node_modules/pg-boss/src/manager.js:232:11)\n    at /app/node_modules/p-map/index.js:57:28\n    at runMicrotasks (<anonymous>)\n    at processTicksAndRejections (internal/process/task_queues.js:95:5)", "message": "handler execution exceeded 31536000000ms"}
(8 rows)

Since 31536000000 ms is exactly the number of milliseconds corresponding to the expireInHours value we set, I guess something is not working the way we expect.
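As a quick check of the conversion (not part of the original report):

    // 24 * 365 hours expressed in milliseconds
    const ms = 24 * 365 * 60 * 60 * 1000;
    console.log(ms); // 31536000000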

Could you please tell me whether there is a safe upper limit on job expiration time, or whether you can see anything else wrong in our options? Or could there be a bug?

Thank you in advance, iCC

timgit commented 1 year ago

I think it's generally accepted that most tasks should be able to complete in a finite amount of time, or you would lose the ability to monitor the process and know whether the task is working correctly. For example, see the visibility timeout limitation in AWS SQS, which is currently 12 hours.

iccicci commented 1 year ago

Thank you for your reply @timgit, but sorry, you are only expressing your opinion, not actually answering the questions.

My opinion is: why not give the user as much freedom as possible?

But let's set opinions aside and focus on the facts.

The main fact I reported is: with an expireInHours of one year, and with the job successfully completing in a few seconds (which is much less than a year), the job's status is retry with the reason handler execution exceeded ${expireInHours}. I would say there is some problem.

So my main question is: is the problem in the send call (if so, please let me know), or is there something to review in pg-boss?

If the problem is that an expireInHours of one year is higher than the maximum expiration period allowed by design, it would be a good idea to add the maximum allowed expiration time to the documentation, or, even better, to throw an exception immediately when the user provides an expiration period that is not allowed.
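As an illustration, a check along these lines could fail fast at send time (a hypothetical application-side guard, not part of pg-boss's API, assuming the effective limit turns out to be Node's maximum timer delay of 2147483647 ms):

    // Hypothetical guard, not part of pg-boss: reject expireInHours values
    // whose millisecond equivalent exceeds Node's maximum timer delay.
    const MAX_TIMER_MS = 2147483647; // 2^31 - 1, roughly 24.8 days

    function assertValidExpireInHours(hours) {
      const ms = hours * 60 * 60 * 1000;
      if (ms > MAX_TIMER_MS) {
        throw new RangeError(
          `expireInHours=${hours} exceeds the maximum supported expiration of ${MAX_TIMER_MS} ms`
        );
      }
    }

    assertValidExpireInHours(24 * 365); // throws RangeError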

Thank you

timgit commented 1 year ago

I didn't realize you were reporting a bug. Ok, my best guess is that this exceeds a maximum integer value, since hours are converted into ms internally for the promise race.
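For what it's worth, here is a minimal sketch of that guess (not pg-boss's actual code; resolveWithin below is a hypothetical stand-in for a "resolve within N ms" helper). Node's setTimeout clamps any delay above 2147483647 ms (2^31 - 1, about 24.8 days) to 1 ms and emits a TimeoutOverflowWarning, so the timeout branch of the race rejects almost immediately, even though the handler itself finishes quickly:

    // Hypothetical sketch, not pg-boss's implementation.
    // 24 * 365 hours in ms is 31536000000, which exceeds 2^31 - 1 (2147483647).
    const EXPIRE_MS = 24 * 365 * 60 * 60 * 1000;

    // Race a handler promise against a timeout.
    function resolveWithin(promise, ms) {
      const timeout = new Promise((_, reject) =>
        setTimeout(() => reject(new Error(`handler execution exceeded ${ms}ms`)), ms)
      );
      return Promise.race([promise, timeout]);
    }

    // A handler that succeeds after 2 seconds.
    const handler = new Promise(resolve => setTimeout(() => resolve('done'), 2000));

    resolveWithin(handler, EXPIRE_MS)
      .then(console.log)                         // never reached: the clamped timeout fires first
      .catch(err => console.error(err.message)); // "handler execution exceeded 31536000000ms"

If that is indeed the cause, any expireInHours above roughly 596 hours (the point where hours * 3600000 exceeds 2147483647) would trip the same clamp.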