Open edwilde opened 3 months ago
Thanks for the report @edwilde. Keen to open a PR with the suggested change?
Yeah, I certainly will when I get a chance @michalkleiner. I had a quick look over and it seems the unit tests are a bit lacking in that area too, so it might be more than just a single-line fix.
Module version(s) affected
dev-master
Description
I have two queues running on a project: a normal queue with many tiny jobs that can break from time to time, and a large queue with important jobs that shouldn't break.
I added a task to automatically check the health of only the large queue, however I now get reports whenever any job is broken.
In the normal queue, I don't really care too much if the tiny jobs break - they'll be recreated at some point and should be fine. So I don't really need to be notified of this.
I would assume that when a queue is specified for the CheckJobHealthTask, only that queue would be checked for health. The current behaviour is to check the specified queue for jobs to be restarted, but to check all queues for broken jobs.

How to reproduce
vendor/bin/sake dev/tasks/CheckJobHealthTask queue=3
to run a health check on the large queue (3) only

Expected
The health check runs with no exception thrown
Actual
The health check throws an exception:
Exception: 1 jobs are unhealthy
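For context, this exception appears to be raised by the broken-jobs check in QueuedJobService::checkJobHealth(). A rough sketch of that check (paraphrased, so the exact code on dev-master may differ):

```php
// Inside QueuedJobService::checkJobHealth($queue) -- sketch, not verbatim.
// Note that $queue is not applied to this query, so broken jobs in ANY
// queue will trigger the exception.
$brokenJobs = QueuedJobDescriptor::get()
    ->filter('JobStatus', QueuedJob::STATUS_BROKEN);

if ($brokenJobs->count() > 0) {
    throw new Exception(sprintf('%d jobs are unhealthy', $brokenJobs->count()));
}
```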
😬

Possible Solution
I believe the fix should be fairly simple: add the extra queue restriction to the broken-jobs filter in QueuedJobService::checkJobHealth(), so that it filters on the queue like the other queries in that method do.
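A sketch of the suggested change (assuming the requested queue is available as $queue and that jobs record their queue in the JobType field, as the other filters in that method suggest):

```php
// Before (sketch): broken jobs are selected across all queues.
$brokenJobs = QueuedJobDescriptor::get()
    ->filter('JobStatus', QueuedJob::STATUS_BROKEN);

// After (sketch): restrict the check to the requested queue,
// matching the queue restriction used by the other queries.
$brokenJobs = QueuedJobDescriptor::get()
    ->filter([
        'JobStatus' => QueuedJob::STATUS_BROKEN,
        'JobType'   => $queue,
    ]);
```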
Additional Context
No response