glensc opened 4 years ago
php-resque seems to implement its own status handler, stored under a redis key:
so, should this be created as its own plugin for node-resque that updates the $id:status key in redis in the beforePerform/afterPerform methods?
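Something along these lines might work. This is only a rough sketch: the class name, the assumption that the job id travels as the first job argument, and the ${id}:status key format (mirroring php-resque) are all made up here, and an in-memory Map stands in for the real (async) redis client at this.queueObject.connection.redis.

```javascript
// Sketch of a status-tracking plugin in the spirit of php-resque's status
// handler. The store is an in-memory Map standing in for redis; a real
// node-resque plugin would write through this.queueObject.connection.redis.
class JobStatusPlugin {
  constructor(store) {
    this.store = store; // Map acting as a stand-in for redis
  }

  // Assumed convention: the job id travels as the first job argument.
  jobId(args) {
    return args[0];
  }

  beforePerform(args) {
    // Mirror php-resque: mark the job as running under "<id>:status".
    this.store.set(`${this.jobId(args)}:status`, "running");
    return true; // returning true tells the worker to keep going
  }

  afterPerform(args) {
    this.store.set(`${this.jobId(args)}:status`, "completed");
    return true;
  }
}

const store = new Map();
const plugin = new JobStatusPlugin(store);
plugin.beforePerform(["job-42"]);
const statusWhileRunning = store.get("job-42:status"); // "running"
plugin.afterPerform(["job-42"]);
const statusAfter = store.get("job-42:status"); // "completed"
```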
this thread suggests doing CheckOnCPUIntensivTask polling, but gives no actual example:
is it intended that each job handles its own status itself in some external storage?
Although, if writing a plugin, I would rather attach it to the worker via the .on calls.
It seems Ruby also has a status object? EDIT: it's an extra plugin in the Ruby world:
Either way, all the integration points are sync methods, but this.queueObject.connection.redis (in a job plugin) is async, so async methods can't be called from beforePerform...
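One possible workaround for the sync-hook/async-redis mismatch is to fire the redis write without awaiting it, handling failures on the promise itself. A sketch of that pattern, with a hypothetical writeStatus helper and a Map standing in for the async redis client:

```javascript
// A sync plugin hook cannot await, but it can still *start* an async write
// ("fire and forget") as long as failures are handled on the promise itself.
const pendingWrites = [];

function writeStatus(key, value, fakeRedis) {
  // Stand-in for an async redis SET; resolves on a later microtask.
  return Promise.resolve().then(() => fakeRedis.set(key, value));
}

function beforePerformSync(jobId, fakeRedis) {
  // Kick off the write without awaiting; attach a catch so a redis
  // failure cannot become an unhandled rejection.
  const p = writeStatus(`${jobId}:status`, "running", fakeRedis).catch((err) =>
    console.error("status write failed:", err)
  );
  pendingWrites.push(p); // tracked here only so callers could flush them
  return true; // the hook itself stays synchronous
}

const fakeRedis = new Map();
beforePerformSync("job-7", fakeRedis);
const visibleImmediately = fakeRedis.has("job-7:status"); // false: write is async
```

The trade-off is that the status write is no longer guaranteed to land before the job starts, which may or may not be acceptable depending on how the status is consumed.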
There isn't any job status tracking implemented in node-resque per se. It would be a great addition! The ruby plugin (https://github.com/quirkey/resque-status) you found has a pleasant API that could be implemented, although something more basic would also be helpful. At the minimum, perhaps we assign a unique UUID to each task that we can look up.
What is available at the moment is the workingOn method(s), which will show you what the workers are doing. If you can identify a specific job by its arguments, you can see that the job is in progress and how long it has been running.
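A small helper for that identify-by-arguments lookup could look like the sketch below. The snapshot shape used here (a map of worker name to either a string for idle workers or an object with queue, payload, and run_at) is an assumption; check it against what workingOn actually returns.

```javascript
// Find which worker (if any) is running a job with the given class and args.
// The payload shape ({ queue, payload: { class, args }, run_at }) is an
// assumption about what the workingOn data looks like.
function findRunningJob(workingOn, jobClass, args) {
  const wanted = JSON.stringify(args);
  for (const [workerName, data] of Object.entries(workingOn)) {
    if (!data || typeof data === "string") continue; // idle workers
    const { payload } = data;
    if (payload.class === jobClass && JSON.stringify(payload.args) === wanted) {
      return { workerName, startedAt: data.run_at };
    }
  }
  return null; // no worker is running a matching job
}

const snapshot = {
  "worker-1": {
    queue: "math",
    payload: { class: "add", args: [1, 2] },
    run_at: "2020-04-01T00:00:00Z",
  },
  "worker-2": "started", // idle, no current job
};
const hit = findRunningJob(snapshot, "add", [1, 2]);
const miss = findRunningJob(snapshot, "add", [3, 4]);
```

Note that matching on stringified args only works if the arguments uniquely identify the job, which is exactly the limitation the UUID idea above is meant to fix.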
here's a plugin I created to work with the php-resque status class:
it's not perfect, but does something similar
@evantahler I read the resque-status plugin description now. I don't like that it requires the jobs themselves to change. I'd implement the status details on the worker side, so the jobs remain unmodified. This allows the flexibility to enable or disable the status integration without modifying the jobs.
Conveyor MQ has a feature for being notified once a task is complete/finished, using onTaskComplete:

```javascript
const task = await manager.enqueueTask({ data: { x: 1, y: 2 } });
await manager.onTaskComplete(task.id);
console.log('Task has completed!');
```
Nice reference @jasrusable! We were talking about this a little above in https://github.com/actionhero/node-resque/issues/334#issuecomment-607370326. I think the way to get this working in node-resque, and still stay more-or-less compatible with the other resque packages, would be:

1) Add an extra argument to every enqueued job that contains {__jobId: UUID}. The enqueue, enqueueAt, and enqueueIn commands would need to return that ID:

```javascript
const jobId = await queue.enqueue("math", "add", [1, 2]);
```
2) Decide if we want a polling system or a broadcast system to communicate worker/job status. I'd vote for a polling system due to the resiliency argument, but I think a pub/sub system would also be interesting to consider.

3) Use worker middleware to fire events & store data about the job's life-cycle (started, complete, and error at minimum).
The await pattern in Conveyor MQ is interesting - do you really want to block execution (await) if the job isn't done yet? I'd prefer a lookup-by-jobId API like:

```javascript
const status = await queue.status(jobId); // status = ['complete', 'in-progress', 'enqueued', 'error']
```
Does anyone on this thread want to tackle this feature? We can discuss on the Actionhero slack channel @ http://slack.actionherojs.com/
Also of note: there will be some interesting side effects on the queue.delDelayed methods and the related enqueueAt/In commands. The notion of a unique job at the moment assumes that all the arguments of a job can be stringified. If there's an unknown/random jobUUID as part of every job, the semantics of finding and deleting jobs by their args will need to change.
So, the typical use case is to offload a heavy job to resque.
In my application, I then need to poll until the job is finished, and the current documentation does not provide an example of that.
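For the record, a generic poll-until-finished loop against a hypothetical status lookup might look like the sketch below. The terminal status names, the interval, and the timeout are all arbitrary choices here; getStatus could wrap whatever lookup ends up existing (a redis GET, queue.status(jobId), etc.).

```javascript
// Poll a status function until the job reaches a terminal state or the
// timeout elapses. `getStatus` is any function returning the current status.
async function waitForJob(getStatus, { intervalMs = 500, timeoutMs = 30000 } = {}) {
  const deadline = Date.now() + timeoutMs;
  while (true) {
    const status = await getStatus();
    if (status === "complete" || status === "error") return status;
    if (Date.now() >= deadline) throw new Error("timed out waiting for job");
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}

// Demo with a fake status source that completes on the third poll.
let polls = 0;
const fakeStatus = () => (++polls >= 3 ? "complete" : "in-progress");

let result;
const done = waitForJob(fakeStatus, { intervalMs: 1, timeoutMs: 1000 }).then(
  (status) => { result = status; }
);
```

In a real application the interval should be generous (heavy jobs run for a while anyway), and the timeout guards against polling forever if a worker crashes without recording a terminal status.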