KrishnaPG closed this issue 9 years ago.
The existing `q.pause()` functionality is intended to stop the queue from obtaining new work from the server, not to stop existing workers from continuing to work on jobs that they have already obtained.

That is interesting functionality, but not what `q.pause()` was designed to do. (In any case, the worker needs to be designed to make this possible... How would you stop the loop in your worker code above?)
If you want to pause running jobs based on calls to `q.pause()`, you can check the queue status with its `q.paused` attribute. Do not modify this value; it is read-only, and your worker code will need to periodically check it while running, because there are no "reactive" variables in pure node.js.
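A minimal sketch of that polling pattern, assuming a node.js worker created with `Job.processJobs()` (the queue root `myJobQueue`, the job type `longTask`, the step count, and the one-second poll interval are all made up for illustration):

```javascript
var Job = require('meteor-job'); // DDP connection/auth set up beforehand, as in the node worker example

var q = Job.processJobs('myJobQueue', 'longTask', function (job, callback) {
  var step = 0;
  var totalSteps = 100;

  function doNextStep() {
    if (q.paused) {
      // The local queue has been paused; wait and re-check before doing more work.
      return setTimeout(doNextStep, 1000);
    }
    // ... perform one unit of work here ...
    step += 1;
    if (step < totalSteps) {
      setImmediate(doNextStep);
    } else {
      job.done(function () {
        callback(); // tell the queue this worker is free to take another job
      });
    }
  }

  doNextStep();
});
```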
If you want to remotely pause a running worker, you'll need to implement that yourself somehow. The best way will depend on your app's requirements. job-collection is completely agnostic to how workers are implemented, provisioned, configured, and managed. There are many possible ways to do this, and every environment will be different and have different requirements. Job collection is not a cloud resource or cluster manager, but it is complementary to such existing solutions.
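For example (purely illustrative, not part of job-collection): the worker process could expose a tiny HTTP control endpoint that pauses/resumes its local queue and flips a flag that long-running job code polls between units of work. The port and routes here are invented:

```javascript
var http = require('http');

var remotelyPaused = false; // long-running job code should poll this between units of work

http.createServer(function (req, res) {
  if (req.url === '/pause') {
    q.pause();              // stop obtaining new work from the server
    remotelyPaused = true;  // signal in-flight jobs to hold off
  } else if (req.url === '/resume') {
    q.resume();
    remotelyPaused = false;
  }
  res.end(JSON.stringify({ paused: remotelyPaused }));
}).listen(8080); // arbitrary control port
```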
> The existing `q.pause()` functionality is intended to stop the queue from obtaining new work from the server, not to stop existing workers from continuing to work on jobs that they have already obtained.
Thanks @vsivsi, that puts things in perspective.
As for your comment that "Job collection is not a cloud resource or cluster manager, but it is complementary to such existing solutions": this package comes closer than most others out there (even from the non-Meteor world) to having the features that a good worker/scheduler is supposed to have.
Persistence, fault tolerance, progress indication, and distributed workers are the key factors for any successful one, and this package has them.
Metrics and polyglot support are the only two other major features this one is missing. With those, it could start rivaling the big ones out there (e.g. Celery). The current 'client stats' is OK as far as metrics go, but a full dashboard capability that monitors and measures the complete throughput of the system is not far off if we can achieve polyglot support (which would let us reuse Graphite, Grafana, and so on).
If distributed workers can start supporting bridged servers/workers, that will pave the way for polyglot clients (without the need for DDP), which in turn will make the metrics much easier. For example, submitting jobs through REST and accepting work through MQTT/ZMQ would be a good starting point.
I'm not sure what your future plans for this package are (you may be too occupied and busy to take it any further), but I would like to thank you very much for all the hard work you have put into this so far and for making it available for everyone to use.
In case you are open to considering further expansion, such as exposing the queue to workers through MQTT/ZMQ (via node.js DDP), I would be able to help. It would open up this package to the IoT (Internet of Things) world.
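To make the idea concrete, a bridge of the kind being proposed might look roughly like the sketch below (this is not an existing feature; the broker URL, topic names, and the `iotTask` job type are invented). A node.js process obtains jobs over DDP with `meteor-job` and hands them to non-DDP workers over MQTT, then completes or fails each job when a result message comes back:

```javascript
var Job  = require('meteor-job'); // DDP connection set up beforehand via Job.setDDP()
var mqtt = require('mqtt');       // any MQTT broker, e.g. Mosquitto

var client  = mqtt.connect('mqtt://localhost:1883');
var pending = {};                 // job _id -> { job, callback } awaiting a result

client.on('connect', function () {
  client.subscribe('jobs/results');
});

// Instead of running the work locally, publish each job's data to the broker.
var q = Job.processJobs('myJobQueue', 'iotTask', function (job, callback) {
  pending[job.doc._id] = { job: job, callback: callback };
  client.publish('jobs/requests', JSON.stringify({ id: job.doc._id, data: job.data }));
});

// Polyglot workers publish { id: ..., error: ... } to jobs/results when finished.
client.on('message', function (topic, message) {
  var result = JSON.parse(message.toString());
  var entry = pending[result.id];
  if (!entry) return;
  delete pending[result.id];
  if (result.error) {
    entry.job.fail(result.error, function () { entry.callback(); });
  } else {
    entry.job.done(function () { entry.callback(); });
  }
});
```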
I'm glad you see the potential in job-collection. I will be pretty busy with other things until the end of the year, but I will always consider PRs that extend the system in useful and logically consistent ways. One of my goals for job-collection was for it to be relatively simple and "Meteor-like" in its operation, something that all of the other systems I encountered were lacking. So there is likely to be some tension in any future expansion of features between making it more capable and keeping it simple to use and understand for apps that (at least initially) have simpler requirements.
I'm trying to follow your node.js worker example. In the example, there is `obs.changed`, which gets called whenever a job's state changes to, say, `paused`. Beyond this point, it is not clear how to make the actual worker function know about this pause state. While that worker function is being executed, how can it keep track of the job's pause state?
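One possible way to wire those together (a sketch only, assuming the `ddp` package's `observe()` API used in the worker example, that the observed collection is the job collection, and a worker that does its work in small chunks): have `obs.changed` record each job's latest status in a shared map, and have the worker function check that map between chunks.

```javascript
var jobStatus = {}; // job _id -> last status seen over DDP

var obs = ddp.observe('myJobQueue.jobs'); // collection name assumed from the worker example
obs.changed = function (id, oldFields, clearedFields, newFields) {
  if (newFields && newFields.status) {
    jobStatus[id] = newFields.status;
  }
};

var q = Job.processJobs('myJobQueue', 'longTask', function (job, callback) {
  var id = job.doc._id;

  function workChunk() {
    if (jobStatus[id] === 'paused') {
      // The server has marked this job as paused; wait and re-check before continuing.
      return setTimeout(workChunk, 1000);
    }
    // ... do the next piece of work, then either schedule another chunk
    // with setImmediate(workChunk) or finish with job.done(callback) ...
  }

  workChunk();
});
```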