Closed gmcnaught closed 8 years ago
This should resolve #24 and #21
@gmcnaught so now we could use the same schedule from multiple processes, and only one will be activated? i.e. I want to have multiple hosts running the scheduler code (for redundancy/backup). Is there a suggested way of achieving this?
I was thinking of something like these from the celery world: https://github.com/TrackMaven/celery-once or https://github.com/hassa/BeatCop
Yes, it would be nice to have some confirmation of this behaviour.
I'll try to update once I've tested it. (It's crucial for my implementation.)
Yes, in my use case I specifically tested this across multiple servers connected to the same redis instance.
@gmcnaught and? kue-unique is working as expected? Can you share a code snippet showing how you tested it?
I've tested it and it's not working as I expected: I have this setup (see below), and once I activate all three, I see my task happening twice every 10 seconds.
How can I lock it so it won't happen twice? (and still keep cron1 and cron2 as backups of one another)
//cron1.js
var job = Queue
.createJob('upm-scanning', {to: 'any'})
.attempts(1)
.backoff({delay: 10000, type: 'fixed'})
.priority('normal');
//schedule it to run every 10 seconds
Queue.every('10 seconds', job);
//cron2.js
var job = Queue
.createJob('upm-scanning', {to: 'any'})
.attempts(1)
.backoff({delay: 10000, type: 'fixed'})
.priority('normal');
//schedule it to run every 10 seconds
Queue.every('10 seconds', job);
//worker.js
Queue.process('upm-scanning', upmScanningTask);
Your example code creates two jobs, so two jobs are being run. You can either follow a producer-consumer model and call createJob in only a single process, or add a unique constraint so that any worker attempting to create the same job will instead reuse the already-created one.
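For reference, a minimal sketch of the unique-constraint approach applied to the setup above, assuming kue-scheduler's `unique()` support (the unique key name here is illustrative):

```javascript
// Sketch only: requires kue-scheduler (and its kue/kue-unique dependencies)
// plus a running redis instance.
var kue = require('kue-scheduler');
var Queue = kue.createQueue();

var job = Queue
  .createJob('upm-scanning', {to: 'any'})
  .attempts(1)
  .backoff({delay: 10000, type: 'fixed'})
  .priority('normal')
  // Same unique key on every host, so only one scheduled job can exist:
  .unique('upm-scanning-every-10s');

// Safe to run on both cron1 and cron2; duplicate schedules
// resolve to the single existing unique job.
Queue.every('10 seconds', job);
```

With this in place, cron1 and cron2 can run the identical script and still produce only one execution per interval, which keeps the redundancy you wanted.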
unique did the trick, thanks. (For some reason it seemed not to work when I first tried it; now it triggers the job only once.)
Firstly: did a simple cleanup of the tests so they pass 90% of the time. There seems to be a situation with mocha where the connection to redis can be closed while an event fires, which causes test failures I did not resolve.
Secondly: added redlock around 'every' and _onJobExpiry so that only the single worker that wins the lock on the key-expiration event schedules the next run of the job. This may need improvement or modification for clustered redis scenarios, since redlock needs to know all of the redis servers it must reach quorum with to acquire a lock.
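To illustrate the locking pattern described above, here is a minimal sketch using the redlock npm package (v3/v4-style promise API; the lock resource name and the scheduling callback are illustrative, not the actual code in this PR):

```javascript
// Sketch only: assumes the `redis` and `redlock` npm packages
// and a running redis instance.
var redis = require('redis');
var Redlock = require('redlock');

var client = redis.createClient();
// retryCount: 0 means losers fail fast instead of waiting for the lock.
var redlock = new Redlock([client], { retryCount: 0 });

function scheduleNextRun(scheduleNext) {
  // Every worker receives the key-expiration event, but only the one
  // that acquires the lock schedules the next run.
  redlock.lock('locks:upm-scanning:schedule', 1000)
    .then(function (lock) {
      scheduleNext();
      return lock.unlock();
    })
    .catch(function () {
      // Another worker holds the lock; skip scheduling on this host.
    });
}
```

For a redis cluster, the Redlock constructor would need to be given clients for each independent redis node so it can acquire the lock on a majority of them, which is the quorum concern mentioned above.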