Closed VictorioBerra closed 9 months ago
So you would have 1 .NET process with Coravel running. Once you decide what your "tick" is (e.g. 1 second, 1 minute, etc.), you create a Coravel invocable/job that is scheduled to run every "tick".
Let's say every minute (the "tick" we've decided on) the Coravel job is triggered: all it does is publish a message to a message broker - something maybe called SchedulerTicked.
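For concreteness, a minimal sketch of that tick job in Coravel might look like this (the `PublishTickInvocable` name and the `ITickPublisher` abstraction are hypothetical - the publish call would wrap whatever broker client you actually use):

```csharp
using System.Threading.Tasks;
using Coravel;
using Coravel.Invocable;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;

// Hypothetical abstraction over your broker client (RabbitMQ, Kafka, etc.).
public interface ITickPublisher
{
    Task PublishAsync(string messageType);
}

// The Coravel invocable: its only responsibility is announcing the tick.
public class PublishTickInvocable : IInvocable
{
    private readonly ITickPublisher _publisher;

    public PublishTickInvocable(ITickPublisher publisher) => _publisher = publisher;

    public Task Invoke() => _publisher.PublishAsync("SchedulerTicked");
}

public class Program
{
    public static void Main(string[] args)
    {
        var host = Host.CreateDefaultBuilder(args)
            .ConfigureServices(services =>
            {
                services.AddScheduler();
                services.AddTransient<PublishTickInvocable>();
                // services.AddSingleton<ITickPublisher, YourBrokerPublisher>();
            })
            .Build();

        host.Services.UseScheduler(scheduler =>
            scheduler.Schedule<PublishTickInvocable>().EveryMinute());

        host.Run();
    }
}
```

Note the invocable does no real work itself - it stays cheap, so the scheduler process stays responsive.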
Depending on your message broker technology (Kafka, Redis, RabbitMQ, Azure Service Bus, etc.), you configure a multicast / fan-out exchange (in RabbitMQ, for example) where SchedulerTicked
is sent to multiple queues. Each queue has 1 or more consumers and is probably related to a separate system or bounded context.
This way multiple systems can be informed of the scheduler tick, but be decoupled from the other systems.
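In RabbitMQ terms, that wiring is just a fanout exchange plus one bound queue per system. A rough sketch using the RabbitMQ.Client package (the exchange and queue names here are made up for illustration):

```csharp
using System.Text;
using RabbitMQ.Client;

var factory = new ConnectionFactory { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

// One fanout exchange: every bound queue receives a copy of each message.
channel.ExchangeDeclare(exchange: "scheduler-ticks", type: ExchangeType.Fanout);

// Each system / bounded context binds its own durable queue (names illustrative).
foreach (var queue in new[] { "billing.ticks", "reporting.ticks" })
{
    channel.QueueDeclare(queue, durable: true, exclusive: false, autoDelete: false);
    channel.QueueBind(queue: queue, exchange: "scheduler-ticks", routingKey: "");
}

// The scheduler process publishes the tick; fanout exchanges ignore the routing key.
channel.BasicPublish(
    exchange: "scheduler-ticks",
    routingKey: "",
    basicProperties: null,
    body: Encoding.UTF8.GetBytes("SchedulerTicked"));
```

Each team owns its queue and consumers, so new systems can subscribe to the tick without touching the scheduler at all.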
Does that explanation make sense?
@jamesmh Yes, but I am still confused on why they need to know about the tick at all, especially if it's a generic tick not unique to kicking off any particular job.
Slack's recent article about their own scheduling setup (very similar to the one I mention at the end of my article) explains a bit around this technique.
Essentially it's about decoupling scheduling logic from the actual business logic / real work. If the same process is doing scheduling and the real work, and your jobs are potentially CPU or memory intensive, then your job processing risks affecting the performance of your scheduler.
It also means that separate teams wouldn't necessarily have to invent their own scheduler - e.g. there's a platform already available for them to plug into via the async messages already coming from the scheduler. So there's some nice loose coupling going on there too.
You said:
I am still confused on why they need to know about the tick at all, especially if it's a generic tick not unique to kicking off any particular job.
The "tick" has to come from somewhere... this would be one example of where that tick could come from. There are other options available: OS cron, Windows OS jobs, etc. The same principle would apply though - scheduling logic is coming from a different process than the business logic.
Hello, I see you do not have GitHub Discussions enabled, so I am creating an issue to ask.
Regarding the blog post: https://www.jamesmichaelhickey.com/high-performance-dotnet-cron-jobs/?utm_source=newsletter.csharpdigest.net&utm_medium=newsletter&utm_campaign=high-performance-net-cron-jobs
How does this trigger consumer jobs? Would a consumer still need to manage the distributed lock?
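On the distributed-lock question: within a single queue, competing consumers each receive different messages, but if a job must run exactly once across multiple replicas of a consumer, that consumer would still typically guard the work itself. One common approach is a Redis-based lock via StackExchange.Redis (the lock key, expiry, and `RunNightlyReport` job below are all hypothetical):

```csharp
using System;
using StackExchange.Redis;

var redis = ConnectionMultiplexer.Connect("localhost");
var db = redis.GetDatabase();

// Illustrative lock key and token; the expiry guards against a crashed holder.
var lockKey = "locks:nightly-report";
var lockToken = Guid.NewGuid().ToString();

// LockTake is an atomic "SET if not exists, with expiry": only one instance wins.
if (db.LockTake(lockKey, lockToken, TimeSpan.FromMinutes(5)))
{
    try
    {
        RunNightlyReport(); // hypothetical work triggered by the SchedulerTicked message
    }
    finally
    {
        db.LockRelease(lockKey, lockToken);
    }
}
// Instances that fail to take the lock simply skip this tick.

static void RunNightlyReport() { /* real work here */ }
```

So the tick message triggers every interested consumer, but each consumer decides for itself whether it needs single-instance semantics for the work it kicks off.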