Hey Lukas,

Thanks for this awesome lib, first of all. We've been happily using it in production for years without any issues. However, we recently hit a wall with one use case.

One of our team members created a scheduled job and used ShedLock to lock it. Since he couldn't predict the actual running time, he set the lockAtMostFor attribute to a value that was far too large. When the health check failed and Kubernetes restarted the pod mid-run, we ended up with a job that stayed locked for more than 12 hours and lots of data left to process.
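For context, the setup looked roughly like this (the job name, cron expression, and durations here are illustrative, not our real values):

@Scheduled(cron = "0 0 * * * *")
@SchedulerLock(name = "processSeasonalData", lockAtMostFor = "PT12H")
public void processSeasonalData() {
    // long-running batch work; if the pod dies in here, the lock is not
    // released until lockAtMostFor elapses
}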
Up until now, we honestly hadn't had any job with an unpredictable running time. After some quick digging, I saw that this awesome library has a feature called KeepAliveLockProvider.
So, yeah, we immediately implemented it. It's mind-blowing how easy it is to set up, to be fair 🤯, and it works perfectly.
At this point, my question is: what should the executor thread count be for this? Is a single-threaded scheduled executor enough?
We have more than 60 jobs running in parallel.
Should the thread count be equal to the number of jobs?
Cheers.
@Bean(name = "seasonal-lock-provider")
public LockProvider lockProvider(DataSource dataSource) {
    // Wrap the JDBC provider so the lock is periodically extended
    // while the task is still running
    return new KeepAliveLockProvider(
            getJdbcTemplateLockProvider(dataSource),
            Executors.newSingleThreadScheduledExecutor());
}

private JdbcTemplateLockProvider getJdbcTemplateLockProvider(DataSource dataSource) {
    return new JdbcTemplateLockProvider(
            JdbcTemplateLockProvider.Configuration.builder()
                    .withJdbcTemplate(new JdbcTemplate(dataSource))
                    .withTableName(DEFAULT_TABLE_NAME)
                    .withDbUpperCase(true)
                    .usingDbTime()
                    .build()
    );
}
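In case it helps frame the question: we're considering swapping in a small thread pool instead of the single-threaded executor. A minimal sketch of what we have in mind (the pool size of 4 is just a guess on our side, not something from the docs):

@Bean(name = "seasonal-lock-provider")
public LockProvider lockProvider(DataSource dataSource) {
    // Hypothetical variant: a small pool instead of a single thread, in case
    // one thread cannot keep up with extending ~60 concurrently held locks
    ScheduledExecutorService keepAliveExecutor = Executors.newScheduledThreadPool(4);
    return new KeepAliveLockProvider(getJdbcTemplateLockProvider(dataSource), keepAliveExecutor);
}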