Open james-johnston-thumbtack opened 2 years ago
hello @james-johnston-thumbtack , thank you for the report. Basically, isAlive was designed to make sure the "core" of the scheduler is working properly. I am not sure it is actually a good idea; it is perhaps too complicated. Most of the time people just add a "dummy" endpoint on the HTTP server, but then the endpoint checks nothing, so that is not good either. I think we can just add a random suffix to the ID, so we will be able to identify these "probe" schedules.
To be honest, I just changed my AWS configuration to use the `/info` endpoint as a health check instead, to work around this problem and unblock myself.
Yeah, I agree; adding a suffix to the ID could fix the existing implementation (an auto-incrementing suffix would avoid collisions). A couple of things to think about: whether `livenessChan` needs a larger buffer, and whether `livenessChan` gets drained. If the "timeout for isalive probe from liveness channel" error fires but the scheduler still processes the probe afterwards, the abandoned event sits in `livenessChan` and could leak memory.
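Something along these lines is what I have in mind; a rough sketch only, not the scheduler's actual code, with a hypothetical `nextProbeID` helper and the channel simplified to carry plain string IDs:

```go
// Rough sketch of the "unique probe ID" + "drain the channel" ideas above.
// Not the scheduler's actual code: nextProbeID is hypothetical and the
// channel is simplified to carry plain string IDs.
package liveness

import (
	"fmt"
	"sync/atomic"
	"time"
)

var probeCounter uint64

// nextProbeID builds a per-call ID such as "is-alive-42" so that two
// concurrent liveness probes no longer collide on the same timer ID.
func nextProbeID() string {
	return fmt.Sprintf("is-alive-%d", atomic.AddUint64(&probeCounter, 1))
}

// waitForProbe waits for the event matching id, discarding stale events
// from earlier timed-out probes so that livenessChan never fills up.
func waitForProbe(livenessChan <-chan string, id string, timeout time.Duration) error {
	deadline := time.After(timeout)
	for {
		select {
		case got := <-livenessChan:
			if got == id {
				return nil // our probe came back: the scheduler core is alive
			}
			// stale event from an abandoned probe: drop it and keep waiting
		case <-deadline:
			return fmt.Errorf("timeout for isalive probe from liveness channel")
		}
	}
}
```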
But I also think good Prometheus metrics are an equally good, if not better, measure of health; users can set up alerts on those. That is actually my preference, and a simple response indicating "200 OK" is good enough for the health-check route itself.
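For illustration, a "dummy" endpoint of the kind mentioned earlier, returning a plain 200 OK and leaving deeper signals to metrics, could be as small as this (not the project's code; the path and port are placeholders):

```go
// Minimal sketch of a "dummy" health endpoint that just returns 200 OK.
// Not the project's code; the /healthz path and port 8080 are placeholders.
package main

import "net/http"

func main() {
	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
		_, _ = w.Write([]byte("OK"))
	})
	_ = http.ListenAndServe(":8080", nil)
}
```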
If two clients concurrently call the `/liveness` route on the REST API, one of them will time out. This is easy to reproduce from the command line. Note that I use a `&` after the first `curl` command so that it runs asynchronously alongside the second `curl` command (`localhost:17303` is the REST API of the kafka scheduler for me). The first one completes successfully, as expected, but the second one times out, and the server logs show a line like "timeout for isalive probe from liveness channel".
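For reference, the two simultaneous requests described above can also be reproduced programmatically; a minimal Go sketch (the port 17303 and the `/liveness` path come from the report, the client timeout and everything else are illustrative):

```go
// Sketch of the reproduction described above: two concurrent GETs to the
// /liveness route; with the hard-coded probe ID, one of them times out.
// Port 17303 is from the report; the timeout value is illustrative.
package main

import (
	"fmt"
	"net/http"
	"sync"
	"time"
)

func main() {
	client := &http.Client{Timeout: 15 * time.Second}
	var wg sync.WaitGroup
	for i := 0; i < 2; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			resp, err := client.Get("http://localhost:17303/liveness")
			if err != nil {
				fmt.Printf("request %d failed: %v\n", n, err)
				return
			}
			defer resp.Body.Close()
			fmt.Printf("request %d: %s\n", n, resp.Status)
		}(i)
	}
	wg.Wait()
}
```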
This presents a problem when multiple components in a distributed system check the health simultaneously. For example, the EC2 target health checks documentation points out that "Health checks for a Network Load Balancer are distributed and use a consensus mechanism to determine target health. Therefore, targets receive more than the configured number of health checks."
As best I can tell, the issue is that the `IsAlive` function fakes a scheduled message from Kafka at https://github.com/etf1/kafka-message-scheduler/blob/9b44c3c8668f117c243678f5974eabf03fd6d174/scheduler/scheduler.go#L207, which has a hard-coded ID of `||is-alive||`: https://github.com/etf1/kafka-message-scheduler/blob/9b44c3c8668f117c243678f5974eabf03fd6d174/scheduler/helper.go#L41

Two concurrent calls to `IsAlive` result in two timers with the same ID being added, but the timer code deduplicates timers by ID: https://github.com/etf1/kafka-message-scheduler/blob/9b44c3c8668f117c243678f5974eabf03fd6d174/internal/timers/timers.go#L61-L62

Since the ID is hard-coded, the second `IsAlive` call stomps over the first one in the timers map, so only one timer event is ever delivered on `livenessChan`.
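To make the failure mode concrete, here is a stripped-down sketch of that dedup-by-ID behavior (this is not the project's timers implementation; the types, methods, and channel shape are simplified assumptions):

```go
// Stripped-down illustration of the dedup-by-ID behavior described above.
// This is NOT the project's timers code; types and names are simplified.
package main

import "fmt"

type timers struct {
	byID map[string]chan struct{} // at most one pending timer per ID
}

// add registers a timer event channel for id. A second add with the same ID
// replaces the first entry, so the first caller's channel is never signaled.
func (t *timers) add(id string) chan struct{} {
	ch := make(chan struct{}, 1)
	t.byID[id] = ch
	return ch
}

// fire delivers the timer event for id to whichever channel is currently
// registered, mimicking the single event that comes back on livenessChan.
func (t *timers) fire(id string) {
	if ch, ok := t.byID[id]; ok {
		ch <- struct{}{}
	}
}

func main() {
	t := &timers{byID: make(map[string]chan struct{})}

	first := t.add("||is-alive||")  // first concurrent IsAlive probe
	second := t.add("||is-alive||") // second probe stomps over the first

	t.fire("||is-alive||") // only one event is ever produced for this ID

	fmt.Println("second probe got its event:", len(second) == 1) // true
	fmt.Println("first probe got its event:", len(first) == 1)   // false: it would time out
}
```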