onedr0p opened this issue 7 months ago
Thank you for creating the issue.
Now that external endpoints have been implemented (#722, #724), this should probably be the next feature related to external endpoints to be implemented, as the two go hand in hand, especially for those using external endpoints to test connectivity: if there's no connectivity, Gatus' API won't be reachable, which means that Gatus wouldn't be able to trigger an alert without this feature.
The feature in question should allow the user to configure a duration within which an update is expected to be received.
Should that duration elapse with no new status update, a status should be created to indicate a failure to receive an update within the expected time frame.
This should in turn cause https://github.com/TwiN/gatus/blob/28339684bfd9bb0e13bed4409f88e64bff31f3b2/watchdog/alerting.go#L13-L22 to be called, which would then lead to handleAlertsToTrigger being called (due to the new result indicating the failure to receive an update having its Success field set to false), incrementing NumberOfFailuresInARow (https://github.com/TwiN/gatus/blob/28339684bfd9bb0e13bed4409f88e64bff31f3b2/watchdog/alerting.go#L24-L27) and triggering whichever alerts should be triggered.
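For illustration, here is a minimal, self-contained Go sketch of that flow. Every name in it (ExternalEndpoint, MaxStaleness, RecordUpdate, checkStaleness) is a hypothetical placeholder rather than Gatus' actual types or configuration keys; the point is simply that a periodic check turns a missing update into a synthetic failing result, which the existing alerting path would then treat like any other failure.

package main

import (
	"fmt"
	"sync"
	"time"
)

// Result mirrors the idea of a status entry whose Success field drives alerting.
type Result struct {
	Success   bool
	Error     string
	Timestamp time.Time
}

// ExternalEndpoint (hypothetical) tracks when the last status update was pushed for it.
type ExternalEndpoint struct {
	Name         string
	MaxStaleness time.Duration // duration within which an update is expected

	mu           sync.Mutex
	lastReceived time.Time
}

// RecordUpdate would be called whenever a status update is pushed to Gatus' API.
func (e *ExternalEndpoint) RecordUpdate(t time.Time) {
	e.mu.Lock()
	defer e.mu.Unlock()
	e.lastReceived = t
}

// checkStaleness returns a synthetic failing result if no update arrived in time;
// that result is what would flow through handleAlertsToTrigger and increment
// NumberOfFailuresInARow, exactly as a failing regular endpoint would.
func (e *ExternalEndpoint) checkStaleness(now time.Time) *Result {
	e.mu.Lock()
	defer e.mu.Unlock()
	if now.Sub(e.lastReceived) <= e.MaxStaleness {
		return nil // an update was received within the expected time frame
	}
	return &Result{
		Success:   false,
		Error:     fmt.Sprintf("no status update received in the last %s", e.MaxStaleness),
		Timestamp: now,
	}
}

func main() {
	ep := &ExternalEndpoint{Name: "my-external-endpoint", MaxStaleness: time.Hour}
	ep.RecordUpdate(time.Now().Add(-2 * time.Hour)) // last update arrived two hours ago

	// A background loop (e.g. ticking every minute) would run this check;
	// a single pass is enough to show the idea here.
	if r := ep.checkStaleness(time.Now()); r != nil {
		fmt.Printf("synthetic failure for %s: %s\n", ep.Name, r.Error)
	}
}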
The only proper name I can think of for this feature is "dead man's switch", but as silly as it may sound, I don't like how that'd look on the configuration:
external-endpoints:
  - name: ...
    dead-man-switch:
      blackout-duration-until-automatic-failure: 1h
    alerts:
      - type: slack
        send-on-resolved: true
Another consideration is the interaction between this feature and maintenance windows. While the maintenance period should prevent alerts from being triggered, should the failure status be pushed anyway? Perhaps this should be an additional parameter on the maintenance configuration (e.g. maintenance.silence-dead-man-switch)?
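To make that trade-off concrete, here is a tiny sketch, assuming a hypothetical silence-dead-man-switch parameter that does not exist today:

package main

import "fmt"

// shouldCreateSyntheticFailure is a hypothetical helper: the silenceDeadManSwitch
// argument mirrors the suggested maintenance.silence-dead-man-switch parameter.
func shouldCreateSyntheticFailure(underMaintenance, silenceDeadManSwitch bool) bool {
	// When maintenance explicitly silences the dead man's switch, no failure status
	// is pushed at all, so nothing can trigger an alert.
	if underMaintenance && silenceDeadManSwitch {
		return false
	}
	// Otherwise the failure status is still recorded; the maintenance window alone
	// only suppresses alerts, not the stored status.
	return true
}

func main() {
	fmt.Println(shouldCreateSyntheticFailure(true, true))  // false: status and alert both silenced
	fmt.Println(shouldCreateSyntheticFailure(true, false)) // true: status pushed, alert suppressed by maintenance
}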
Some food for thought.
The only proper name I can think of for this feature is "dead man's switch", but as silly as it may sound, I don't like how that'd look on the configuration:
I've seen other services call this heartbeat instead of dead man's switch, and they also have a grace period that is configurable.
external-endpoints:
  - name: ...
    heartbeat:
      interval: 5m
      grace-period: 5m
    alerts:
      - type: slack
        send-on-resolved: true
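Assuming the semantics implied above (heartbeat and its keys are only a suggestion, not an existing Gatus feature), an update would only count as missed once interval plus grace-period has elapsed since the last one; a minimal Go sketch of that check:

package main

import (
	"fmt"
	"time"
)

// heartbeatMissed is a hypothetical helper combining the suggested interval and
// grace-period values: the deadline is the last update plus both durations.
func heartbeatMissed(lastUpdate, now time.Time, interval, gracePeriod time.Duration) bool {
	deadline := lastUpdate.Add(interval + gracePeriod)
	return now.After(deadline)
}

func main() {
	last := time.Now().Add(-9 * time.Minute)
	fmt.Println(heartbeatMissed(last, time.Now(), 5*time.Minute, 5*time.Minute)) // false: still within the grace period
	fmt.Println(heartbeatMissed(last, time.Now(), 5*time.Minute, 3*time.Minute)) // true: grace period exceeded
}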
I would love to see a feature similar to this feature request, like the one implemented in healthchecks.io.
This would make it possible to provide a cron schedule and then monitor cron jobs, alerting on jobs that do not run or that take too long to complete.
An example configuration might look like this:
external-endpoints:
  - name: ...
    heartbeat:
      schedule: 0 0 15 * *
      grace-period: 2m
Originally posted by @r3mi in https://github.com/TwiN/gatus/issues/722#issuecomment-2041015420
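A rough sketch of how the cron-style variant above could be evaluated; the heartbeat.schedule and grace-period keys are taken from the suggestion and are not an existing Gatus feature, and the cron parsing simply reuses the robfig/cron library for illustration:

package main

import (
	"fmt"
	"time"

	"github.com/robfig/cron/v3"
)

// cronHeartbeatMissed is a hypothetical helper: a job is considered missed once the
// next scheduled run after the last received update, plus the grace period, has passed.
func cronHeartbeatMissed(spec string, lastUpdate, now time.Time, gracePeriod time.Duration) (bool, error) {
	schedule, err := cron.ParseStandard(spec)
	if err != nil {
		return false, err
	}
	expected := schedule.Next(lastUpdate) // the run that should have followed the last update
	return now.After(expected.Add(gracePeriod)), nil
}

func main() {
	// "0 0 15 * *" means midnight on the 15th of every month, as in the example above.
	lastUpdate := time.Now().AddDate(0, -2, 0) // pretend the job last reported two months ago
	missed, err := cronHeartbeatMissed("0 0 15 * *", lastUpdate, time.Now(), 2*time.Minute)
	if err != nil {
		panic(err)
	}
	fmt.Println("missed:", missed) // true: at least one scheduled run was skipped
}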