Closed SwampFalc closed 3 years ago
Apologies for the long delay responding.
I'm trying to understand what the use case is here so that I can understand the problem. Is it that you're trying to parse logs and are not able to find the metrics you're looking for?
If that's the case, then what I think you want is to return UNKNOWN from your check, which you can do by raising an exception. I think this may not be clearly described in the documentation, but if you read closely you'll see that the `guarded()` decorator will cause the check script to return UNKNOWN if any uncaught exception is raised.
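To illustrate the behaviour being described, here is a minimal plain-Python sketch of what `guarded()` does conceptually (this is an illustrative stand-in, not the library's actual implementation; the function names are made up):

```python
# Nagios exit codes: 0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN.
UNKNOWN = 3

def guarded_sketch(func):
    """Turn any uncaught exception into an UNKNOWN (3) exit status.

    Hypothetical simplification of the guarded() decorator's contract.
    """
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception as exc:
            print(f"UNKNOWN: {exc}")
            return UNKNOWN
    return wrapper

@guarded_sketch
def check_logfile():
    # If the log cannot be parsed, raising is enough to signal UNKNOWN;
    # no explicit status handling is needed in the check body.
    raise ValueError("metric 'latency' not found in logfile")

status = check_logfile()  # prints "UNKNOWN: ..." and returns 3
```

The point is simply that an uncaught exception anywhere inside the decorated check translates into the UNKNOWN state rather than a crash.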
Does that solve your problem?
I'm on holiday, I'll try to remember to get back to this in two weeks.
Since the project has been on the backburner for a few months now, I don't think I can usefully continue this discussion right now. I do vaguely remember finding out about the UNKNOWN return, but something about that didn't quite work out 100%... But I cannot remember exactly what. So feel free to close this, if you wish. I'll come back if/when the project goes forward again.
Allow me to first explain a bit about the work I'm doing. I'm trying to build a Nagios check based on the contents of a logfile from a different program I wrote. Now, I'm maybe being way too defensive, but I'm trying not to assume anything about the correct format of said log.
This of course means that the `Resource` I built can yield metrics that represent a broken/unknown state. So far I have done this by yielding `Metric`s whose value is `None`, but, for example, a `ScalarContext` chokes on that because of course `None` cannot be compared to the ranges.

So I was pondering writing my own `ScalarContext`, and whether or not `Metric`s that are `None` even make sense, when I had the idea that I could just not yield those `Metric`s. After all, you can provide as many `Context`s as you want to a `Check`; only those that get matched up to a `Metric` will become `Result`s.

But as I was continuing along that train of thought, it struck me that this would lead to an issue. Missing `Metric`s do not lead to `Result`s, so when `Check` tries to determine its state, there is no way to have a missing `Metric` lead to a critical state, and there's nothing you can do in a `Summary` to fix it. Yes, I could subclass `Check`, but at some point I start feeling like I'm rewriting the entire library... Which to some degree I wouldn't mind doing, but at that point I would probably turn it into a pull request. But of course, if you as the maintainer disagree with the choices made, that would be a lot of wasted effort...
And so we reach the discussion of whether there should be an "official" stance on the proper practice in this case.
Option 1 - `Metric`s whose value is `None`. If this is chosen as best practice, I would recommend changing `ScalarContext` to take this into account, so that people have a proper working example of the best practice.

Option 2 - Do not yield missing metrics. This would then require support for this chosen path in `Check`, so that a missing metric can properly lead to a `Critical` state. If this is not chosen, perhaps `Check` should still be adapted so that each `Context` has to be paired with a `Metric` (in addition to vice versa), to more clearly dissuade this approach.

Option 3 - Inspired by #3, actually. Yield a `Metric` with a different name, and have `Context`s with both possible names ready to pair up and give a `Result`. This approach could do with some support, probably in `Check`.

Like I said, I'm willing to put my money where my mouth is and help code, but we should first find a consensus on what the best practice is in this case.
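For clarity, here is a tiny sketch of how Option 3 could look in spirit: the resource yields the metric under an alternate name when it is missing, and a context is registered for each name (all names and the dict-based pairing below are illustrative inventions, not the library's API):

```python
def resource_probe(log_has_latency):
    """Yield (name, value) pairs; use a sentinel name when data is absent."""
    if log_has_latency:
        yield ("latency", 42)          # normal metric from the log
    else:
        yield ("latency_missing", 1)   # sentinel metric under a different name

# One context per possible metric name; the sentinel always maps to CRITICAL.
contexts = {
    "latency": lambda v: "OK" if v < 100 else "CRITICAL",
    "latency_missing": lambda v: "CRITICAL",
}

def run_check(log_has_latency):
    # Pair each yielded metric with its context by name, like Check does.
    return [contexts[name](value)
            for name, value in resource_probe(log_has_latency)]

print(run_check(True))    # ['OK']
print(run_check(False))   # ['CRITICAL']
```

The design point is that the "missing" case still produces a `Result`, because a metric is always yielded; it just pairs with a different context.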