lausser / check_logfiles

A plugin (monitoring-plugin, not nagios-plugin, see also http://is.gd/PP1330) which scans logfiles for patterns.
https://omd.consol.de/docs/plugins/check_logfiles/
GNU General Public License v2.0

criticalthreshold not respected? #81

Open Napsty opened 2 months ago

Napsty commented 2 months ago

I saw a weird problem today where check_logfiles correctly identifies error patterns in a log file, and the config file sets multiple options, including criticalthreshold=10, yet the plugin reports a CRITICAL status even when the number of matching error lines is below that threshold.

Config file:

$seekfilesdir = '/var/tmp/check_logfiles';
# where the state information will be saved.

$protocolsdir = '/var/tmp/check_logfiles';
# where protocols with found patterns will be stored.

$scriptpath = '/usr/lib64/nagios/plugins';
# where scripts will be searched for.

@searches = (
  {
    tag => 'icinga2_client_handshake_errors',
    logfile => '/var/log/icinga2/icinga2.log',
    criticalpatterns => [
      'Client TLS handshake failed'
    ],
    options => 'noprotocol,nosticky,nosavethresholdcount,nosavestate,criticalthreshold=10,warningthreshold=5,maxage=15m',
  }
);
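As an aside, the `options` value above is a comma-separated mix of bare flags and `key=value` pairs. How such a string decomposes can be sketched as follows, in Python purely for illustration (the plugin itself is Perl, and `parse_options` is a hypothetical helper, not the plugin's actual parser):

```python
def parse_options(options: str) -> dict:
    """Split a comma-separated option string into bare flags and key=value pairs."""
    parsed = {}
    for item in options.split(','):
        if '=' in item:
            key, value = item.split('=', 1)
            parsed[key] = value
        else:
            parsed[item] = True  # bare flag such as 'noprotocol'
    return parsed

print(parse_options('nosticky,criticalthreshold=10,warningthreshold=5'))
# {'nosticky': True, 'criticalthreshold': '10', 'warningthreshold': '5'}
```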

Command line usage would be: '/usr/bin/sudo' '/usr/lib64/nagios/plugins/check_logfiles' '--config' '/etc/nagios/logfile_icinga.cfg' '--tag' 'icinga2_client_handshake_errors'.

The Icinga2 alert history shows that the status of this service check switches to CRITICAL after finding just a single error line within a run.

[screenshot: Icinga2 alert history showing the CRITICAL transitions]

To my understanding, this should only happen if 10 or more error lines were found in the run. Am I misunderstanding something, or potentially breaking things with one of the other options?

codeautopilot[bot] commented 2 months ago

Potential solution

The plan is to verify and correct the parsing and handling of the criticalthreshold option in the check_logfiles.pl script, and to ensure that the logic applying this threshold is correctly implemented in the Nagios::CheckLogfiles module. The criticalthreshold appears to be either parsed incorrectly or applied incorrectly, leading to a CRITICAL status being reported even when the number of error lines is below the threshold.

What is causing this bug?

The bug is likely caused by either incorrect parsing of the criticalthreshold option from the configuration file or flawed logic in the Nagios::CheckLogfiles module that handles this threshold. The criticalthreshold option should dictate the number of error lines required to trigger a CRITICAL status, but it seems that this threshold is not being respected, causing premature CRITICAL alerts.

Code

To address this issue, we need to:

  1. Verify that the criticalthreshold option is correctly parsed from the configuration file.
  2. Ensure that the criticalthreshold is correctly applied in the Nagios::CheckLogfiles module.
  3. Add debugging statements to trace the flow and application of the criticalthreshold logic.

Step 1: Verify Parsing

Ensure that the criticalthreshold option is correctly parsed and passed to the Nagios::CheckLogfiles object in check_logfiles.pl.

# check_logfiles.pl

# Add debugging statement after parsing command-line options
print STDERR "Parsed criticalthreshold: $commandline{criticalthreshold}\n" if exists $commandline{criticalthreshold};

# Ensure criticalthreshold is included in the options passed to Nagios::CheckLogfiles
if (my $cl = Nagios::CheckLogfiles->new({
    ...
    options => join(',', grep { $_ }
        ...
        $commandline{criticalthreshold} ? "criticalthreshold=".$commandline{criticalthreshold} : undef,
        ...
    ),
    ...
})) {
    ...
}

Step 2: Ensure Correct Application

Review and correct the logic within the Nagios::CheckLogfiles module to ensure that the criticalthreshold is correctly applied.

# Nagios/CheckLogfiles.pm

# Add debugging statement to trace the application of criticalthreshold
sub check_thresholds {
    my ($self, $count) = @_;

    print STDERR "Checking thresholds with count: $count and criticalthreshold: $self->{criticalthreshold}\n";

    if ($count >= $self->{criticalthreshold}) {
        return 'CRITICAL';
    } elsif ($count >= $self->{warningthreshold}) {
        return 'WARNING';
    } else {
        return 'OK';
    }
}

Step 3: Add Detailed Debugging Statements

Add more detailed debugging statements to trace the internal states and threshold counts more precisely.

# Nagios/CheckLogfiles.pm

# Add debugging statements around threshold checks
sub analyze_logfile {
    my ($self, $logfile) = @_;

    my $count = 0;
    while (my $line = <$logfile>) {
        if ($line =~ /$self->{criticalpattern}/) {
            $count++;
        }
    }

    print STDERR "Total critical pattern matches: $count\n";

    return $self->check_thresholds($count);
}

How to replicate the bug

To replicate the bug, follow these steps:

  1. Create a configuration file with the criticalthreshold option set to a specific value (e.g., 10).
  2. Run the check_logfiles plugin with the configuration file and a log file containing fewer error lines than the criticalthreshold.
  3. Observe that the plugin reports a CRITICAL status even though the number of error lines is below the threshold.

Example configuration file (logfile_icinga.cfg):

$seekfilesdir = '/var/tmp/check_logfiles';
$protocolsdir = '/var/tmp/check_logfiles';
$scriptpath = '/usr/lib64/nagios/plugins';

@searches = (
  {
    tag => 'icinga2_client_handshake_errors',
    logfile => '/var/log/icinga2/icinga2.log',
    criticalpatterns => [
      'Client TLS handshake failed'
    ],
    options => 'noprotocol,nosticky,nosavethresholdcount,nosavestate,criticalthreshold=10,warningthreshold=5,maxage=15m',
  }
);

Command to run the plugin:

'/usr/bin/sudo' '/usr/lib64/nagios/plugins/check_logfiles' '--config' '/etc/nagios/logfile_icinga.cfg' '--tag' 'icinga2_client_handshake_errors'

By following these steps, you should be able to observe the bug and verify that the solution correctly addresses the issue.


Files used for this task:

Changes on t/007threshold.t

# Analysis Report for `t/007threshold.t`

## Overview

The file `t/007threshold.t` is a test script written in Perl for the `check_logfiles` plugin. It tests various scenarios to ensure that the `criticalthreshold` and `warningthreshold` options are respected. The script uses the `Nagios::CheckLogfiles::Test` module to simulate log file entries and verify the plugin's behavior.

## Key Observations

1. **Test Initialization**: The script initializes a `Nagios::CheckLogfiles::Test` object with specific configurations, including `criticalpatterns`, `warningpatterns`, `criticalthreshold`, and `warningthreshold`.
2. **Test Cases**: The script contains multiple test cases that simulate different log file scenarios and verify the plugin's response. Each test case:
   - Resets the test environment.
   - Writes log entries to the simulated log file.
   - Runs the `check_logfiles` plugin.
   - Verifies the plugin's output against expected results.
3. **Threshold Handling**: The script specifically tests the handling of `criticalthreshold` and `warningthreshold` by:
   - Writing a specific number of log entries that match the critical and warning patterns.
   - Verifying that the plugin only reports a CRITICAL or WARNING status when the number of matching entries exceeds the respective thresholds.
4. **Options Handling**: The script also tests the behavior when the `nosavethresholdcount` option is set, ensuring that the threshold counts are not saved between runs.

## Potential Issues

1. **Test Coverage**: The script appears to cover various scenarios for `criticalthreshold` and `warningthreshold`. However, it may not cover all edge cases, such as:
   - Rapid successive log entries that might affect the threshold counting.
   - Interactions with other options that might influence threshold behavior.
2. **Debugging Information**: The script includes diagnostic messages (`diag`) to trace the execution flow and internal states. These messages can help identify issues but might need to be more detailed in some cases.

## Recommendations

1. **Add More Test Cases**: To ensure comprehensive coverage, consider adding test cases for:
   - Edge cases where log entries are added in rapid succession.
   - Different combinations of options that might affect threshold behavior.
2. **Enhance Debugging**: Add more detailed diagnostic messages to trace the internal states and threshold counts more precisely. This can help identify subtle issues that might not be immediately apparent.
3. **Review Threshold Logic**: Ensure that the logic for handling `criticalthreshold` and `warningthreshold` in the `check_logfiles` plugin is correctly implemented and aligns with the test cases.

## Conclusion

The `t/007threshold.t` file appears to be relevant for the reported bug regarding the `criticalthreshold` option. The test cases in this file should help identify whether the plugin correctly respects the `criticalthreshold` setting. Further enhancements to test coverage and debugging information can improve the reliability of these tests.
Changes on t/007thresholdok.t

# Analysis Report for `t/007thresholdok.t`

## Overview

The file `t/007thresholdok.t` is a Perl test script designed to validate the behavior of the `check_logfiles` plugin, particularly focusing on the handling of threshold counts for critical and warning patterns in log files. The script tests two main scenarios:

1. When the `nosavethresholdcount` option is set.
2. When the `savethresholdcount` option is set.

## Key Observations

1. **Test Cases for `nosavethresholdcount`**:
   - The script simulates log entries and checks if the plugin correctly resets the threshold count when an "okpattern" is found.
   - It verifies that the plugin does not trigger a CRITICAL status unless the number of critical patterns exceeds the `criticalthreshold`.
2. **Test Cases for `savethresholdcount`**:
   - The script ensures that the threshold count is saved across runs.
   - It checks if the plugin correctly accumulates the count of critical patterns over multiple runs and triggers a CRITICAL status when the accumulated count exceeds the `criticalthreshold`.

## Potential Issues

1. **Incorrect Handling of `criticalthreshold`**:
   - The test cases seem to be well-structured to validate the `criticalthreshold` behavior. However, the issue reported by the user suggests that the plugin might be triggering a CRITICAL status prematurely.
   - The test cases might not be covering all edge cases, or there might be a discrepancy between the test environment and the actual usage scenario.
2. **Debugging and Logging**:
   - The test script uses `diag` statements to log intermediate results and states. These logs are crucial for understanding the flow and identifying where the logic might be failing.
   - It might be beneficial to add more detailed logging around the threshold count checks to ensure that the counts are being correctly reset or accumulated as per the options set.

## Recommendations

1. **Review and Enhance Test Cases**:
   - Ensure that the test cases cover scenarios where the `criticalthreshold` is set but the plugin triggers a CRITICAL status with fewer error lines.
   - Add test cases that simulate the exact conditions described in the user's issue, including the specific log patterns and options used.
2. **Add Detailed Debugging Statements**:
   - Enhance the existing `diag` statements to provide more granular details about the threshold counts and the conditions being checked.
   - Specifically, log the state of the threshold counts before and after each run to trace any discrepancies.
3. **Cross-Verify with Actual Usage**:
   - Compare the test scenarios with the actual command-line usage provided by the user to ensure that the test environment accurately reflects the real-world usage.
   - Validate whether any additional options or configurations might be influencing the behavior.

## Conclusion

The test script `t/007thresholdok.t` is designed to validate the threshold handling logic of the `check_logfiles` plugin. While the existing test cases are comprehensive, there might be edge cases or discrepancies that are not covered. Enhancing the test cases and adding detailed debugging statements will help in identifying and resolving the issue reported by the user.
Changes on plugins-scripts/check_logfiles.pl

# Bug Analysis Report for `check_logfiles.pl`

## Overview

The user reported an issue where the `check_logfiles` plugin is not respecting the `criticalthreshold` option specified in the configuration file. The plugin reports a CRITICAL status even when the number of error lines found is below the threshold.

## Analysis

The `check_logfiles.pl` script is responsible for parsing log files and identifying patterns that match specified criteria. The configuration file provided by the user includes a `criticalthreshold` option, which should dictate the number of error lines required to trigger a CRITICAL status.

### Key Areas to Investigate

1. **Configuration Parsing**: Ensure that the `criticalthreshold` option is correctly parsed from the configuration file.
2. **Threshold Handling**: Verify that the logic for handling the `criticalthreshold` is correctly implemented and applied during the log file analysis.
3. **Debugging Statements**: Add debugging statements to trace the flow of the threshold logic.

### Configuration Parsing

The script uses the `Getopt::Long` module to parse command-line options and configuration parameters. The `criticalthreshold` option is included in the list of parameters:

```perl
"criticalthreshold=i",
```

### Threshold Handling

The script constructs a `Nagios::CheckLogfiles` object with various parameters, including the `criticalthreshold`:

```perl
if (my $cl = Nagios::CheckLogfiles->new({
    ...
    options => join(',', grep { $_ }
        ...
        $commandline{criticalthreshold} ? "criticalthreshold=".$commandline{criticalthreshold} : undef,
        ...
    ),
    ...
})) {
    ...
}
```

### Potential Issues

1. **Incorrect Parsing**: The `criticalthreshold` option might not be correctly parsed or passed to the `Nagios::CheckLogfiles` object.
2. **Logic Implementation**: The logic within the `Nagios::CheckLogfiles` module that handles the `criticalthreshold` might be flawed or not correctly applied.

### Debugging Statements

To trace the issue, we can add debugging statements around the critical areas where the `criticalthreshold` is parsed and applied.

## Recommendations

1. **Verify Parsing**: Ensure that the `criticalthreshold` option is correctly parsed and passed to the `Nagios::CheckLogfiles` object.
2. **Add Debugging Statements**: Add debugging statements to trace the flow and application of the `criticalthreshold` logic.
3. **Review Logic**: Review the logic within the `Nagios::CheckLogfiles` module to ensure that the `criticalthreshold` is correctly implemented.

### Example Debugging Statements

Add the following debugging statements to trace the `criticalthreshold`:

```perl
# After parsing command-line options
print STDERR "Parsed criticalthreshold: $commandline{criticalthreshold}\n" if exists $commandline{criticalthreshold};

# Before creating the Nagios::CheckLogfiles object
print STDERR "Creating Nagios::CheckLogfiles with criticalthreshold: $commandline{criticalthreshold}\n" if exists $commandline{criticalthreshold};

# Inside the Nagios::CheckLogfiles module (not shown in this file)
# Add similar debugging statements to trace the application of the criticalthreshold
```

## Conclusion

The issue likely stems from either incorrect parsing of the `criticalthreshold` option or flawed logic in its application. By adding debugging statements and verifying the logic, we can pinpoint the exact cause and implement a fix. Further investigation into the `Nagios::CheckLogfiles` module may be necessary to ensure the threshold logic is correctly applied.
Napsty commented 2 months ago

The threshold option works correctly. The log file actually contains a large number of occurrences of the pattern; the output "3 errors" really means at least 3 × 10 (threshold) matches were detected. A verbose run shows all the matched patterns and also which events are actually counted ("count this match"):
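The skip/count behavior visible in the verbose output below can be sketched in Python (purely as illustration; the plugin itself is Perl, and `counted_errors` is a hypothetical helper, not part of the plugin source):

```python
def counted_errors(total_matches: int, threshold: int) -> int:
    """Report one error per full block of `threshold` consecutive matches."""
    counted = 0
    seen = 0
    for _ in range(total_matches):
        seen += 1
        if seen == threshold:   # the threshold-th match of a block is counted
            counted += 1        # "count this match"
            seen = 0            # start a fresh block of skipped matches
    return counted

print(counted_errors(34, 10))  # 3 reported "errors" from 34 raw matches
```

With threshold=10, the 34 raw matches in the run below collapse to the 3 counted errors that appear in the plugin's CRITICAL output.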

[root@linux PROD ~]# /usr/lib64/nagios/plugins/check_logfiles -f /etc/nagios/logfile_icinga.cfg --tag=icinga2_client_handshake_errors -v
Fri Sep 20 15:47:45 2024: ==================== /var/log/icinga2/icinga2.log ==================
Fri Sep 20 15:47:45 2024: found seekfile /var/tmp/check_logfiles/logfile_icinga._var_log_icinga2_icinga2.log.icinga2_client_handshake_errors
Fri Sep 20 15:47:45 2024: LS lastlogfile = /var/log/icinga2/icinga2.log
Fri Sep 20 15:47:45 2024: LS lastoffset = 163782719 / lasttime = 1726839698 (Fri Sep 20 15:41:38 2024) / inode = 64781:23
Fri Sep 20 15:47:45 2024: found private state $VAR1 = {
          'lastruntime' => 1726839643,
          'runcount' => 61099,
          'matchingpattern' => 'Client TLS handshake failed',
          'logfile' => '/var/log/icinga2/icinga2.log'
        };

Fri Sep 20 15:47:45 2024: the logfile grew to 164879277
Fri Sep 20 15:47:45 2024: opened logfile /var/log/icinga2/icinga2.log
Fri Sep 20 15:47:45 2024: logfile /var/log/icinga2/icinga2.log (modified Fri Sep 20 15:47:43 2024 / accessed Fri Sep 20 15:41:41 2024 / inode 23 / inode changed Fri Sep 20 15:47:43 2024)
Fri Sep 20 15:47:45 2024: relevant files: icinga2.log
Fri Sep 20 15:47:45 2024: moving to position 163782719 in /var/log/icinga2/icinga2.log
Fri Sep 20 15:47:45 2024: MATCH CRITICAL Client TLS handshake failed with [2024-09-20 15:41:48 +0200] critical/ApiListener: Client TLS handshake failed (to [10.50.60.70]:5665): Operation canceled
Fri Sep 20 15:47:45 2024: skip match and the next 9
Fri Sep 20 15:47:45 2024: MATCH CRITICAL Client TLS handshake failed with [2024-09-20 15:41:58 +0200] critical/ApiListener: Client TLS handshake failed (to [10.50.60.70]:5665): Operation canceled
Fri Sep 20 15:47:45 2024: skip match and the next 8
Fri Sep 20 15:47:45 2024: MATCH CRITICAL Client TLS handshake failed with [2024-09-20 15:42:08 +0200] critical/ApiListener: Client TLS handshake failed (to [10.50.60.70]:5665): Operation canceled
Fri Sep 20 15:47:45 2024: skip match and the next 7
Fri Sep 20 15:47:45 2024: MATCH CRITICAL Client TLS handshake failed with [2024-09-20 15:42:18 +0200] critical/ApiListener: Client TLS handshake failed (to [10.50.60.70]:5665): Operation canceled
Fri Sep 20 15:47:45 2024: skip match and the next 6
Fri Sep 20 15:47:45 2024: MATCH CRITICAL Client TLS handshake failed with [2024-09-20 15:42:28 +0200] critical/ApiListener: Client TLS handshake failed (to [10.50.60.70]:5665): Operation canceled
Fri Sep 20 15:47:45 2024: skip match and the next 5
Fri Sep 20 15:47:45 2024: MATCH CRITICAL Client TLS handshake failed with [2024-09-20 15:42:38 +0200] critical/ApiListener: Client TLS handshake failed (to [10.50.60.70]:5665): Operation canceled
Fri Sep 20 15:47:45 2024: skip match and the next 4
Fri Sep 20 15:47:45 2024: MATCH CRITICAL Client TLS handshake failed with [2024-09-20 15:42:49 +0200] critical/ApiListener: Client TLS handshake failed (to [10.50.60.70]:5665): Operation canceled
Fri Sep 20 15:47:45 2024: skip match and the next 3
Fri Sep 20 15:47:45 2024: MATCH CRITICAL Client TLS handshake failed with [2024-09-20 15:43:08 +0200] critical/ApiListener: Client TLS handshake failed (to [10.50.60.70]:5665): Operation canceled
Fri Sep 20 15:47:45 2024: skip match and the next 2
Fri Sep 20 15:47:45 2024: MATCH CRITICAL Client TLS handshake failed with [2024-09-20 15:43:18 +0200] critical/ApiListener: Client TLS handshake failed (to [10.50.60.70]:5665): Operation canceled
Fri Sep 20 15:47:45 2024: skip match and the next 1
Fri Sep 20 15:47:45 2024: MATCH CRITICAL Client TLS handshake failed with [2024-09-20 15:43:28 +0200] critical/ApiListener: Client TLS handshake failed (to [10.50.60.70]:5665): Operation canceled
Fri Sep 20 15:47:45 2024: count this match
Fri Sep 20 15:47:45 2024: MATCH CRITICAL Client TLS handshake failed with [2024-09-20 15:43:38 +0200] critical/ApiListener: Client TLS handshake failed (to [10.50.60.70]:5665): Operation canceled
Fri Sep 20 15:47:45 2024: skip match and the next 9
Fri Sep 20 15:47:45 2024: MATCH CRITICAL Client TLS handshake failed with [2024-09-20 15:43:48 +0200] critical/ApiListener: Client TLS handshake failed (to [10.50.60.70]:5665): Operation canceled
Fri Sep 20 15:47:45 2024: skip match and the next 8
Fri Sep 20 15:47:45 2024: MATCH CRITICAL Client TLS handshake failed with [2024-09-20 15:43:58 +0200] critical/ApiListener: Client TLS handshake failed (to [10.50.60.70]:5665): Operation canceled
Fri Sep 20 15:47:45 2024: skip match and the next 7
Fri Sep 20 15:47:45 2024: MATCH CRITICAL Client TLS handshake failed with [2024-09-20 15:44:08 +0200] critical/ApiListener: Client TLS handshake failed (to [10.50.60.70]:5665): Operation canceled
Fri Sep 20 15:47:45 2024: skip match and the next 6
Fri Sep 20 15:47:45 2024: MATCH CRITICAL Client TLS handshake failed with [2024-09-20 15:44:18 +0200] critical/ApiListener: Client TLS handshake failed (to [10.50.60.70]:5665): Operation canceled
Fri Sep 20 15:47:45 2024: skip match and the next 5
Fri Sep 20 15:47:45 2024: MATCH CRITICAL Client TLS handshake failed with [2024-09-20 15:44:29 +0200] critical/ApiListener: Client TLS handshake failed (to [10.50.60.70]:5665): Operation canceled
Fri Sep 20 15:47:45 2024: skip match and the next 4
Fri Sep 20 15:47:45 2024: MATCH CRITICAL Client TLS handshake failed with [2024-09-20 15:44:39 +0200] critical/ApiListener: Client TLS handshake failed (to [10.50.60.70]:5665): Operation canceled
Fri Sep 20 15:47:45 2024: skip match and the next 3
Fri Sep 20 15:47:45 2024: MATCH CRITICAL Client TLS handshake failed with [2024-09-20 15:44:49 +0200] critical/ApiListener: Client TLS handshake failed (to [10.50.60.70]:5665): Operation canceled
Fri Sep 20 15:47:45 2024: skip match and the next 2
Fri Sep 20 15:47:45 2024: MATCH CRITICAL Client TLS handshake failed with [2024-09-20 15:44:59 +0200] critical/ApiListener: Client TLS handshake failed (to [10.50.60.70]:5665): Operation canceled
Fri Sep 20 15:47:45 2024: skip match and the next 1
Fri Sep 20 15:47:45 2024: MATCH CRITICAL Client TLS handshake failed with [2024-09-20 15:45:09 +0200] critical/ApiListener: Client TLS handshake failed (to [10.50.60.70]:5665): Operation canceled
Fri Sep 20 15:47:45 2024: count this match
Fri Sep 20 15:47:45 2024: MATCH CRITICAL Client TLS handshake failed with [2024-09-20 15:45:19 +0200] critical/ApiListener: Client TLS handshake failed (to [10.50.60.70]:5665): Operation canceled
Fri Sep 20 15:47:45 2024: skip match and the next 9
Fri Sep 20 15:47:45 2024: MATCH CRITICAL Client TLS handshake failed with [2024-09-20 15:45:29 +0200] critical/ApiListener: Client TLS handshake failed (to [10.50.60.70]:5665): Operation canceled
Fri Sep 20 15:47:45 2024: skip match and the next 8
Fri Sep 20 15:47:45 2024: MATCH CRITICAL Client TLS handshake failed with [2024-09-20 15:45:39 +0200] critical/ApiListener: Client TLS handshake failed (to [10.50.60.70]:5665): Operation canceled
Fri Sep 20 15:47:45 2024: skip match and the next 7
Fri Sep 20 15:47:45 2024: MATCH CRITICAL Client TLS handshake failed with [2024-09-20 15:45:49 +0200] critical/ApiListener: Client TLS handshake failed (to [10.50.60.70]:5665): Operation canceled
Fri Sep 20 15:47:45 2024: skip match and the next 6
Fri Sep 20 15:47:45 2024: MATCH CRITICAL Client TLS handshake failed with [2024-09-20 15:45:59 +0200] critical/ApiListener: Client TLS handshake failed (to [10.50.60.70]:5665): Operation canceled
Fri Sep 20 15:47:45 2024: skip match and the next 5
Fri Sep 20 15:47:45 2024: MATCH CRITICAL Client TLS handshake failed with [2024-09-20 15:46:09 +0200] critical/ApiListener: Client TLS handshake failed (to [10.50.60.70]:5665): Operation canceled
Fri Sep 20 15:47:45 2024: skip match and the next 4
Fri Sep 20 15:47:45 2024: MATCH CRITICAL Client TLS handshake failed with [2024-09-20 15:46:19 +0200] critical/ApiListener: Client TLS handshake failed (to [10.50.60.70]:5665): Operation canceled
Fri Sep 20 15:47:45 2024: skip match and the next 3
Fri Sep 20 15:47:45 2024: MATCH CRITICAL Client TLS handshake failed with [2024-09-20 15:46:29 +0200] critical/ApiListener: Client TLS handshake failed (to [10.50.60.70]:5665): Operation canceled
Fri Sep 20 15:47:45 2024: skip match and the next 2
Fri Sep 20 15:47:45 2024: MATCH CRITICAL Client TLS handshake failed with [2024-09-20 15:46:39 +0200] critical/ApiListener: Client TLS handshake failed (to [10.50.60.70]:5665): Operation canceled
Fri Sep 20 15:47:45 2024: skip match and the next 1
Fri Sep 20 15:47:45 2024: MATCH CRITICAL Client TLS handshake failed with [2024-09-20 15:46:49 +0200] critical/ApiListener: Client TLS handshake failed (to [10.50.60.70]:5665): Operation canceled
Fri Sep 20 15:47:45 2024: count this match
Fri Sep 20 15:47:45 2024: MATCH CRITICAL Client TLS handshake failed with [2024-09-20 15:46:59 +0200] critical/ApiListener: Client TLS handshake failed (to [10.50.60.70]:5665): Operation canceled
Fri Sep 20 15:47:45 2024: skip match and the next 9
Fri Sep 20 15:47:45 2024: MATCH CRITICAL Client TLS handshake failed with [2024-09-20 15:47:09 +0200] critical/ApiListener: Client TLS handshake failed (to [10.50.60.70]:5665): Operation canceled
Fri Sep 20 15:47:45 2024: skip match and the next 8
Fri Sep 20 15:47:45 2024: MATCH CRITICAL Client TLS handshake failed with [2024-09-20 15:47:19 +0200] critical/ApiListener: Client TLS handshake failed (to [10.50.60.70]:5665): Operation canceled
Fri Sep 20 15:47:45 2024: skip match and the next 7
Fri Sep 20 15:47:45 2024: MATCH CRITICAL Client TLS handshake failed with [2024-09-20 15:47:30 +0200] critical/ApiListener: Client TLS handshake failed (to [10.50.60.70]:5665): Operation canceled
Fri Sep 20 15:47:45 2024: skip match and the next 6
Fri Sep 20 15:47:45 2024: stopped reading at position 164879277
Fri Sep 20 15:47:45 2024: keeping position 164879277 and time 1726840063 (Fri Sep 20 15:47:43 2024) for inode 64781:23 in mind
CRITICAL - (3 errors) - [2024-09-20 15:46:49 +0200] critical/ApiListener: Client TLS handshake failed (to [10.50.60.70]:5665): Operation canceled ...|'icinga2_client_handshake_errors_lines'=7560 'icinga2_client_handshake_errors_warnings'=0 'icinga2_client_handshake_errors_criticals'=3 'icinga2_client_handshake_errors_unknowns'=0

Is there a way to make the output show the actual number of matched errors (34) instead of the counted 3?