Closed adamreisnz closed 3 years ago
Appears to have this underlying metadata in the error:

{
  actual: 'read ECONNRESET',
  code: 'ECONNRESET',
  retrying: true,
  attempts: 1
}
This seems to create a lot of noise with log events frequently failing. I suspect the underlying mechanism to post HTTP log requests in your new logger version may not be able to cope with the amount of log lines we generate.
A temporary workaround is to add an error event listener to avoid crashing the app:
logDNALogger.on('error', () => {
  // Ignore for now
});
Hi there. Yes, as you determined, this is a connection-based error when sending lines to the LogDNA servers. We will improve the error message, but all the details we have are what's in the metadata (namely the code). We are also looking to make some tweaks on the server to cut down on these errors, since many of them appear to be keepalive-related.
Yes, it can be a bit noisy, but we're not fans of swallowing errors (we plan to address as many as we have control over). In the meantime, retrying: true signifies that the lines will be retried, so there is no data loss. That said, rather than ignoring the whole error, we suggest paying attention to that property:
logDNALogger.on('error', (evt) => {
  if (evt.retrying) return
  // This is a non-retryable error worth looking at
});
As to the error crashing the app, that feels like something outside of the client itself. They're just error events and should have no ability to crash anything. Feel free to provide more context around that if you wish, and we can dig through that together.
> This seems to create a lot of noise with log events frequently failing. I suspect the underlying mechanism to post HTTP log requests in your new logger version may not be able to cope with the amount of log lines we generate.
And just to quell your fears a little bit: the underlying mechanism did not change from the old version. The difference is that the old version did not report these types of errors. I can assure you that the client (and our servers) can handle your traffic, and the necessary fixes will happen on the server, in relation to the keepalive mechanism.
We will have a "fix" up for this shortly, at least to change the error message for a bit more clarity. Thanks!
This has been published in version 2.0.0. Since the error message was previously not very descriptive, we've updated it to reflect a less critical "error" in the case of a retry-able condition. Because people may have already coded their event handlers to look for the old message, that change is why this release is a major version bump, but it otherwise has no breaking changes. By default, retry-able messages will not be emitted as an error unless ignoreRetryableErrors is explicitly set to false.
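For anyone who still wants to observe retry-able connection errors in 2.0.0, a hedged configuration sketch (assuming the @logdna/logger createLogger API; the ingestion key and app name below are placeholders, and only the ignoreRetryableErrors option name comes from the release notes above):

```javascript
// Sketch, not a definitive setup: createLogger signature and the 'app'
// option are assumptions based on the @logdna/logger package.
const logdna = require('@logdna/logger');

const logger = logdna.createLogger('<YOUR_INGESTION_KEY>', {
  app: 'my-app',                // placeholder app name
  ignoreRetryableErrors: false  // opt back in to retry-able error events
});

logger.on('error', (evt) => {
  // With the option set to false, retry-able connection errors
  // (evt.retrying === true) are emitted alongside fatal ones.
  if (evt.retrying) return; // transient; the line will be re-sent
  console.error('non-retryable logger error', evt);
});
```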
In the meantime, we have work slotted to reduce those retry-able connection errors by tweaking keepalive settings on the server.
Great, thank you. The fact that this was suddenly emitting errors (which was not the case in the old logger) was what was causing our app to crash, because we never listened for those errors. I will give 2.0.0 a try.
Is there any plan to reflect these changes to logdna-winston as well? Thanks!
> Is there any plan to reflect these changes to logdna-winston as well? Thanks!
Yes, certainly! We'll get those bumped shortly. Glad the changes are being received well, thanks.
@mertakozcan both the winston and bunyan packages have been bumped to include the latest version of the logger.
This error is not very detailed. No idea what is happening here or why it's failing. An internal error? Configuration error? API key invalid?
Also, this is completely crashing the app, even when the log line is wrapped in a try/catch block. A failed logger log should not be bringing the whole app down imo.
Happening with versions 1.3.2 and 1.3.3.