roberttod opened this issue 7 months ago
Hi @roberttod!
Thank you for reaching out! I think you are right that the transport running in a worker thread could be causing issues here. I'm adding this to our internal tracking. We'll take a look and keep you updated here on GitHub.
Thanks again for raising this.
Any updates on this? We are having the same issue hosting on Vercel.
Can we bump this up? I'm experiencing log dropping only for Better Stack. I'm not sure if it shares the same root cause, but I can narrow it down to either this library or the Better Stack server. Should I rewrite the pino transport myself, as the issue author did, to make sure it flushes logs at the end of the promise?
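What I have in mind is roughly the following (a sketch only, assuming the @logtail/pino worker transport and a pino version whose flush() accepts a callback; withLogFlush is a made-up wrapper name):

```ts
import pino from "pino";

// Minimal sketch: keep the @logtail/pino worker transport, but await pino's
// flush before each invocation resolves. Note this only flushes pino's buffer
// to the worker thread; it does not necessarily wait for the worker's HTTP
// delivery to Better Stack.
const logger = pino(
  pino.transport({
    target: "@logtail/pino",
    options: { sourceToken: process.env.BETTERSTACK_TOKEN },
  })
);

// Hypothetical wrapper: flushes after the handler's promise settles.
const withLogFlush =
  <T>(handler: (event: unknown) => Promise<T>) =>
  async (event: unknown): Promise<T> => {
    try {
      return await handler(event);
    } finally {
      await new Promise<void>((resolve, reject) =>
        logger.flush((err?: Error) => (err ? reject(err) : resolve()))
      );
    }
  };
```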
Hi @roberttod, @Philitician, and @ckblockit,
Thanks for reporting this and for your patience. I did my best to go through the reported issues in Pino, the docs, and a couple of other transports to see whether we're doing anything incorrectly in our code. Since we're using the abstract transport library and register the flush callback on close, I believe our side should be solid.
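For illustration, the registration pattern looks roughly like this (a simplified sketch, not the actual @logtail/pino source; the Logtail calls are approximations and the level mapping is omitted):

```ts
import build from "pino-abstract-transport";
import { Logtail } from "@logtail/node";

// Simplified sketch of an abstract-transport-based target that flushes on close.
// This is not the actual @logtail/pino source, just the registration pattern.
export default async function transport(opts: { sourceToken: string }) {
  const logtail = new Logtail(opts.sourceToken);

  return build(
    async (source) => {
      for await (const obj of source) {
        // Forward each parsed pino log line; level mapping omitted for brevity.
        const log = obj as Record<string, any>;
        void logtail.info(log.msg ?? "log", log);
      }
    },
    {
      // Called when the transport is shut down; wait for pending batches first.
      close(err, cb) {
        logtail.flush().then(() => cb(err), cb);
      },
    }
  );
}
```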
I've tried to replicate the issue using serverless invoke local:
```js
const pino = require('pino');

const transport = pino.transport({
  target: "@logtail/pino",
  options: { sourceToken: 'MY_BETTERSTACK_TOKEN' }
});
const logger = pino(transport);

exports.hello = async (event) => {
  logger.info("Hello from serverless");
  return {
    statusCode: 200,
    body: JSON.stringify({
      message: 'Go Serverless v4.0! Your function executed successfully!'
    })
  };
};
```
with this serverless.yml:

```yaml
service: test-log

provider:
  name: aws
  runtime: nodejs20.x

functions:
  hello:
    handler: handler.hello
```
And I got 100% reliability of logs being sent to Better Stack.
Could you please point me to anything I might have missed? A reproducible code example would be best. Maybe it's an issue with a specific dependency version or a particular serverless setup.
I'll be happy to jump right back into it.
@ckblockit Do you have multiple Pino transports set up simultaneously, with only Better Stack experiencing dropped logs? If so, could you share which other transports you are using, so I can check for any significant differences?
Is the log dropping intermittent, or does it happen every time in a certain setup?
Hi @PetrHeinz, thanks for getting back to this issue.
We haven't seen any dropped logs since I made some changes to our setup:
```ts
const betterStackTransportOptions: Partial<ILogtailOptions> = {
  ignoreExceptions: false,       // surface send failures instead of silently dropping them
  throwExceptions: true,
  retryCount: 10,                // retry failed batches up to 10 times
  sendLogsToConsoleOutput: true, // mirror logs to console output as a fallback
}
```
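For context, those options get wired into the transport roughly like this (a sketch; the second target and the env variable name are placeholders rather than our exact config):

```ts
import pino from "pino";

// Sketch of how betterStackTransportOptions (defined above) is wired in.
// The second target is a placeholder standing in for our other transport.
const transport = pino.transport({
  targets: [
    {
      target: "@logtail/pino",
      options: {
        sourceToken: process.env.BETTERSTACK_TOKEN, // placeholder env variable name
        ...betterStackTransportOptions,
      },
    },
    {
      target: "pino/file", // pino's built-in target; destination 1 is stdout
      options: { destination: 1 },
    },
  ],
});

// The logger is created with these targets plus the logMethod hook shown below.
export const logger = pino({ /* hooks: { logMethod } */ }, transport);
```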
To answer your question, we are using two transports.
We also have a hook set up to send exceptions to Sentry:
```ts
// Set up a synchronous hook to send events to Sentry
hooks: {
  logMethod(args: Parameters<LogFn>, method, level) {
    try {
      // Skip the Sentry logic if no arguments were passed. This shouldn't happen.
      // (The finally block still forwards the call to pino.)
      if (args.length === 0) {
        return;
      }

      // Get the message and extra context if available (our own helpers).
      const message = getMessage(args);
      const extra = getExtra(args);

      // Always send a breadcrumb for every logging statement.
      Sentry.addBreadcrumb({
        data: extra,
        message,
      });

      // Send an error to Sentry for error/fatal levels.
      if (level >= 50) {
        const err: Error = getError(args);
        Sentry.captureException(err);
      }
    } catch (e) {
      // Capture this in the console, which will eventually bubble up to Sentry via its integration.
      // eslint-disable-next-line no-console
      console.error(e);
    } finally {
      // Always forward the call to pino's log method regardless.
      method.apply(this, args);
    }
  },
},
```
We verified the log dropping by comparing Sentry and stdout against Better Stack: the logs were available in the former but not the latter.
I will continue to report here if we notice any dropping again.
I am using RedwoodJS on Netlify functions for our app, and so I need to use the pino logtail transport for logging.
I noticed recently that logs are going missing, especially when they are at the end of a chain of calls to a lambda function. I fixed this by:
a) implementing my own Writable stream which sends to Betterstack using @logtail/node, because I suspected that transports might be the problem since they run in a different thread, and
b) awaiting logtail.flush() after any of my lambdas complete (rough sketch below).
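The sketch, with the level mapping and error handling simplified (BETTERSTACK_TOKEN is a placeholder):

```ts
import { Writable } from "node:stream";
import { Logtail } from "@logtail/node";
import pino from "pino";

// Placeholder token; the real setup reads it from config.
const logtail = new Logtail(process.env.BETTERSTACK_TOKEN ?? "");

// (a) In-process Writable that forwards each pino log line to Better Stack,
// avoiding the worker-thread transport entirely.
const betterStackStream = new Writable({
  write(chunk, _encoding, callback) {
    try {
      const line = JSON.parse(chunk.toString()) as Record<string, any>;
      // Level mapping omitted for brevity; everything is sent as info here.
      void logtail.info(line.msg ?? "log", line);
    } catch (err) {
      console.error(err);
    }
    callback();
  },
});

const logger = pino(betterStackStream);

// (b) Awaiting logtail.flush() before the lambda resolves, so buffered
// batches are delivered before the function is frozen or torn down.
export const handler = async () => {
  logger.info("Hello from the lambda");
  await logtail.flush();
  return { statusCode: 200, body: "ok" };
};
```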
Is there a way to resolve this without all the custom code? I tried using pino.flush() but I don't think it's actually calling logtail.flush(), and I think that's the problem (i.e. it might not be pino transports that are the issue). I see that logtail.flush is called in closeFunc in the transport, but I don't think that gets called on pino.flush().