Pod-Point / laravel-monolog-kinesis

Simply forward Laravel application logs to an AWS Kinesis stream
https://pod-point.com
MIT License

Are the logs sent asynchronously? #13

Closed · gauravmak closed this issue 2 years ago

gauravmak commented 2 years ago

Hi,

Thanks for making this.

Can this slow down the app? And what happens if AWS is down for a bit?

clemblanco commented 2 years ago

Hey @gauravmak,

Good question. I don't think Monolog logs things asynchronously by default. You'd need to queue jobs and have workers process them in order to do this asynchronously.
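
Just to illustrate, here is a rough sketch of what that could look like, not something this package ships: a hypothetical ForwardLogToKinesis job that a custom handler would dispatch, so a queue worker makes the AWS call instead of the web request.

<?php

namespace App\Jobs;

use Aws\Kinesis\KinesisClient;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;

// Hypothetical job, not part of this package: a queue worker pushes the
// record to Kinesis so the HTTP request never waits on AWS.
class ForwardLogToKinesis implements ShouldQueue
{
    use Dispatchable, Queueable;

    public function __construct(private array $record, private string $stream)
    {
    }

    public function handle(KinesisClient $kinesis): void
    {
        $kinesis->putRecord([
            'StreamName'   => $this->stream,
            'Data'         => json_encode($this->record),
            'PartitionKey' => $this->record['channel'] ?? 'default',
        ]);
    }
}

A custom Monolog handler would then do something like ForwardLogToKinesis::dispatch($record, 'my-stream') from its write() method and let the workers take it from there.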

Logging doesn't have a serious impact on performance, especially when you are using something like Kinesis, which is designed to receive high volumes of exactly these transactions. Monolog handlers are just objects that encapsulate how log records are handled and stored.

This package is generally using pretty standard Laravel/Monolog stuff.

At the moment we fail silently if, for whatever reason, AWS is not available, but maybe we could look at offering some other approach... What would you recommend?

We usually use this on top of the standard daily Laravel logging channel, so we would still have the log messages in the daily files themselves. If we were not failing silently, you would start seeing errors coming through to Sentry/Rollbar/etc... for example.

We could maybe add something like this to config/logging.php:

<?php

return [

    // ...

    'channels' => [

        'some_channel' => [
            'driver' => 'kinesis',
            // ...
            'fail_silently' => false,
        ],

    ],

];

See what Monolog's ElasticsearchHandler does, for example.
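
For what it's worth, ElasticsearchHandler swallows write failures unless its ignore_error option is turned off, and a fail_silently flag could work the same way. Rough sketch only, using a Monolog 2 style write() and made-up class/property names:

<?php

use Aws\Exception\AwsException;
use Aws\Kinesis\KinesisClient;
use Monolog\Handler\AbstractProcessingHandler;

// Sketch only: a fail_silently option mirroring ElasticsearchHandler's
// ignore_error - swallow AWS errors by default, rethrow when it is off.
class KinesisHandlerSketch extends AbstractProcessingHandler
{
    public function __construct(
        private KinesisClient $kinesis,
        private string $stream,
        private bool $failSilently = true,
    ) {
        parent::__construct();
    }

    protected function write(array $record): void
    {
        try {
            $this->kinesis->putRecord([
                'StreamName'   => $this->stream,
                'Data'         => $record['formatted'],
                'PartitionKey' => $record['channel'],
            ]);
        } catch (AwsException $e) {
            if (! $this->failSilently) {
                throw $e; // surfaces to Sentry/Rollbar/etc. instead of disappearing
            }
        }
    }
}

With fail_silently left at true the behaviour stays as it is today; setting it to false would let the exception bubble up to whatever error tracker the app uses.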

gauravmak commented 2 years ago

Three options that I can think of:

1) Use Monolog's BufferHandler in combination with fastcgi_finish_request as per this comment (only for nginx + PHP-FPM); see the sketch after this list

2) Use stdout in combination with the CloudWatch agent

3) PR to Laravel to log things through Monolog inside a terminable middleware or something, so that we do not need to worry about any of it at all 😄
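
For option 1, a rough sketch of what the wrapping could look like; $kinesisHandler stands in for whatever handler instance this package actually registers:

<?php

use Monolog\Handler\BufferHandler;
use Monolog\Logger;

// Sketch for option 1: BufferHandler keeps records in memory and flushes them
// to the wrapped handler when it is closed on shutdown. With nginx + PHP-FPM,
// fastcgi_finish_request() has already sent the response to the client by
// then, so the Kinesis calls no longer add to the response time.
$logger = new Logger('kinesis');
$logger->pushHandler(new BufferHandler(
    $kinesisHandler, // assumed: the Kinesis handler instance this package sets up
    0,               // unlimited buffer
    Logger::DEBUG,
    true,            // bubble
    false            // flush on overflow (irrelevant with an unlimited buffer)
));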

Closing this issue for now, thanks.