Coggle / s3-streamlogger

Node.js stream for logging to s3 with rotated object names. Usable as a winston-file stream.
ISC License

No new logs being created #51

Open paul-uz opened 7 months ago

paul-uz commented 7 months ago

I was able to get an initial log file showing in S3, but now I'm not seeing any more log files, or the original file being updated, despite seeing the output in the console (AWS CloudWatch logs).

autopulated commented 7 months ago

The file in S3 is updated based on the upload_every and buffer_size options (defaulting to 20 seconds and 10 KB), so the file in S3 will be updated after buffered data is 20 seconds old, or there is 10 KB of buffered data (whichever comes first).
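
For reference, a minimal sketch of how those options might be set explicitly (the bucket name is hypothetical; the values match the defaults described above):

const { S3StreamLogger } = require('s3-streamlogger');

// Illustrative values only; these match the defaults described above.
const stream = new S3StreamLogger({
  bucket: 'my-logs-bucket', // hypothetical bucket name
  upload_every: 20 * 1000,  // upload buffered data at least every 20 seconds
  buffer_size: 10 * 1024,   // ...or as soon as 10 KB has accumulated
});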

If those options don't explain the behaviour you're seeing, then please share a small complete program that reproduces the problem.

FWIW, if you're using CloudWatch Logs there isn't really much point in using this module: it was written before CloudWatch Logs existed, and I'm only maintaining it for the convenience of existing users.

paul-uz commented 7 months ago

so the file in S3 will be updated after buffered data is 20 seconds old, or there is 10 KB of buffered data (whichever comes first).

What does this mean exactly? What do I need to set these options to for the file to get updated every time?

I understand what you're saying about CloudWatch, but I'd like the option to create the log file as well, so this package is much appreciated!

paul-uz commented 7 months ago

I have tried setting these both really low, but still no new logs.

autopulated commented 7 months ago

What do I need to set these options to for the file to get updated every time?

Setting buffer_size: 0 would do it, but it is not recommended, since this will cause a large amount of traffic to S3.
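
If an upload on every write is really what's wanted, a minimal sketch (again, not recommended; the bucket name is hypothetical):

// buffer_size: 0 forces an upload for every write, i.e. one S3 PUT
// request per log line.
const stream = new S3StreamLogger({
  bucket: 'my-logs-bucket', // hypothetical
  buffer_size: 0,
});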

Please can you provide a small example program that reproduces the problem?

paul-uz commented 7 months ago

Sadly I can only share my winston setup:

const winston = require('winston');
const { S3StreamLogger } = require('s3-streamlogger');

const s3Transport = new winston.transports.Stream({
  stream: new S3StreamLogger({
    bucket: 'logs-bucket',
    buffer_size: 1,
    folder: 'foo',
    region: REGION,
    rotate_every: 1000,
    upload_every: 500,
    access_key_id: AWS_ACCESS_KEY_ID,
    secret_access_key: AWS_SECRET_ACCESS_KEY,
  }),
});

s3Transport.on('error', (err) => {
  console.error(err);
});

const logger = winston.createLogger({
  exitOnError: false,
  format: winston.format.json(),
  level: NODE_ENV === 'production' ? 'error' : 'debug',
  transports: [
    new winston.transports.Console(),
    s3Transport,
  ],
});

paul-uz commented 7 months ago

I tried setting buffer_size and upload_every to 0, and it made no difference.

autopulated commented 7 months ago

rotate_every: 1000 means that a new file name will be used every second (all times in the options are specified in milliseconds), so I'd expect your new logs to be written to new files.
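
For comparison, a sketch of a less aggressive rotation configuration (the values are illustrative, not a recommendation from this thread):

const stream = new S3StreamLogger({
  bucket: 'logs-bucket',
  rotate_every: 60 * 60 * 1000, // start a new S3 object name every hour
  upload_every: 20 * 1000,      // flush buffered data every 20 seconds
});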

paul-uz commented 7 months ago

Yeah, I have tried everything and it's just stopped working.

Maybe it's related to the fact that I'm using AWS Lambda?

But now I can't even get an initial log file made after deleting the one I had.

autopulated commented 7 months ago

Oh, if you are using AWS Lambda then you need to call flushFile (call it with a callback, and wait for the callback) before finishing your Lambda function.

s3-streamlogger is especially not suitable for use in Lambda though, as this will add a substantial delay to your function. CloudWatch Logs works much better.

paul-uz commented 7 months ago

Do you have an example I can use?

autopulated commented 7 months ago

const s3Stream = new S3StreamLogger({
  // ... options; auth should not be specified here, it should come from the
  // lambda function role
});
const s3Transport = new winston.transports.Stream({
  stream: s3Stream,
});

// ...

doMyLambdaWork((err1) => {
  // Flush buffered log data to S3, then signal that the lambda is done.
  s3Stream.flushFile((err2) => {
    callLambdaDoneHandlerHere(err1 || err2);
  });
});

To reiterate: s3-streamlogger is especially not suitable for use in Lambda, as this will add a substantial delay to your function, which might incur a significant increase in cost. CloudWatch Logs works much better.
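
For what it's worth, in a modern async Lambda handler the same flush can be awaited by promisifying the callback. A sketch, assuming the s3Stream from the snippet above and a hypothetical doMyLambdaWork that returns a promise:

const { promisify } = require('util');

// Hedged sketch: the handler shape and doMyLambdaWork are illustrative.
exports.handler = async (event) => {
  const result = await doMyLambdaWork(event);
  // Wait for buffered log data to reach S3 before the invocation ends.
  await promisify(s3Stream.flushFile.bind(s3Stream))();
  return result;
};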