uber-go / zap

Blazing fast, structured, leveled logging in Go.
https://pkg.go.dev/go.uber.org/zap
MIT License

Log Records getting skipped after rotation #1448

Open Akhilesh53 opened 4 months ago

Akhilesh53 commented 4 months ago

Describe the bug: After rotation (when a new file is created), the very first log line is skipped and does not appear in the log file.

To Reproduce

```go
func initialiseLogger() {
	level := getLevel(env.ENV.LOG_LEVEL)

	devConfig := zap.NewDevelopmentEncoderConfig()
	prodConfig := zap.NewProductionEncoderConfig()

	devConfig.EncodeTime = zapcore.RFC3339NanoTimeEncoder
	prodConfig.EncodeTime = zapcore.RFC3339NanoTimeEncoder

	// Suffix the log filename with the hostname, if available.
	hostname, err := os.Hostname()
	if err != nil {
		hostname = ""
	}
	if hostname != "" {
		hostname = "_" + hostname
	}

	// File output, rotated by lumberjack (gopkg.in/natefinch/lumberjack.v2).
	filewriter := zapcore.AddSync(&lumberjack.Logger{
		Filename:   "logs/regulatory/" + strings.ToLower(strings.ReplaceAll(env.ENV.PROCESS_NAME+hostname, " ", "_")) + ".log",
		MaxSize:    40,  // megabytes
		MaxAge:     30,  // days
		MaxBackups: 100,
		Compress:   false, // disabled by default
	})
	core := zapcore.NewCore(zapcore.NewJSONEncoder(prodConfig), filewriter, level)

	// In dev, also write console-encoded logs to stdout.
	if env.ENV.ENVIRONMENT == "dev" {
		core = zapcore.NewTee(
			core,
			zapcore.NewCore(zapcore.NewConsoleEncoder(devConfig), zapcore.Lock(os.Stdout), level),
		)
	}

	// Optionally mirror JSON-encoded logs to Kafka.
	if env.ENV.KAFKA_LOG == "Y" {
		kafkaSync := zapcore.AddSync(getKafkaWriter())
		core = zapcore.NewTee(
			core,
			zapcore.NewCore(zapcore.NewJSONEncoder(prodConfig), kafkaSync, level),
		)
	}

	Log = zap.New(core,
		zap.AddCaller(),
		zap.AddCallerSkip(1),
		zap.AddStacktrace(zap.ErrorLevel),
	)
}
```
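For completeness, a hypothetical sketch of the call site (`main` and the startup flow are assumptions, not part of the report). zap's documentation recommends calling `Sync` before the process exits, which helps rule out entries that were never flushed rather than lost at rotation:

```go
// Hypothetical usage sketch: initialise the package-level logger once,
// then flush on exit.
func main() {
	initialiseLogger()
	// Sync flushes any buffered log entries before the process exits,
	// ruling out "missing" lines that were simply never flushed.
	defer Log.Sync()

	Log.Info("service started")
}
```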

Expected behavior: No log lines should be skipped when a new file is created after rotation. Currently, the very first line written after rotation is lost.

Please let me know if I need to change anything in the configuration when initializing the logger.

r-hang commented 4 months ago

Hey @Akhilesh53,

Would you be able to provide a reproducible example that we can run and debug locally?
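In the meantime, here is a minimal, self-contained sketch of what such a reproduction might look like (assuming lumberjack v2 via gopkg.in/natefinch/lumberjack.v2; the path and record count are arbitrary). It writes numbered records so that gaps across rotations are easy to spot:

```go
package main

import (
	"fmt"

	"go.uber.org/zap"
	"go.uber.org/zap/zapcore"
	"gopkg.in/natefinch/lumberjack.v2"
)

func main() {
	fw := zapcore.AddSync(&lumberjack.Logger{
		Filename: "logs/repro.log",
		MaxSize:  1, // 1 MB, the smallest size, so rotation happens quickly
	})
	core := zapcore.NewCore(
		zapcore.NewJSONEncoder(zap.NewProductionEncoderConfig()),
		fw,
		zapcore.InfoLevel,
	)
	log := zap.New(core)
	defer log.Sync()

	// Write enough numbered records to force several rotations, then
	// scan the rotated files for gaps in the "seq" field.
	for i := 0; i < 200000; i++ {
		log.Info(fmt.Sprintf("record %d", i), zap.Int("seq", i))
	}
}
```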

Akhilesh53 commented 2 months ago

Hi @r-hang
I am using the logger configuration shown above. I cannot share the log file itself, since it contains production data. What happens is that when a log file reaches the threshold size and a new log file is created, a few log entries written in that interval are lost.

In other words: some entries for the same request are present in the previous (completed) log file and some are in the newly created file, but the entries in between are missing.

maxLogFileSize = 40 (MB)
maxLogFileAge = 30 (days)
maxLogFiles = 100
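One thing that may be worth ruling out: lumberjack's documentation states that it assumes only one process is writing to the output files, and that using the same configuration from multiple processes on the same machine results in improper behavior. Since the filename above is derived from PROCESS_NAME and the hostname, two instances of the service on one host would share a file, and two independent lumberjack.Logger values rotating the same file could lose lines around rotation. A minimal sketch of one way to rule this out by making the filename per-process; `logFilename` and its arguments are hypothetical stand-ins for the env.ENV-based name above:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// logFilename builds a per-process log path so that no two processes on the
// same host ever share (and race to rotate) the same file.
func logFilename(processName, hostname string) string {
	base := strings.ToLower(strings.ReplaceAll(processName+hostname, " ", "_"))
	return fmt.Sprintf("logs/regulatory/%s_%d.log", base, os.Getpid())
}

func main() {
	// Hypothetical values; in the real configuration these come from env.ENV.
	fmt.Println(logFilename("my process", "_host01"))
}
```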