Closed thevirus20 closed 3 years ago
This is not recommended. If you really need it, you could use something like:

```go
logger.Writer = &log.ConsoleWriter{
	Formatter: <a_nginx_printf_function>,
	Writer:    &log.FileWriter{...},
}
```

WARNING: it will bring a performance downgrade; it will be as slow as zap, though still faster than logrus/zerolog. This is why I don't mention it in the README. See https://github.com/phuslu/log/issues/18 for more details.
Writing JSON to the console is easy:

```go
logger.Writer = &log.IOWriter{os.Stderr}
```
You could write your own version of `MultiWriter` (just copy and modify multi.go).
`log.FileWriter` is a drop-in replacement for lumberjack; see https://github.com/phuslu/log#rotating-file-writer
Please let me repeat: this lib is the fastest JSON logger. If you use it as a text logger with a file writer, performance slows down to the level of zap/logrus/zerolog. That will not be acceptable for a low-latency app, e.g. RTB.
So for point 4: you are using cron, and `Cleaner` will be a good choice for rotating the file?
And yes, I made a few changes in your lib. The filename is always prefixed with the date format; I am not sure if that is used for rotating or something, but I added the following case to omit the prefix and keep the filename as it is passed:

```go
// file.go, line 243
case "-": // when the TimeFormat field is set to "-", the filename stays intact
	filename = prefix
```
Another change I made was to add a log level for the file in `MultiWriter`, like the existing `ConsoleLevel`:

```go
// multi.go

// added to the MultiWriter struct:
InfoWriterLevel Level

// change in the func WriteEntry(), default case:
default:
	if w.InfoWriter != nil && e.Level >= w.InfoWriterLevel {
		n, err1 = w.InfoWriter.WriteEntry(e)
		if err1 != nil && err == nil {
			err = err1
		}
	}
```
I will try to use another flag to lock the log level in `MultiWriter`, hoping that having a flag won't cause any performance lag.
Another point I need to mention: locally I was able to write to both console and file, but on the server I could not write to the file; the console worked. Here is the config:
```go
level := getSugaredPhusluLevel(config.FileLevel)
writer := &log.FileWriter{
	Filename: config.FileLocation,
	MaxSize:  500 * 1024 * 1024,
	// MaxBackups: 0,
	FileMode:     0664,
	TimeFormat:   "-",
	LocalTime:    false,
	HostName:     false,
	ProcessID:    false,
	EnsureFolder: true,
	Cleaner:      nil,
}
logger = &log.Logger{
	Level:      level,
	Caller:     0,
	TimeField:  "",
	TimeFormat: "2006-01-02T15:04:05.000Z0700",
	Writer:     writer,
}
```
I had something like this on the server, and after trying it, the file wasn't accessible. Another thing is that the latency went so high that it took several seconds to complete. I didn't check the memory usage, but is it because it was console output and I used `Msgf()` to log like `Printf()` in debug mode?
If you're using "2006-01-02T15:04:05.000Z0700" as the logger TimeFormat, please leave it as an empty string; this lib then uses a much faster function to generate the same format stamp.
This lib's code is really simple and clean, with no dependency libs, so I recommend doing a quick code review; most questions can be answered that way. For example, because this lib is standalone, you could easily download it and bundle it into your repo as a sub-package, do some modifications, remove unneeded files, and change it into a text logger. (This lib always focuses on JSON logging and has no motivation to change.)
I was not able to write the JSON file on the server. Looking at the config, do you see any mistake I made, maybe with file permissions or something? After I am able to write to the JSON log file, I will check how this performs in a critical case. Can you help me with the `FileWriter` as per the config I posted?
Please disable the `TimeFormat: "-"` related logic, then try again.
And if you are looking for a high-performance text logger, this lib does not meet your requirement.
> please disable the `TimeFormat: "-"` related logic then try again.
Yes, it worked, but:

```
{"time":"2020-12-11T17:12:51.696Z","level":"debug","message":"Total Time taken 156067 ms"}
```

The latency is still high.
Does this high latency have anything to do with using `Msgf(format, args...)` in my logger wrapper? Also, since you mentioned using only the JSON formatter for better performance, and you use it yourself for RTB: how do you process these JSON logs for analysis? Can you suggest something?
`Msgf` should not be used in a hot path; for your case, `.TimeDiff` is a good choice, and console output should also be disabled in the hot path. For online debugging, I use jq. For OLAP, I use clickhouse.
If you avoid `Msgf` usage and the latency is still high, I would suspect that the root cause is not related to the logging lib.
Can the number of arguments after the format cause such a delay in `Msgf()`? Mostly I have 3 parameters (a unique id/name and a message), but some logs contain a few flag values and more arguments.
Also, I didn't understand why you mentioned `TimeDiff`; the time difference I gave as output is calculated by:
```go
// at start
start := time.Now().UnixNano() / (int64(time.Millisecond) / int64(time.Nanosecond))
// at end
end := time.Now().UnixNano() / (int64(time.Millisecond) / int64(time.Nanosecond))
Log.Trace("%s %s Total Time taken %d ms", "<id>", <name>, end-start)
// I have logged all the logging statements like the above throughout my Go app.
```
Maybe this isn't the correct way to measure the latency, and everything else is working fine; is just calculating the difference taking up all the time?
```go
// at start
start := time.Now()
// at end
file_logger.Debug().Str("id", "<id>").Str("name", "<name>").TimeDiff("total_time", time.Now(), start).Msg("")
```
But the performance difference would be minor, so I suppose the performance issue is not related to the logging lib.
You should add more `TimeDiff` logs to see what the root cause is. And `.Str("id", "<id>").Str("name", "<name>")` could be pre-formatted with `Context`, which could save CPU (and then serve more QPS).
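For reference, the contextual-fields approach might look roughly like this. This is a configuration sketch, not runnable standalone: `log.NewContext` and the `Context` field are as documented in phuslu/log's README, and the field values and filename are placeholders.

```go
// Pre-encode the constant fields once, instead of on every log call.
ctx := log.NewContext(nil).Str("id", "<id>").Str("name", "<name>").Value()

fileLogger := log.Logger{
	Level:   log.DebugLevel,
	Context: ctx, // appended to every entry without re-encoding
	Writer:  &log.FileWriter{Filename: "app.log"},
}

start := time.Now()
// ... handle the request ...
fileLogger.Debug().TimeDiff("total_time", time.Now(), start).Msg("")
```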
Besides, if you're coding an RTB-like app, I strongly recommend the fasthttp + fastjson + quicktemplate + phuslog + fastcache stack. Other libs are far from these goals, really. (Don't waste your time.)
> ```go
> // at start
> start := time.Now()
> // at end
> file_logger.Debug().Str("id", "<id>").Str("name", "<name>").TimeDiff("total_time", time.Now(), start).Msg("")
> ```
So by `TimeDiff` you mean the func in the lib; thanks for the suggestion. I will also check another way, which is to use `time.Since`. Also, I previously used zap structured logging along with logrus, and there were issues there as well; now I need to change the app to opt for structured phuslu logging in all log statements.
I will be adding more `TimeDiff` fields as you mentioned to find out what happens. When not logging anything there is no latency, and logging only one line per request doesn't impact performance, but everything comes to a halt when using debug mode.
> Besides, if you're coding an RTB-like app, I strongly recommend the fasthttp + fastjson + quicktemplate + phuslog + fastcache stack. Other libs are far from these goals, really. (Don't waste your time.)
Thanks. I have used fastjson/bigcache and have started to implement phuslu. I will surely use the whole stack you mentioned for my app.
fasthttp/fastjson/quicktemplate/fastcache were written by valyala for programmatic advertising. zap/logrus were not; there was no Go logging lib for that purpose until this one. I made it dependency-free, plain, and easy to modify, so everyone can bundle and customize it.
I tried logging just `time.Since()` in info mode, with "Total time taken" as the only log message, and everything looks OK; but the same message in debug mode, with all the other trace messages, looks like this. It sometimes takes over 2000ms to process a request.
I think the cost comes from file IO. You could use a dummy writer, `&log.IOWriter{ioutil.Discard}`, to verify this assumption.
I didn't get it. If I redirect the log output to `ioutil.Discard`, I have no other way to check the latency on a live request; I can only check it in the log file while writing there. The console log has the same issue, as I mentioned in earlier comments. Can you elaborate on what you mean by your statement above?
Assign different `log.Writer`s to the debug logger and the info logger.
1st round: assign the FileWriter to both debug & info; test and verify the latency.
2nd round: assign ioutil.Discard to debug; test and verify the latency.
If the 2nd round goes back to normal, we can confirm the extra latency comes from file IO.
Sorry, but I have no other way to check the latency without logging, so if I redirect the messages to `ioutil.Discard`, I won't be able to know what latency the app is having. If you think the issue is with file IO, what could be the alternative? I am also thinking of rewriting all the log messages in structured format, as you mentioned before; maybe that would solve the issue.
Indeed, structured logging is only ~20% faster than optimized printf logging. For almost any Go app, JSON or printf logging is not the bottleneck, but for RTB it is. What's the solution? There was no solution until this lib. I also provide another logger named `TSVLogger`; I believe it is always the fastest logging solution in Go. Please see https://github.com/phuslu/log/blob/master/tsv_test.go for details.
As usual, I use `TSVLogger` for almost all logs, e.g. traffic logs; I only use the JSON `Logger` for some quickly changing logs, e.g. user impression logs. TSV logs are hard for humans to read, but they are friendly to some OLAP solutions, e.g. clickhouse.
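A minimal `TSVLogger` sketch, with method names assumed from tsv_test.go (treat this as an assumption and check that file for the exact API; the filename and field values are placeholders):

```go
logger := log.TSVLogger{
	Writer: &log.FileWriter{
		Filename: "traffic.tsv", // hypothetical filename
		MaxSize:  100 * 1024 * 1024,
	},
}

// Each call emits one tab-separated line: timestamp, id, name.
logger.New().Timestamp().Str("<id>").Str("<name>").Msg()
```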
I have been trying to implement the logger in my app and have a few things to clarify, as they are either not present in the readme or I didn't get how to do them. Those are:

1. Does `log.FileWriter` support a formatter, like `ConsoleWriter` has with `ConsoleWriter.Formatter`? Basically, formatting in `FileWriter`.
2. `log.MultiWriter` can write levels to different files; can we have a custom/default log level and lock a file to that level only, in that file or console?
3. Can `lumberjack` be used to rotate the log file in `log.FileWriter`? I am not sure about the performance of using cron for rotating logs.