@ChrisHines
It seems to me that projects that want to pass loggers in contexts can easily write their own LoggerFromContext and NewLoggerContext functions without loss of functionality. That, of course, means there isn't a common contextKey for loggers shared across the Go ecosystem, but why do we need that anyway?
Two scenarios I wonder about: lots of bouncing between passing loggers directly vs. by context, and a proliferation of logger-like values on a context, when crossing packages/APIs of disparate provenance.
I worry about this to the degree that https://github.com/golang/go/issues/56345#issuecomment-1367739614 resonates as an argument that there are use cases for slog where a Logger must be on a context - if it's inevitable, I think it helps to anticipate it.
About the shared contextKey, and passing the logger by context: is it expected that 3rd party libraries, or the stdlib, will use the logger from the context too?
What if I pass context to (sql.DB).ExecContext, intending context for cancellation, and now my logs will contain user passwords, or any other sensitive data? Because I updated the sql driver, and it started to support slog.
The library could think that I passed the logger to it on purpose. Otherwise, it wouldn't be there, right?
And how is it even possible to remove the logger from a context? Overwriting it with nil will fall back to slog.Default(), so I need to set that to nil too? Or are we waiting for the context merge proposal to solve it?
Otherwise, it looks like the first line of my func main() will contain slog.SetDefault(nil) once slog reaches the std lib. Actually no, because that could panic. So I will need to implement and set a no-op logger on my own.
With the global log.Logger it's easy: most libraries are polite and don't use it. Here it's quite hard to detect from the library's point of view.
But if libraries aren't meant to use slog from the context, why do we need the shared contextKey at all? Libraries haven't been trying to get a logger from the context so far, because of the ecosystem fragmentation. IMO that will change once slog is part of the std lib.
What if I pass context to (sql.DB).ExecContext, intending context for cancellation, and now my logs will contain user passwords, or any other sensitive data?
Why would they?
What if I pass context to (sql.DB).ExecContext, intending context for cancellation, and now my logs will contain user passwords, or any other sensitive data?
Why would they?
It's in the next sentence: Because I updated the sql driver, and it started to support slog.
The point is that I then pass the context, with the logger in it, down to the 3rd party code.
Until now, the key was unknown, and there is no way to iterate all the context values. So if a 3rd party library wanted to log, it explicitly asked for a logger.
But now, with slog, the key is well-known, and we can expect that some libraries will try to obtain the logger from the context. Is that expected, or is it an antipattern? If it is an antipattern, why is there a need for a well-known contextKey?
Until now, the key was unknown, and there is no way to iterate all the context values.
I still don't get it: how would adding a well-known logger to the context help that 3rd party code iterate other context values?
Besides, neither the Logger nor the Handler interface exposes any means to access the handler's Attrs; even if you did go and add sensitive Attrs to logs yourself, that would not give the 3rd party users of the logger access to those.
There is clearly a lot of controversy around the idea of adding a logger to a context. That feature is independent of the rest of the proposal, and easy for anyone to implement outside of the standard library. So we've decided to remove the NewContext, FromContext and Ctx functions. We'll put them in another proposal if this one is accepted. CLs to update the design doc and the golang.org/x/exp/slog package are in flight.
Is there a particular reason for the decision to have WithAttrs([]Attr) and not WithAttrs(...Attr)? It seems like the variadic form would be more consistent.
@flowchartsman, Handler.WithAttrs is not intended to be called by user code. (Indeed, no Handler method is.) The Logger's job is to convert the arguments of Logger.With to a slice and pass it to Handler.WithAttrs.
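For illustration only (a sketch against the golang.org/x/exp/slog API under discussion): user code passes alternating keys and values to Logger.With, and the Logger turns them into a []slog.Attr before handing them to Handler.WithAttrs.
logger := slog.New(slog.NewTextHandler(os.Stderr))
logger = logger.With("request_id", "abc123", "attempt", 2)
// From the handler's point of view, the With call above is roughly equivalent to:
//   handler = handler.WithAttrs([]slog.Attr{
//       slog.String("request_id", "abc123"),
//       slog.Int("attempt", 2),
//   })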
This proposal has not mentioned how to handle slices.
@QuantumGhost, it's there implicitly. The doc for TextHandler.Handle says:
If a value implements encoding.TextMarshaler, the result of MarshalText is written. Otherwise, the result of fmt.Sprint is written.
So a slice of ints would print out like [1 2 3]. The JSONHandler will use encoding/json.Marshal to output a JSON array.
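A tiny illustration of that behavior, with approximate output shown in the comments (not captured from a real run):
logger := slog.New(slog.NewTextHandler(os.Stdout))
logger.Info("sizes", "dims", []int{1, 2, 3})
// TextHandler output ends with something like: dims="[1 2 3]" (quoted because the value contains spaces)

slog.New(slog.NewJSONHandler(os.Stdout)).Info("sizes", "dims", []int{1, 2, 3})
// JSONHandler output includes something like: "dims":[1,2,3]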
Here is a small bit of feedback on a specific detail.
The level numbers below don't really matter too much. Any system can map them to another numbering scheme if it wishes. We picked them to satisfy three constraints.
Second, we wanted to make it easy to work with verbosities instead of levels. As discussed above, some logging packages like glog and Logr use verbosities instead, where a verbosity of 0 corresponds to the Info level and higher values represent less important messages. Negating a verbosity converts it into a Level. To use a verbosity of v with this design, pass -v to Log or LogAttrs.
const (
	LevelDebug Level = -4
	LevelInfo  Level = 0
	LevelWarn  Level = 4
	LevelError Level = 8
)
I've often been baffled by logging packages that put the level values in this order. In my mind levels and verbosity are equivalent. LevelDebug is more verbose than LevelInfo.
If the numbering scheme doesn't matter then it seems simpler to me to satisfy the interoperability with glog and Logr style verbosities by renumbering slog's levels as follows:
const (
	LevelError Level = -8
	LevelWarn  Level = -4
	LevelInfo  Level = 0
	LevelDebug Level = 4
)
That would remove the negation required to convert numeric verbosities to an slog.Level, which seems like a small but nice simplification to code and docs.
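For concreteness, a minimal sketch of the conversion in question, assuming a glog/Logr-style verbosity v (0 = Info, larger = chattier) and a *slog.Logger named logger:
v := 2
logger.Log(slog.Level(-v), "detail") // negation required with the proposed numbering
// With the reversed numbering suggested above, this would simply be:
// logger.Log(slog.Level(v), "detail")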
Moving the context API to a future proposal is a welcome solution. Thanks for the change @jba and the feedback @ChrisHines!
I have another suggestion regarding the remaining context API, Logger.WithContext. It would be nice to have access to the context in the handler to let Handler.Enabled respond differently based on the context (e.g. to force logging all lines associated with a given request).
This could be done by either passing the context to Handler.Enabled, or by storing the context on the handler by adding a Handler.WithContext. The latter approach would also make Logger.WithContext more similar to Logger.With and Logger.WithGroup, which do get passed to the handler. I don't know what the performance impact of storing the Context on the Handler would be.
The current design decision to have lower values represent more verbosity in the log levels does not align with the common understanding that higher values should indicate more verbosity. This has also been proven to be a successful concept in low-level system logs, such as the 'log/syslog' implementation. Therefore, I propose reversing the order of the log levels, as shown in the following example:
const (
	// Severity.
	// From /usr/include/sys/syslog.h.
	// These are the same on Linux, BSD, and OS X.
	LOG_EMERG   Priority = iota // 0
	LOG_ALERT                   // 1
	LOG_CRIT                    // 2
	LOG_ERR                     // 3
	LOG_WARNING                 // 4
	LOG_NOTICE                  // 5
	LOG_INFO                    // 6
	LOG_DEBUG                   // 7
)
In addition, I believe that log levels should be interpreted consistently across all architectures. The maximum value for a Go "int" value can vary depending on the architecture and operating system, which can cause discrepancies in log level interpretation. To ensure optimal long-term compatibility, I propose using a more concrete type, such as 'uint16' or 'uint32', for the log level.
Furthermore, I believe that the current constraint of only three custom level spaces between predefined values is not sufficient, given that the goal is to provide room for schemes with named levels between the predefined levels. Therefore, I propose using an 'int8' Level with the following constants:
const (
	LevelQuiet  Level = -128 // never log anything
	LevelError  Level = -86
	LevelWarn   Level = -44
	LevelNotice Level = 0 // default when level is not set
	LevelInfo   Level = 42
	LevelDebug  Level = 84
	LevelAlways Level = 127 // log always regardless of the level
)
An implementation of this approach can be found at the following link: https://go-review.googlesource.com/c/exp/+/462049
@jellevandenhooff Handler already gets access to the context. It gets passed along with each record precisely for this purpose.
@antichris My suggestion is to let Handler.Enabled have access to the context. That happens before the Record you mention is created (see https://github.com/golang/exp/blob/1de6713980dea447778ef6e71194d5eb54288072/slog/pc.go#L20). My goal is to let a handler change the verbosity level based on the context, to for example log all lines for special debug requests. A handler could always return true from Enabled and filter the records in Handle, but it is more efficient to do so in Enabled since returning false there short-circuits the remaining log processing.
It would be nice to have access to the context in the handler to let Handler.Enabled respond differently based on the context (e.g. to force logging all lines associated with a given request).
@jellevandenhooff, this is a tempting idea. We wouldn't want Handler.WithContext since that might cause a new Handler to be allocated, and we want logger.WithContext(ctx).Info(...) to be alloc-free. But we could pass the context to Enabled:
type Handler interface {
	Enabled(context.Context, Level) bool
	...
}
I have some reservations, however:

- The context might be nil, so the implementation of Enabled couldn't rely on it.
- By Go convention, a function that takes a context as a first argument can expect that context to be non-nil, and should obey context cancellation. Neither applies here. (We get around that for Handler.Handle by putting the context in a Record instead of passing it as an arg.)
- Enabled is supposed to be very fast, and context lookup isn't. (Though it's certainly fast compared to outputting a log event.) The only use case that comes to mind—disabling logging by request—would be done better by creating a new Handler at the top of the request. Relevant values could be fetched from the context at that time.
So the question in my mind is, are there use cases that outweigh those concerns?
@ChrisHines, @mkungla: OpenTelemetry has become a de facto standard, so we chose to match the direction and spacing of their level-numbering scheme.
Other systems that treat higher numbers as more severe:
@ChrisHines, @mkungla: OpenTelemetry has become a de facto standard, so we chose to match the direction and spacing of their level-numbering scheme.
@jba, I understand that the goal is to align with the industry standard set by OpenTelemetry. However, upon reviewing the current definition of the log levels, I do not see a direct match to any standard.
const (
	LevelDebug Level = -4
	LevelInfo  Level = 0
	LevelWarn  Level = 4
	LevelError Level = 8
)
I understand that my feedback may not align with the final decision, but my goal is to bring attention to potential issues that may arise from the current design. Having done extensive testing with different use cases, I believe, first, that the current spacing of 4 between levels creates constraints for some unique use cases, and second, that the level type could be much stricter, such as int8 or int16. Therefore, I would like to suggest that the final design decisions be well thought through and consider the potential constraints that may arise from the chosen scheme. I appreciate the time and effort that you and others have put into bringing structured logging to the Go standard library, and I hope that my feedback will be taken into consideration.
@ChrisHines, @mkungla: OpenTelemetry has become a de facto standard, so we chose to match the direction and spacing of their level-numbering scheme.
Fair enough. I had forgotten that part of the design while focusing on the negation of verbosities. The OpenTelemetry ordering is counter intuitive to me, but there is nothing fundamentally wrong with it.
OpenTelemetry has become a de facto standard
This is a claim of OpenTelemetry which is unsubstantiated by actual usage data or experience reports. It is not in general true.
It is not in general true.
You sound sure that all of them are just dumping contributions, not using any of it themselves. How come?
I just tried using this package for real, and have some more feedback.
The very first thing I wanted to do was make one of the standard Handlers add a static prefix to each log entry.
In my specific case, I wanted to add a CEE cookie at the start of each entry output by JSONHandler, for transporting JSON logs over syslog; but there are obviously use cases for prefixing TextHandler output too.
I hoped slog.HandlerOptions might have a prefix option, as that's pretty common in logging packages, including the existing Go stdlib log package.
// This should not be hard to do with slog
log.SetPrefix(os.Args[0] + ": ")
log.Printf("hello sailor")
Currently, implementing the same functionality in slog without changing the calling API seems harder than it should be.
My suggestion would be to add a Prefix string field to HandlerOptions and make both the standard handlers use it, to replicate what we have in the existing log package.
More generally, having the handler constructors be methods on HandlerOptions means every variant JSON or text handler will need to have its own corresponding options struct, which seems inelegant. I'm not sure what a handler options interface would look like though, and obviously the options would then have to be an argument to the handler constructor, rather than the constructor being a method on the options.
@lpar
The very first thing I wanted to do was make one of the standard Handlers add a static prefix to each log entry.
Structured logging defines log events as a set of key=val pairs, and log entries as a specific set of those pairs. There is no concept of a prefix in the sense that you mean here.
@jba I also tried out this package and one practical problem is that neither Level nor LevelVar implements the flag.Value interface, so it's more difficult than it should be to set the log level with the flag.Var function. Could this be adjusted?
Another practical inconvenience is the signature of the Logger.Error function. Since we have to pass both a message and an error, this leads to having to call errors.New in cases where you don't have an err yet, where the error and message often boil down to the same string. Perhaps two different error logging functions, one for string messages and one for error would be more convenient. Zerolog also had Err and Error for these two cases.
Since we have to pass both a message and an error, this leads to having to call errors.New in cases where you don't have an err yet
This is not correct. You can use an untyped nil OR a nil error value and the field will be omitted: https://go.dev/play/p/-GV1iyWzv33?v=gotip. This is documented on the Logger type.
@lpar You could use a Writer that adds a prefix; 3rd-party packages already exist for that. It's not even that difficult to slap such a thing together on your own — here, riffing off an earlier doodle of mine: https://go.dev/play/p/lr7ke9vcrlM
I do agree, though, that having this functionality built into the standard Handlers would be more convenient in those special cases. The question is whether they are common enough to warrant inclusion in the library.
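For illustration, a minimal sketch of such a prefixing writer, assuming each Handle call produces a single Write of one complete line (true of the built-in handlers today, though not guaranteed by the Handler contract):
package main

import (
	"io"
	"os"

	"golang.org/x/exp/slog"
)

type prefixWriter struct {
	w      io.Writer
	prefix []byte
}

func (p *prefixWriter) Write(b []byte) (int, error) {
	// Write the static prefix, then the log line itself.
	if _, err := p.w.Write(p.prefix); err != nil {
		return 0, err
	}
	return p.w.Write(b)
}

func main() {
	w := &prefixWriter{w: os.Stdout, prefix: []byte("@cee: ")}
	slog.New(slog.NewJSONHandler(w)).Info("hello", "name", "Al")
}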
@flowchartsman Your example demonstrates what I want to say. Now we always have to pass a string message, even if we already have an error. That is sometimes redundant, because the error's Error() text could be used as the message instead.
@lpar
The very first thing I wanted to do was make one of the standard Handlers add a static prefix to each log entry.
Structured logging defines log events as a set of key=val pairs, and log entries as a specific set of those pairs. There is no concept of a prefix in the sense that you mean here.
Maybe you should have followed the link I posted. Here's another one. Per CEE draft standards, a static prefix is added to JSON that's sent to syslog, so that log processors can reliably tell whether a given syslog entry is supposed to be JSON or not.
While MITRE's project was defunded, there's CEE support in rsyslog, syslog-ng, LogStash, NXLog, and so on. It's a perfectly normal and reasonable thing to want to do.
@jba I also tried out this package and one practical problem is that neither Level nor LevelVar implements the flag.Value interface, so it's more difficult than it should be to set the log level with the flag.Var function. Could this be adjusted?
slog.Level currently has MarshalJSON but no corresponding unmarshaler. IMO, it should instead have both encoding.TextMarshaler and encoding.TextUnmarshaler, which would allow it to round-trip through different encoding formats as well as be used with flags via flag.TextVar.
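A sketch of the intended usage, assuming slog.Level gains MarshalText and UnmarshalText as suggested here (with only MarshalJSON this does not work today):
package main

import (
	"flag"
	"fmt"

	"golang.org/x/exp/slog"
)

func main() {
	var level slog.Level // the zero value is LevelInfo
	flag.TextVar(&level, "log-level", level, "minimum log level (e.g. DEBUG, INFO, WARN, ERROR)")
	flag.Parse()
	fmt.Println("logging at level", level)
}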
In my specific case, I wanted to add a CEE cookie at the start of each entry output by JSONHandler, for transporting JSON logs over syslog; but there are obviously use cases for prefixing TextHandler output too.
Given that the use case for CEE is to smuggle structured data through syslog, it is probably better for a syslog package to provide such a handler, rather than overloading the JSON and text handlers, which have their own definitions of valid structured data.
There is clearly a lot of controversy around the idea of adding a logger to a context. That feature is independent of the rest of the proposal, and easy for anyone to implement outside of the standard library. So we've decided to remove the NewContext, FromContext and Ctx functions. We'll put them in another proposal if this one is accepted. CLs to update the design doc and the golang.org/x/exp/slog package are in flight.
@jba I had just started implementing a slog Handler for OpenTelemetry when I discovered this. It is unfortunate that these are removed, since I was planning to use them.
easy for anyone to implement outside of the standard library
This is true, but without this being part of the standard library I think most people won't bother, and we will end up with context-less logging statements everywhere, which can no longer be enriched with OpenTelemetry's context.
Let me clarify what the problem is. In the OpenTelemetry world it is very common to have code like this:
func MyHTTPServerHandler(ctx context.Context, ...) {
	var span trace.Span
	ctx, span = tracer.Start(ctx, "my http operation") // This puts trace/span ids in the Context
	defer span.End()
	subOperation(ctx)
}

func subOperation(ctx context.Context) {
	// Call some 3rd party library to do useful work and pass the context.
	// This can also be buried deep down in the call stack.
	specialLibraryFunc(ctx)
}

// This is in some 3rd party library
func specialLibraryFunc(ctx context.Context) {
	// This func knows nothing about OpenTelemetry, but as a good Go citizen uses slog for logging.
	// But how do we obtain a logger? There is no longer a slog.FromContext() func that we can use,
	// so we end up using the default logger. Logs are now context-less:
	slog.Info("foo")

	// Or alternatively we have our own long-lived slog.Logger and use it:
	myLibraryLogger.Info("bar")

	// Either way, no context is passed to the Handler, so even if the Handler is OpenTelemetry-aware
	// it has no way of extracting the OpenTelemetry execution context from the Context so that it can,
	// for example, add the trace/span ids to the output.
}
It seems like the only way this could work with the current slog API is if every function that intends to do logging accepts slog.Logger as a parameter (essentially resulting in almost every non-trivial function having context.Context and slog.Logger as mandatory parameters). One should never store a long-lived Logger and the global Info/Warn/etc functions that use the default Logger should not be used except in some top-level code in main. Is this how you see the idiomatic usage of slog package?
I may be missing something and there is indeed a way to make this work with the current slog API, perhaps you can help me understand how.
UPDATE: one more way this could work is if everybody used Logger.WithContext() everywhere, e.g. specialLibraryFunc above becomes this:
func specialLibraryFunc(ctx context.Context) {
	myLibraryLogger.WithContext(ctx).Info("foo")
}
This perhaps can work if it is advertised and encouraged as the idiomatic way to use slog: if you have a Context, make sure to use it for logging. My worry with this approach is that it is at best a recommendation, and I am not sure how often people just won't bother to do this, since it is so much simpler to just call slog.Info().
The way I've seen it, slog.Logger corresponds more to trace.Tracer than to trace.SpanFromContext, even though a call to logger.Info() might map more to span.AddEvent().
The functions I've seen that will want to log, usually log more than once, and commonly have a fixed set of attributes to be shared in those calls anyway.
func specialLibraryFunc(ctx context.Context, arg string) {
	log := myLibraryLogger.WithContext(ctx).With("op", "x", "arg", arg)
	log.Debug("...", "fizz", "buzz")
	log.Info("...", "foo", "bar")
}
I think it ties in with an earlier request to break out attributes (https://github.com/golang/go/discussions/54763#discussioncomment-3775797), so you could set up shared attributes once and reuse them between loggers and spans:
func specialLibraryFunc(ctx context.Context, arg string) {
	as := []attrs.KV{{"op", "x"}, {"arg", arg}}
	ctx, span := myLibraryTracer.Start(ctx, "special")
	defer span.End()
	span.SetAttributes(as...)
	log := myLibraryLogger.WithContext(ctx).With(as...)
	log.Debug("...", "fizz", "buzz")
	log.Info("...", "foo", "bar")
}
one should never store a long-lived Logger and the global Info/Warn/etc functions that use the default Logger should not be used except in some top-level code in main.
Which brings back a point I made earlier of isolating the global/default loggers (https://github.com/golang/go/discussions/54763#discussioncomment-3508476), making it clear they are a special case rather than a sustainable way of working.
It would also resolve the issue of defaultHandler having a different format than any of the exposed handlers.
Just to be clear, I think the ideal approach from OpenTelemetry's perspective would be to add context.Context as a required parameter to Info/Warn/etc functions (both top-level ones that work on the default Logger and the Logger methods).
I know this was considered and rejected in the past, in favour of supporting NewContext, FromContext and Ctx functions. With those now gone, perhaps you can reconsider adding context.Context as a parameter.
Just to be clear, I think the ideal approach from OpenTelemetry's perspective would be to add context.Context as a required parameter to Info/Warn/etc functions (both top-level ones that work on the default Logger and the Logger methods).
I know this was considered and rejected in the past, in favour of supporting NewContext, FromContext and Ctx functions. With those now gone, perhaps you can reconsider adding context.Context as a parameter.
For that use case there's the method Logger.WithContext (https://pkg.go.dev/golang.org/x/exp/slog#Logger.WithContext) that passes a context through to the handler. That method is still there. The handler can then extract a trace ID and include it in the output. This looks like logger.WithContext(ctx).Info("got a request"). Including a context argument with every log method was considered but decided against by @jba and others because they did not want to make the context mandatory.
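A rough sketch of such a handler against the x/exp/slog API discussed here, assuming the context supplied via Logger.WithContext is available on the Record as r.Context; traceIDFromContext is a hypothetical stand-in for the OpenTelemetry lookup.
type traceHandler struct{ next slog.Handler }

func (h traceHandler) Enabled(level slog.Level) bool { return h.next.Enabled(level) }

func (h traceHandler) Handle(r slog.Record) error {
	if id, ok := traceIDFromContext(r.Context); ok { // hypothetical helper
		r = r.Clone() // copy before adding attributes so no state is shared with the caller
		r.AddAttrs(slog.String("trace_id", id))
	}
	return h.next.Handle(r)
}

func (h traceHandler) WithAttrs(attrs []slog.Attr) slog.Handler {
	return traceHandler{next: h.next.WithAttrs(attrs)}
}

func (h traceHandler) WithGroup(name string) slog.Handler {
	return traceHandler{next: h.next.WithGroup(name)}
}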
For that use case there's the method Logger.WithContext (https://pkg.go.dev/golang.org/x/exp/slog#Logger.WithContext) that passes a context through to the handler. That method is still there.
The problem is that using this method is completely optional. I think the result will be that most code out there won't use it. If I am an independent library developer and want to use slog, why would I bother to pepper my code with WithContext everywhere? What's in it for me?
My concern is that the current slog design does not define the idiomatic usage in a way that will result in context-preserving logging. It is too easy (in some ways even encouraged) to do the simple and probably the wrong thing (just call slog.Info() and why not, since it is easy).
So, most third party code will end up with logging that fails to recognize the current execution context. In case of OpenTelemetry this greatly diminishes the value of emitted telemetry. One of OpenTelemetry's goals is to emit correlated telemetry where logs explicitly include traceid/spanid of the request they were executed in the context of.
This works great in languages where the context is passed implicitly in thread local storage, e.g. for Java OpenTelemetry has an Appender (equivalent to slog.Handler) implementation where anything your application or your application's dependencies (even the ones created well before OpenTelemetry was incepted) log using one of the standard logging libraries (e.g. Log4J) automatically is associated with the traceid/spanid in the context of which the logging happened. This is great and highly valuable.
With the current design of slog we most likely won't have this automatic and seamless trace/span id injection into logs. The only way to perform this injection would be if the code is deliberately and carefully written to use WithContext when logging. This is fine for greenfield development where you write all of your code knowing how to use slog and OpenTelemetry correctly together. For a lot of other code I suspect this won't be the case, simply because the author of the code will have no idea it matters.
Just to be clear, I think the ideal approach from OpenTelemetry's perspective would be to add context.Context as a required parameter to Info/Warn/etc functions (both top-level ones that work on the default Logger and the Logger methods).
I know this was considered and rejected in the past, in favour of supporting NewContext, FromContext and Ctx functions. With those now gone, perhaps you can reconsider adding context.Context as a parameter.
It will be really unfortunate if, when users initially start to use slog, it isn't compatible with things like OpenTelemetry or other context-aware logging exporters. It will require these users to go back through their implementations again to add all the additional logging-library context handlers so those libraries are supported.
The problem is that using this method is completely optional. I think the result will be that most code out there won't use it. ...
My concern is that the current slog design does not define the idiomatic usage in a way that will result in context-preserving logging. It is too easy (in some ways even encouraged) to do the simple and probably the wrong thing (just call slog.Info() and why not, since it is easy).
Agree. Internally, we use our own logger wrapper with a reduced interface. It hides the *f/Sprintf methods and reduces the log levels.
But to the point: it also contains a WithContext(ctx context.Context) Logger method. And I can say it's very easy to forget to add it to the chain. And you find yourself with your pants down when you need the logs most.
I'm really planning to make the context mandatory.
BTW I also love slog's idea of a mandatory error for the Error log level.
There is clearly a lot of controversy around the idea of adding a logger to a context. That feature is independent of the rest of the proposal, and easy for anyone to implement outside of the standard library. So we've decided to remove the NewContext, FromContext and Ctx functions. We'll put them in another proposal if this one is accepted. CLs to update the design doc and the golang.org/x/exp/slog package are in flight.
While the mechanics of implementing NewContext() and FromContext() are simple (I have implemented these for loggers in a number of systems at this point, and it takes mere minutes), independent implementations of these functions will not be interoperable. Having a common implementation allows library and application authors to use the interface without worrying about whether other users will be able to interact with the same component. It also avoids having multiple different implementations store multiple copies of the same (or worse, different) loggers in the context under different keys.
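For reference, a minimal sketch of the per-project helpers being discussed (using the *slog.Logger form that appears elsewhere in this thread); the unexported key type is exactly why independently written copies cannot interoperate:
package logctx

import (
	"context"

	"golang.org/x/exp/slog"
)

type contextKey struct{}

// NewContext returns a copy of ctx that carries l.
func NewContext(ctx context.Context, l *slog.Logger) context.Context {
	return context.WithValue(ctx, contextKey{}, l)
}

// FromContext returns the Logger stored in ctx, falling back to slog.Default().
func FromContext(ctx context.Context) *slog.Logger {
	if l, ok := ctx.Value(contextKey{}).(*slog.Logger); ok {
		return l
	}
	return slog.Default()
}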
As for the argument that providing these functions will lead to a slippery slope of everyone throwing functional objects into the context instead of scalar values just because the stdlib has done it, that ship has already sailed. I can't remember ever hearing anyone justify putting functional objects into a context by pointing at httptrace.
For that use case there's the method Logger.WithContext (https://pkg.go.dev/golang.org/x/exp/slog#Logger.WithContext) that passes a context through to the handler. That method is still there.
The problem is that using this method is completely optional. I think the result will be that most code out there won't use it. If I am an independent library developer and want to use slog, why would I bother to pepper my code with WithContext everywhere? What's in it for me?
On top of its use being optional, it also doesn't provide a mechanism for associating a given Handler and attributes set earlier in the call path. I don't see how this is a meaningful improvement over the current situation where libraries either use the global default logger (which is often undesired by the application author/operator) or avoid logging altogether. We have an opportunity here to meaningfully improve the control that application authors and operators have over the information emitted by libraries that they use but do not own.
Storing a context inside the logger seems wrong. What are the semantics even supposed to be? Cancel the logger? I'm honestly confused about what exactly I'm getting out of calling logger.WithContext(ctx)?
Now, passing the logger inside a context is certainly extremely common. Not sure that needs a standard library function necessarily, but it will appear in most codebases either way. Generally, they should be dependency-injected I guess.
If the logger implementation needs access to the logger's attributes, why not add an API on the logger to retrieve them, instead of passing the whole context?
Storing a context inside the logger seems wrong. What are the semantics even supposed to be? Cancel the logger? I'm honestly confused about what exactly I'm getting out of calling logger.WithContext(ctx)?
Context is not only for cancellation; it may contain any request-specific data, including the deadline. In that case you need to associate data from the context with every log record (imagine tracing information, A/B testing values, etc.).
Now, passing the logger inside a context is certainly extremely common. Not sure that needs a standard library function necessarily, but it will appear in most codebases either way. Generally, they should be dependency-injected I guess.
This seems very wrong, though it may be very common. The logger in general is associated with a code location, not with a request (as a context is).
If the logger implementation needs access to the logger's attributes, why not add an API on the logger to retrieve them, instead of passing the whole context?
Because then every developer needs to do that manually (grab the attributes), and if the attributes live in multiple places inside the context this becomes very ugly. Instead you pass the context, and you have a Handler (processor) with which every library that has "attributes" in the context can extract them and add them to the log record. This way the usage becomes very simple: you just pass the context all the time, and then install whichever Handler grabs the attributes you want from the context.
Re. passing the context to Enabled:
we want logger.WithContext(ctx).Info(...) to be alloc-free
Ah, that makes sense. This helps me understand why the Logger has the context field on it.
- The context might be nil, so the implementation of Enabled couldn't rely on it.
- By Go convention, a function that takes a context as a first argument can expect that context to be non-nil, and should obey context cancellation. Neither applies here. (We get around that for Handler.Handle by putting the context in a Record instead of passing it as an arg.)
The API does seem awkward. Most users of slog would never notice... so maybe an ugly struct RecordInfo { Context context.Context; Level slog.Level } is acceptable?
- Enabled is supposed to be very fast, and context lookup isn't. (Though it's certainly fast compared to outputting a log event.) The only use case that comes to mind—disabling logging by request—would be done better by creating a new Handler at the top of the request. Relevant values could be fetched from the context at that time.
I wrote a little benchmark to explore the performance of context lookup, Enabled, Handle, and outputting logs in https://gist.github.com/jellevandenhooff/8eb0830d7c183dc91b6a2385afd8d1de#file-slog_test-go:
On my machine, a simple enabled log line takes ~730ns; a disabled log line takes ~22ns; a log line disabled in Handle based on a context value takes ~380ns; a log line disabled in Enabled based on a semi-realistic context value takes ~40ns. Given these numbers, doing a context lookup for every disabled log line is not much more expensive, especially since it would be opt-in behavior by a handler.
So the question in my mind is, are there use cases that outweigh those concerns?
I think enabling package-local/struct-based loggers based on a context value is a valid use-case, for applications that do not want to pass loggers around. The performance overhead to me seems reasonable, especially since it would be opt-in. The API wouldn't be pretty but it might be okay.
(As an aside, if context.Value becomes too expensive, an optimization deep in the context guts might be to store a small map or array for frequently-used keys. I couldn't find an issue suggesting that yet, so perhaps context.Value really is quite fast.)
@jellevandenhooff
(As an aside, if context.Value becomes too expensive, an optimization deep in the context guts might be to store a small map or array for frequently-used keys. I couldn't find an issue suggesting that yet, so perhaps context.Value really is quite fast.)
I've wondered about that as well. With something like:
type Trace struct {
	context.Context
	*slog.Logger
}
It's possible to type assert from context.Context to a Trace, because a context.Context is an interface. I'm not sure I'm suggesting that this is a great idea, but ... I think the nature of context, slog, and any kind of tracing starts to integrate different data models.
- context is sort of overwrite-oriented and uses keys for isolation and security, and can have edge cases around persistence and cancellation
- slog is append plus namespacing - more things are possible, but generally sort of eager and one-way only
- tracing and metrics are similar to slog in the dynamics of the data model, but as I understand a bit more dynamic (e.g. it might not make sense to immediately commit an observation of a span or a metric to a Handler).

The combinatorial nature of the way these data models / implied concerns can interact feels ... elaborate; I wonder if an API coupling tracing and context makes more sense than logging and context.
Just to be clear, I think the ideal approach from OpenTelemetry's perspective would be to add context.Context as a required parameter to Info/Warn/etc functions (both top-level ones that work on the default Logger and the Logger methods).
I came here to say this myself.
I am currently instrumenting a project with OpenTelemetry and after reading the spec for logs, it seems important to always include the context in each log statement.
I also think the use case can be generalized.
OpenTelemetry clearly needs this ability since it would very much like to add the TraceID and SpanID to the log record. But even in a typical web service, I can think of situations where this would be useful.
While the above examples may be more useful for traces, it is also very helpful to have in logs especially when there is no full tracing setup.
Including a context on log statements feels correct. The documentation says:
// Use context values only for request-scoped data that transits
// processes and API boundaries, not for passing optional parameters to
// functions.
Dynamic, parent-set log attributes fit the description.
Since cancellation signals are ignored, it may seem like using context.Context is not the right tool, but this is only because context.Context serves a dual purpose in Go. If hypothetically we had split Cancel and Values, then it would be easier to make the distinction.
If the log/slog package does not require a context in the logging functions, I believe that passing the logger through the context will become even more popular as there is no other way to pass dynamic parent-generated values, and requiring WithContext on every call is likely going to be forgotten.
My concern is that the current slog design does not define the idiomatic usage in a way that will result in context-preserving logging.
Correct, logging does not necessarily occur in the context of a context, and a logging package in the stdlib should not assume that a context is available.
edit: I'm not sure why anyone would thumbs-down this comment, as it should be an uncontroversial statement of fact.
Is there a place to redirect the logger-in-context discussion, since it's not part of this proposal anymore? Or do we want to keep it here in case log-in-ctx is re-included in the proposal?
I have spent many hours over the last few weeks reviewing this proposal, including a 75 minute video chat with the author. Thanks for taking the time, @jba, it was a privilege.
This is a big proposal that I think requires contemplation and experimentation for an extended period of time before we bring its API under the Go 1 compatibility promise. If successful, slog has the potential to become a highly depended-upon package. We should give the community plenty of time to gain experience with slog and provide feedback based on that experience.
Although I think the current design is a good start, there are use cases it doesn't serve well enough yet that probably haven't been voiced by anyone. I also suspect there are rough edges we have yet to discover that deserve more time to sand down. I don't think we will discover all the rough edges until enough people have tried to use slog on real projects at realistic scale. We haven't had enough time to do that yet.
I have several observations and suggestions to share that I hope will aid that effort.
I think a structured logging package that aims to be widely adopted should treat flexibility as the highest-priority design criterion. Projects that use structured logging often do so because they intend to collect, search, aggregate, and analyze logs in a centralized way. In my experience, applications operating in these environments must often produce logs that meet strict formatting and content requirements. Key names and value formatting must be compatible with the downstream log analytics system. These requirements are often arbitrary and based on historical precedent. A logging library that makes it difficult or impossible to meet those requirements will cause frustration or will not get used.
A flexible structured logger should:
Each of these capabilities should be decoupled from the rest. In particular, it should be easy to compose cross-cutting logging-policy middleware with any log formatter.
After flexibility comes ergonomics, and finally performance.
I think the current slog Handler and Record designs are less flexible and composable than they should be and we should see if we can improve on that.
I don't see a way to implement a middleware slog.Handler that modifies the attributes of a Record in some way before passing them on to another Handler. It would need to create a new record with the modified attributes, but there is no good way to do that. slog.NewRecord is inappropriate because the handler would not know what to pass to the calldepth parameter. Record.Clone avoids that problem, but also copies all the attributes, which are read-only, so the handler cannot change them. There should be some way to get a copy of a record without its attributes so that a handler could construct a new attribute set for the new record according to its needs.
If we could write Handlers that can modify the attributes of a Record, then we could almost write handlers that generalize the ReplaceAttr option provided by the built-in slog Text and JSON handlers and eliminate ReplaceAttr from the design.
We could only almost eliminate ReplaceAttr, because a middleware handler would still have some difficulty modifying the implied attributes carried by the Time, Message, and Level fields of the Record. It could easily modify their values, but controlling their key names isn't so easy. If we converted those three fields into full-blown Attrs on the Record, then middleware Handlers could treat them exactly like any other log data. If we did that, then we would probably want to add an ability to set the default key names for those three fields on an slog.Logger. Getting the keys right from the moment they are created is more efficient than creating them with the wrong key and requiring middleware to copy and rename them.
I think making the changes suggested in this section would be a big win for flexibility, composability, and orthogonality of Handlers.
Consider replacing Record.SourceLine() (file string, line int) with Record.Frame() runtime.Frame. As a Handler implementer I would want access to all the fields of the Frame, not just the file and line number. I can't think of any reason to limit access to only those fields and thus limit the possibilities for Handler implementations.
Going even further, though, source code location is another implied Attr on a Record that is controlled by a field on HandlerOptions which is specific to the built-in Handlers. Maybe it could be a LogValuer that captures the necessary stack information at creation and logs the file and line later. The Logger could add that as a regular Attr to the Record it sends to the Handler if the application wants file:line info.
Removing the unexported pc uintptr field from Record and moving the AddSource configuration field from HandlerOptions to Logger would remove the sticking point with using NewRecord in middleware handlers I described above. In addition, the Logger could avoid calling runtime.Callers, which is not cheap, if it isn't needed. As far as I can see, slog.Logger currently calls runtime.Callers on all logging code paths unless you disable it at compile time with the nopc build tag.
The Handler interface docs do not specify when a LogValuer's value should be resolved by Handlers. This can lead to inconsistent values being logged when switching Handlers. In particular, there is no specification of when Handlers should resolve LogValuers passed to WithAttrs. Should they be resolved eagerly, or with each call to Handle? The built-in handlers resolve them eagerly, but other handlers might find it easier to wait, and that can produce quite different results if the LogValue method is not idempotent (e.g. https://go.dev/play/p/wFPw0VMaYvb).
I understand the built-in handlers demonstrate how high-performance loggers can be built with slog, so they eagerly preformat the attributes passed to WithAttrs. But should the user expect these two functions to log the same thing given the same arguments?
func foo(log *slog.Logger, v any) {
	for i := 0; i < 5; i++ {
		log.Info("", "v", v, "i", i)
	}
}

func foo2(log *slog.Logger, v any) {
	log = log.With("v", v)
	for i := 0; i < 5; i++ {
		log.Info("", "i", i)
	}
}
Is it OK that refactoring from foo to foo2 will work the same for all types of v except some implementations of LogValuer with some implementations of Handler.WithAttrs? I think this may surprise people.
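A tiny example of the kind of non-idempotent LogValuer that makes foo and foo2 diverge (a sketch against the x/exp/slog API): each resolution yields a fresh timestamp, so a handler that resolves eagerly in WithAttrs freezes the value captured by log.With, while one that resolves on each Handle call does not.
type currentTime struct{}

func (currentTime) LogValue() slog.Value {
	return slog.TimeValue(time.Now())
}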
I think this is the strongest part of the proposal. Zap fields showed us how to reduce allocations when logging. I think slog Attr and Value are a step forward in ergonomics and utility from there. Exporting a separate Value type is a subtle but big improvement on the concept. It allows both the LogValuer interface and GroupValue ideas, both of which are improvements on zap's equivalents.
I do have a couple of feature requests, though.
For background, my current project at $work has a logging package that has a Field type similar to slog.Attr and zap.Field. But our application functions that create log events do not call the primitive field constructors such as slog.Int or slog.String directly. We want to avoid human error in the key names (because our log analytics system is very picky). To help keep our log data consistent we have a data dictionary that we use to generate our analytics index definitions and also packages of Go functions that match it. Using slog types, one of the generated functions might look like:
func SubComponent(value string) slog.Attr {
	return slog.String("sub-component", value)
}
This function allows developers to log a subcomponent in a type safe way and not need to remember the proper spelling of the key. (It's not shown above, but the doc comment on the function also contains useful information about this attribute to help us know what it is for, all of which is generated from the data dictionary.)
We call these "log helper functions", and I will refer to this idea below.
Use case: Sometimes log helper functions need to conditionally return an attribute that will not get logged. The most common is skipping a nil value.
The doc comment on slog.Handler.Handle currently includes the following advice:
// Handle methods that produce output should observe the following rules:
// - If r.Time is the zero time, ignore the time.
// - If an Attr's key is the empty string, ignore the Attr.
It suggests a convention that attributes with an empty key be treated like a zap.Skip, but I think a Skip Kind is better for three reasons, among them that it allows a LogValuer to return a value that won't be logged.

Use case: We have some error types that have an embedded numeric error code that we want logged along with the error message. Our log format is currently a flat set of k/v pairs, without any nested namespaces, so we need the error message and code logged at the top level of the key space. We currently log this error type using our version of the zap.Inline function that logs a list of k/v pairs without nesting them in another namespace.
slog already has some of the necessary pieces. slog.GroupValue can hold a flat collection of attributes, but it's not clear how to log a GroupValue without nesting it inside another key. It's tempting to think an empty string key might work, but a Handler that follows the docs on Handler.Handle quoted in the previous section would then ignore the whole group.
I am sure many will disagree with this section, and that's fine, but maybe I can persuade a few more people. :)
I don't think a structured logging package should provide a default logger or top level functions that log to it. I first argued this case while developing the Go kit log package many years ago, and I haven't seen anything to change my opinion since.
Creating new "contextual" loggers with pre-defined attributes and passing them into lower-level functions to ensure some data from the outer scope is present in all log events of the inner scope is a central feature of structured logging. Nearly all code outside of func main that logs should be receiving a logger from a higher scope to ensure it includes any attributes added there. Providing a default logger and top-level functions that operate on it seems of limited use and more likely to be used when they should not than otherwise.
We've recently discovered this category of mistake in our code at $work and it was easy to overlook in code review. Most of our code uses log as the variable name for the local logger, and the package name (clog in our case, slog here) is only one letter different. So slog.Info is easy to write when log.Info should have been, and the code will compile and appear to work. We've even found code that used both forms within a single function that went unnoticed until we deprecated and began removing the top-level functions from our logging package. We have completed that refactor now, and during the process we found essentially zero cases where a local logger wasn't available. In the few cases where a local logger wasn't available it was a bug. We also found some test code that was using the logging package instead of testing.T.Log as it should have been.
I am not sure about the integration with the existing log package. By "not sure" I mean something doesn't seem right to me, but maybe I haven't thought it through enough.
Why have slog write to log by default, but then have that flow reverse after calling slog.SetDefault?
The ability to forward unstructured logs from a log.Logger to an slog.Logger is useful; many other structured logging packages have provided that ability. But why does slog only provide that for log.Default() and not make it easy to do the same thing for any log.Logger?
I would like to connect a log.Logger to a structured logger so I can assign the log.Logger to the ErrorLog field of an http.Server. When an application has more than one http.Server and it's important to differentiate their errors in logs, I'd want to give them each their own log.Logger connected to slog.Loggers with attributes identifying the server instance.
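A sketch of one way to do that with the current API, assuming each log.Logger call results in a single Write of one line; the names here are illustrative, not part of the proposal:
package main

import (
	"log"
	"net/http"
	"os"
	"strings"

	"golang.org/x/exp/slog"
)

type slogWriter struct{ l *slog.Logger }

func (w slogWriter) Write(p []byte) (int, error) {
	// Forward each line from the log.Logger as a structured Info record.
	w.l.Info(strings.TrimSuffix(string(p), "\n"))
	return len(p), nil
}

func main() {
	base := slog.New(slog.NewTextHandler(os.Stderr))
	adminSrv := &http.Server{
		Addr:     ":8081",
		ErrorLog: log.New(slogWriter{base.With("server", "admin")}, "", 0),
	}
	_ = adminSrv.ListenAndServe()
}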
I suspect these have been mentioned by others before. If so, sorry for repeating them.
Here is a copy of the first example of using slog in the proposal.
func main() {
	slog.SetDefault(slog.New(slog.NewTextHandler(os.Stderr)))
	slog.Info("hello", "name", "Al")
	slog.Error("oops", net.ErrClosed, "status", 500)
	slog.LogAttrs(slog.ErrorLevel, "oops",
		slog.Int("status", 500), slog.Any("err", net.ErrClosed))
}
- It's hard to tell which arguments to Info and Error are the "message" and which are the k/v pairs. The slog.Error line is particularly vexing. It looks like two k/v pairs, but it's really msg, err, k1, v1. This is not good for readability at the call site. People will get used to it, but it becomes yet another special case to remember. Why do that to ourselves? Why make logging an error value work differently for otherwise similar functions?
- slog.ErrorKey is yet another baked-in key name that may need modifying to fit local conventions or requirements, even though it's not one of the fields on slog.Record.
- slog.Error puts the attribute for the err parameter after all the other attributes in the Record. It's a small inconsistency, but it's there and none of the other logging methods do something like that.
- slog Handlers require a ReplaceAttr function to replace the slog.MessageKey with "", otherwise they still include msg="".
@lpar, I'm not sure what a string prefix means for a structured logger. Do you mean a prefix on each key? You should be able to accomplish that with ReplaceAttr.
More generally, having the handler constructors be methods on HandlerOptions means every variant JSON or text handler will need to have its own corresponding options struct, which seems inelegant.
If a handler wanted to use the slog.HandlerOptions struct, its constructor could take it as an argument. There is no need to match the pattern used by the built-in handlers.
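A tiny sketch of that pattern: a third-party constructor (the name NewCustomHandler is hypothetical) that accepts slog.HandlerOptions as a plain argument and reuses a built-in handler under the hood.
func NewCustomHandler(w io.Writer, opts slog.HandlerOptions) slog.Handler {
	// Delegate to the built-in JSON handler, configured by the caller's options.
	return opts.NewJSONHandler(w)
}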
@seankhliao, https://go-review.googlesource.com/c/exp/+/463256 adds support for UnmarshalJSON, MarshalText and UnmarshalText for Levels.
Since we have to pass both a message and an error, this leads to having to call errors.New in cases where you don't have an err yet, where the error and message often boil down to the same string.
@beoran, you can use Log(slog.LevelError, msg, ...) for that case.
We propose a new package providing structured logging with levels. Structured logging adds key-value pairs to a human-readable output message to enable fast, accurate processing of large amounts of log data.
See the design doc for details.