Closed gajus closed 3 years ago
Wow Hi Gajus! Merry Christmas to you!
Thank you so much—yours as well. Love your idea behind contexts, makes loads of sense! (I have a branch where I'm adding that in)
I'd like to clarify a few things about the benchmarks; there was also a Twitter thread about this around ulog. Benchmarks don't give a lot of informational value because they are so vague, so I definitely need to expand on how it's being benchmarked (or what it's benchmarking) quite a bit.
Some loggers will outperform others at different things, so it's best to benchmark from several angles for sure.
However, what I wanted to benchmark specifically here was to check how fast it is to spin up a logger, add some middleware, and log something. Diary has the idea that contexts, or more module-specific concerns, would be encapsulated through that:
```jsx
import { useCallback, useEffect } from 'react';
import { diary, middleware } from 'diary';

const log = diary('login-component');

export function Login(props) {
  useEffect(() => {
    // middleware returns a dispose function, which doubles as the effect cleanup.
    return middleware(logEvent => logEvent.extras.push(props.realm), log);
  }, [props.realm]);

  // Assume this has more logs in and around other things.
  const handler = useCallback(() => log.info('logging in'), []);

  return (
    <button onClick={handler}>Login!</button>
  );
}
```
This is a primitive example, but it shows the idea: you could bind a React prop to logging contexts.
Granted, roarr, ulog, or any other logger might do this differently or have different patterns, but my thinking was that you'd want one logger per module (kind of like how Rust's logging system works). Secondly, you'd be able to pipe just that "child logger" through its own middleware, and then through a global set, making for near-endless possibilities for extension. The benchmark only shows one log, sure, but in a real-world app in, say, React, you'd probably create a child logger as the module installs, then maybe do one `info` call when someone clicks a button.
Which I do believe the test is doing: creating the logger, logging something, then cleaning up. So what I'm really trying to benchmark is the impact logging will have as JavaScript modules install.
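To make the "one logger per module, module middleware first, then a global set" idea concrete, here is a minimal sketch of the pattern in plain JavaScript. This is not diary's (or roarr's) actual implementation, just an illustration of the event flow described above; `createLogger` and the middleware arrays are stand-ins:

```js
// Global middleware set, shared by every module's logger.
const globalMiddleware = [];

// Stand-in for a per-module logger factory (not diary's real API).
function createLogger(name) {
  const moduleMiddleware = [];
  return {
    middleware: (fn) => moduleMiddleware.push(fn),
    info: (message) => {
      const event = { name, level: 'info', message, extras: [] };
      // Module-scoped middleware runs first, then the global set.
      for (const fn of moduleMiddleware) fn(event);
      for (const fn of globalMiddleware) fn(event);
      return event;
    },
  };
}

// Created once, as the module installs.
const log = createLogger('login-component');
log.middleware((event) => event.extras.push('realm-a'));

// Later, e.g. when someone clicks a button:
const event = log.info('logging in');
// event.extras now contains 'realm-a'
```

The point of the shape is that logger creation and middleware registration happen once per module, while the `info` call is the only per-event cost.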
Also, on the use of the `write` method: I know my example just "taps" into it for validation purposes, so perhaps I could make it do more, like alter the message as well. And yeah, I'll use `.write` to catch those messages for validation.
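The "tap into `write` for validation" idea can be sketched generically like this. The `sink`/`log` objects here are stand-ins rather than roarr's exact API (roarr reads its writer from a global `ROARR` object); the point is only the pattern of swapping the writer to capture output:

```js
// A default sink that writes serialized messages to stdout (stand-in).
const sink = {
  write: (message) => process.stdout.write(message + '\n'),
};

// A minimal logger that serializes through the sink (stand-in).
const log = {
  info: (text) => sink.write(JSON.stringify({ level: 30, message: text })),
};

// Swap the writer to capture messages instead of printing them,
// so a test or benchmark can validate what was logged.
const captured = [];
sink.write = (message) => captured.push(JSON.parse(message));

log.info('logging in');
// captured[0].message === 'logging in'
```

Because the logger resolves `sink.write` at call time, the swap takes effect for every subsequent log call without touching the logger itself.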
I'll say, though, that there are definitely holes in my current implementation, which I do need to iterate on. From the example above, since the middleware runs in the effect, it is registered later, so module-scope middleware wouldn't be aware of its insertions. So I might need to introduce a "before" / "after" distinction.
All in all: I'll update the benchmarks to support a single "setup" phase, try the `write` method for validation, and introduce a proper middleware that alters the message.
Update: I tried to upgrade to v3, but was unsuccessful. The benchmark kept complaining about a `toJSON` field not existing. I will give it another stab in a few days, once the holidays pass.
> However, what I wanted to benchmark specifically here was to check how fast it is to spin up a logger, add some middleware, and log something. Diary has the idea that contexts, or more module-specific concerns, would be encapsulated through that:
That makes sense. :-)
In that case, `child` is the correct method to use. The only slight mistake is that the `child` transform must return the transformed message.
I will raise a PR.
The `toJSON` error was also caused by not returning the message. I have added an early check for wrongful use of `child`.
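To illustrate why the transform must return the message, here is a sketch modeled on the child-logger API described above (not the library's actual source; `child` and its error message are illustrative):

```js
// Stand-in for a child logger whose transform must return the message.
function child(transform, write) {
  return {
    info: (text) => {
      const message = transform({ context: {}, message: text });
      // Early check for wrongful use: a transform that mutates the
      // message but forgets to return it leaves `message` undefined,
      // and later serialization would fail in a confusing way.
      if (message === undefined) {
        throw new Error('child transform must return the message');
      }
      write(JSON.stringify(message));
    },
  };
}

const out = [];

// Correct: the transform returns the (extended) message.
const good = child((m) => ({ ...m, context: { realm: 'a' } }), (s) => out.push(s));
good.info('ok');

// Wrongful use: mutates the message but returns nothing, so
// bad.info('boom') hits the early check and throws.
const bad = child((m) => { m.context.realm = 'a'; }, (s) => out.push(s));
```

Failing fast at the `child` boundary surfaces the mistake at its source instead of as an unrelated serialization error downstream.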
Here we go https://github.com/maraisr/diary/pull/3
Think #3 closes this.
Hey!
Your logging library looks great. And thanks for doing the Roarr vs Diary benchmark.
It highlighted a few performance issues with Roarr.
Please upgrade Roarr to the latest version. :-)
Also, the benchmark is not properly set up. You are currently calling `log.child` on every test. This is a pretty expensive operation and not something you would do in the real world before every log message. See this as an example of how benchmarks should be initiated:
https://github.com/gajus/roarr/blob/0f6330ed2e290cdf56faf69144834a361c1bf178/test/benchmark.js#L33-L37
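The shape the linked setup encourages can be sketched generically: do the expensive work (like creating a child logger) once in a setup phase, and keep only the log call itself inside the measured loop. The `createLogger` here is a counting stand-in, not any specific library:

```js
// Stand-in logger that counts how many times `child` is called,
// to show that setup work happens once, not per iteration.
function createLogger() {
  let childCount = 0;
  return {
    child: (context) => {
      childCount += 1; // expensive in a real logger
      return { info: (msg) => ({ ...context, msg }) };
    },
    stats: () => childCount,
  };
}

const logger = createLogger();

// Setup phase: create the child logger once.
const log = logger.child({ module: 'login' });

// Measured loop: only the log call itself runs per iteration.
for (let i = 0; i < 1000; i++) {
  log.info('logging in');
}

// logger.stats() === 1, not 1000
```

Moving `child` out of the loop means the benchmark measures the cost of logging, not the cost of repeatedly constructing loggers.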
Alternatively, you can simply do:
The current benchmarks are as follows: