maraisr / diary

📑 Zero-dependency, fast logging library for Node, Browser and Workers
MIT License

Build babel plugin to enhance context #5

Closed maraisr closed 2 years ago

maraisr commented 3 years ago

As a bit of background, diary has been largely inspired by logging in the "backend" world, such as Go or Rust, with the idea that you simply call log.debug with some message and move on. That message is then enhanced with the namespace (in the C# world), or the module (in Rust), and so on; any sort of context that helps a developer know where it came from. This automatic scoping is what makes it beautiful.

Diary was then built to support this "basic" API with a feather-light runtime and optimal performance, but it still lacks this original design goal.
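For reference, this is roughly what scoping looks like when done by hand today, using the existing diary export; the scope string is just a placeholder:

// manual scoping today: the author names the scope at the module level by hand
import { diary } from 'diary';

const log = diary('npm-package:my-file');

const DoWork = () => {
    log.info('start doing work');
    // work
    log.info('stop doing work');
};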

So with this issue I wish to introduce a Babel plugin (or a webpack loader?) to do exactly this. I'm envisioning something like:

// input
import { info } from 'diary';

const DoWork = () => {
    info('start doing work');
    // work
    info('stop doing work');
}

// output
import { diary } from 'diary';

const _scope_a = diary('npm-package:my-file:DoWork');

const DoWork = () => {
    _scope_a.info('start doing work');
    // work
    _scope_a.info('stop doing work');
}

so that when this logs, you get automatic scoping of these logs akin to that of Go or Rust.

There are however some caveats to this. Right now, spinning up a diary is expensive as it creates the hooks pipeline, and when authoring this the user may not expect a file with 20 functions to now have 20 diaries created.
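One possible way for the plugin to soften that cost, purely a sketch and not something diary or this proposal includes, would be to emit lazily-created scopes so the hooks pipeline is only built the first time a function actually logs:

// hypothetical output with lazy scopes (sketch only)
import { diary } from 'diary';

let _scope_a;
const _get_scope_a = () =>
    _scope_a || (_scope_a = diary('npm-package:my-file:DoWork'));

const DoWork = () => {
    _get_scope_a().info('start doing work');
    // work
    _get_scope_a().info('stop doing work');
};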

Also what about nested functions?

// output
import { diary } from 'diary';

const _scope_a = diary('npm-package:my-file:DoWork');
const _scope_b = diary('npm-package:my-file:DoWork:DoOtherWork');

const DoWork = () => {
    const DoOtherWork = () => {_scope_b.info('doing other work');};
    _scope_a.info('start doing work');
    // work
    DoOtherWork();
    _scope_a.info('stop doing work');
}

Should we create a new diary for every scope?


An alternate approach to this could be to simply enhance the original loggers with some "meta", leaning on gzip to optimize this, e.g.:

// output
import { info } from 'diary';

const DoWork = () => {
    const DoOtherWork = () => {info('doing other work', '__npm-package:my-file:DoWork:DoOtherWork');};
    info('start doing work', '__npm-package:my-file:DoWork');
    // work
    DoOtherWork();
    info('stop doing work', '__npm-package:my-file:DoWork');
}

The thinking here is that we use the last argument as the "meta", so that the API for consumers stays consistent (the double underscore denotes our marker, for those not using Babel).
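A rough sketch of how the runtime side could pull that meta back out of the argument list; splitMeta and its return shape are made up for illustration and are not part of diary's API:

// hypothetical helper, illustrating the "last argument is the meta" convention
const splitMeta = (args) => {
    const last = args[args.length - 1];
    if (typeof last === 'string' && last.startsWith('__')) {
        return { scope: last.slice(2), args: args.slice(0, -1) };
    }
    return { scope: undefined, args };
};

// splitMeta(['doing other work', '__npm-package:my-file:DoWork:DoOtherWork'])
// -> { scope: 'npm-package:my-file:DoWork:DoOtherWork', args: ['doing other work'] }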

The pro here is that the pipeline of hooks stays optimized, and scoped diaries can still happen for those that care.


Also the thinking here is to build adapters for React and such, to allow context about the component/hook that is running, etc. Maybe even open this API up to allow other constructs to feed context into things, like xstate. This may be out of scope for this library, but it can surely help build the capability.
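As a rough idea of how small a React adapter could be; useDiary is a hypothetical name and this is only a sketch on top of the existing diary export:

// hypothetical React adapter: one scoped diary per component name
import { useMemo } from 'react';
import { diary } from 'diary';

const useDiary = (componentName) =>
    useMemo(() => diary(`react:${componentName}`), [componentName]);

// usage inside a component:
// const log = useDiary('TodoList');
// log.info('rendering');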

maraisr commented 3 years ago

Keen to get some feedback on this one @lukeed @theKashey

lukeed commented 3 years ago

I think the auto-scoping is going to pose a problem. With ESM syntax & any standard bundler setup, the file-derived scope name won't match the source file's assumed name.

It doesn't really matter which scope you prefer either. If you have some exported helper function being used in multiple locations, the helper itself may live in chunk.123.js and then be used from the page.home.js, page.about.js, and page.blog.js locations, each sharing the same npm:diary:helpers (eg) scope.

And if the scope name is tied to the function caller, then all scopes will end with :helper (the assumed utility export/function), which is probably fine & intended, but there's no Babel way (afaik) to track call sites like an error trace.

theKashey commented 3 years ago

My proposition was to go above the "file" level, mostly to accommodate the needs of both splitters and lumpers, where different people tend to break down "the same thing" into different numbers of files (or a single file).

So, @maraisr, can we take a step back and not think so much about how we want to do it? We know why we want to do it, but what exactly do we want?