johnagl closed this issue 3 years ago
update for this: i have the pino logs set up for both docker and the local machine (local is color coded; it's a little trickier to do this with docker, but i assumed it wasn't a big deal)
but i'm reading more about logging and getting a bit confused lol. when it comes to development, it seems that logging to stdout is ok, but for production + docker, logging has to be handled a little differently.
for production, i see recommendations for outputting json logs, and for docker, i see that most people recommend staying away from logging libraries because of something called docker logging drivers / docker agents, which do the same job when deploying with docker.
but since it works anyway, should i even worry about that right now?
Oof... Okay I'm not very familiar with logging in general especially for production. @bobheadxi do you have any thoughts on this?
@resurreccionl the advice you are finding might make more sense with some context:
Docker captures everything your containers write to stdout/stderr and stores it for you (which is why `docker logs` works on stopped containers). There is software out there that can manage this data for you asynchronously (known as log drivers in Docker, such as docker-gcp-logsdriver).

How does this play into what you are trying to set up? tl;dr just use a log formatting library like Pino and don't worry about the rest yet - in dev, you can use the human-friendly output mode in your screenshots, and in prod use the JSON output mode. When it comes time to deliver your logs to an external service, look for something that can pick up JSON logs from Docker. You might not even have to do this, depending on your infrastructure - some VPS services treat all output on an instance as logs and automatically export that for you!
Now, why JSON logs? Logs play an important role in the debugging component of a topic called observability, which is how you triage problems once you know there is a problem (knowing there is a problem is called monitoring), and the format of your logs should be designed to help you out.
For example, structured JSON logs let you attach fields you can filter and search on, such as:

- `id="abcde"` (e.g. a request ID)
- `lvl="error"` (the log level)
- `logger="db"` (which component emitted the entry)

How this all works out in practice: for example, in Pino you might create child loggers to encapsulate useful fields and context:
```js
// per-module
const moduleLogger = logger.child({ logger: 'scraper' })
// per-request
const requestLogger = logger.child({ requestID: 'abcde' })
```
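To see what those child loggers buy you in JSON mode, here's a dependency-free sketch - NOT pino's real implementation (pino's output also carries fields like `time` and `pid`), and the `app: 'demo'` field is made up for illustration:

```javascript
// Dependency-free sketch of structured JSON child loggers - just an
// illustration of the idea, not pino itself; "demo" is a made-up app name.
function makeLogger(base = {}) {
  return {
    // child() returns a new logger with extra fields merged in
    child(fields) { return makeLogger({ ...base, ...fields }) },
    // every entry is one JSON object per line, easy to filter and query
    info(msg) { console.log(JSON.stringify({ lvl: 'info', ...base, msg })) },
  }
}

const root = makeLogger({ app: 'demo' })
const requestLog = root.child({ requestID: 'abcde' })
requestLog.info('handling request')
// → {"lvl":"info","app":"demo","requestID":"abcde","msg":"handling request"}
```

Because every entry shares the fields baked into its parent, you can later ask questions like "show me all `error` entries for `requestID=abcde`" instead of grepping free-form text.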
Then, you can have a "parent" logger from which all child loggers are derived that, for example, switches to human-friendly output when `NODE_ENV=development` and to JSON output in production.
You then pass these loggers around as parameters to maintain context in your log messages.
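A minimal sketch of that environment switch, assuming pino v7+ (where pretty-printing is configured via a transport and needs the separate pino-pretty package installed); `loggerOptions` is a made-up helper name:

```javascript
// Sketch only: pick pino options based on NODE_ENV.
// Assumes pino v7+ and pino-pretty; loggerOptions is a hypothetical helper.
function loggerOptions(env) {
  return env === 'development'
    ? { transport: { target: 'pino-pretty' } } // human-friendly, colorized
    : {}                                       // pino default: newline-delimited JSON
}

// const logger = require('pino')(loggerOptions(process.env.NODE_ENV))
```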
Happy to clarify more if needed!
I love Pino, and my Go recommendation is [zap](https://github.com/uber-go/zap), alongside [shameless plug for my zap extensions library](https://github.com/bobheadxi/zapx) 🙃 I also worked on a lot of the [infrastructure and documentation behind Sourcegraph's observability stuff](https://about.sourcegraph.com/handbook/engineering/observability), and also did some [fun monitoring stuff at Riot Games](https://bobheadxi.dev/evaluable-expressions/)
@bobheadxi wow thanks so much, this helps a lot! i saw some of what you've mentioned in isolated bits and pieces while researching, but this really put it all together.
That was a really good explanation of logging @bobheadxi and underscores the importance of logging in general. Thank you!
Right now we're using console.log() to display relevant information in the terminal. Use something else such as https://www.npmjs.com/package/pino.
We should also look into making the linter complain if it sees console.log().