Closed freakmaxi closed 7 months ago
Hmm, this aligns better with what I've seen in the past, but I'm not sure how I feel about it. IMO JSON logging is mostly meant to be parsed by tools such as the ELK stack, not read by users' eyes.
To be honest, we use it in multiple ways. One is checking log output on the pod console in Kubernetes as a first, quick eye inspection, where it's hard to tell which output relates to what. Those logs are collected by Fluentd and pushed to Elasticsearch, so we can query them with Kibana (no problem there). However, we also have an exporter that collects console logs and pushes them to an S3 bucket for both user inspection and automated analysis. The user-inspection part is a bit painful because of the field order.
All modified and coverable lines are covered by tests :white_check_mark:
Comparison is base (97c1347) 61.23% compared to head (ee30d2b) 61.12%.
I've changed the order of the fields in the JSON output for better observability when logs are dumped to a file and need eye inspection.
I believe the following order for the log output would make it easier to catch the desired logs:

time - level - message - (rest of the log fields)...