OriginTrail / ot-node

OriginTrail Decentralized Knowledge Graph network node
https://origintrail.io
Apache License 2.0

Improvement proposal for logging #1725

Closed UniMa007 closed 2 years ago

UniMa007 commented 2 years ago

Issue description

I've been running a bunch of nodes over the weekend and published roughly 70-100k jobs. After checking the logs, I saw that my log file had grown to 17.15 GB and the intermediateFile.log to 3.02 GB. This severely slowed down my VPS SSD. My publishing application ran into a few problems, so I tried to evaluate the node logs, which was almost impossible given their size.

(screenshot: log file sizes)

Coming from Spring Boot, the ELK stack and the like, I've had very good experiences with rolling logs per day (and per size) for keeping logs analyzable. However, I saw that you provide a Papertrail config file, so this proposal might already be on your priority list; if so, just ignore it :)

I come from a Spring Boot/DevOps background, so I'm not too deep into the JS/Node.js world, but I saw you are using Pino, so this link could potentially help:

https://github.com/pinojs/pino/blob/master/docs/help.md#rotate
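The Pino docs linked above recommend handling rotation outside the Node.js process, typically with logrotate. A sketch of what such a config could look like; the log path and retention values here are assumptions, not the actual ot-node defaults:

```
# /etc/logrotate.d/ot-node  (hypothetical file; adjust paths to your install)
/root/ot-node/logs/*.log {
    daily             # rotate once per day...
    maxsize 500M      # ...or earlier if a file exceeds 500 MB
    rotate 7          # keep at most 7 rotated files
    compress          # gzip rotated files
    missingok         # don't error if the log is absent
    notifempty        # skip rotation of empty files
    copytruncate      # copy + truncate in place, so Pino keeps its open fd
}
```

copytruncate matters here: without it, the node would keep writing to the renamed (rotated) file through its still-open file descriptor.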


Disclaimer

Please be aware that the issue reported on a public repository allows everyone to see your node logs, node details, and contact details. If you have any sensitive information, feel free to share it by sending an email to tech@origin-trail.com.

Valcyclovir commented 2 years ago

There were several ways to minimize and control log sizes in V5, and I expect the same should be possible for V6. Being on testnet seems to be the reason logs aren't being maintained right now.

In a Docker environment, creating the container with docker create -i --log-driver json-file --log-opt max-size=1g caps the container logs at 1 GB.

In a dockerless environment, where the node runs as a service, journalctl --vacuum-size=100M trims the systemd journal down to 100 MB.
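Both of those are one-off measures: the docker flag only applies to that container, and vacuuming has to be re-run as the journal grows. A sketch of the persistent equivalents (the values are illustrative, and the relevant services need restarting after editing):

```
# /etc/docker/daemon.json -- default log driver for containers created afterwards
{
  "log-driver": "json-file",
  "log-opts": { "max-size": "1g", "max-file": "3" }
}

# /etc/systemd/journald.conf -- hard cap on total journal size
SystemMaxUse=100M
```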

Y0lan commented 2 years ago

Maybe add an option to the .origintrail_noderc to clean the logs every x days?

Valcyclovir commented 2 years ago

I have a cronjob that runs a script removing the following logs:

rm -r /var/log/journal/
rm -rf ~/ot-node/logs/
rm -rf ~/backup

I have run this ever since otnode and graphdb became services (and MariaDB for those running on Arch Linux), together with journalctl --vacuum-size=200M.

Outdated (from when the node still ran under forever): forever cleanlogs and rm ~/ot-node/out.log

An option in .origintrail_noderc sounds good. Either that or a cronjob should do the job.
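A slightly safer variant of such a cronjob is to delete only log files older than a retention window rather than wiping whole directories. A sketch, where the script name, log path, and 7-day retention are all assumptions to adjust for your setup:

```shell
#!/bin/sh
# cleanup-logs.sh (hypothetical): prune old ot-node log files.
# LOG_DIR and RETENTION_DAYS are assumed defaults; override them via env.
LOG_DIR="${LOG_DIR:-$HOME/ot-node/logs}"
RETENTION_DAYS="${RETENTION_DAYS:-7}"

# Nothing to do if the log directory doesn't exist on this machine.
[ -d "$LOG_DIR" ] || exit 0

# Delete only *.log files past the retention window, not the directory itself.
find "$LOG_DIR" -type f -name '*.log' -mtime +"$RETENTION_DAYS" -delete
```

Installed via crontab it could run nightly, e.g. 0 3 * * * /usr/local/bin/cleanup-logs.sh.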

Y0lan commented 2 years ago

If the team is interested, I'll implement the new option for cleaning the logs. @NZT48?

Valcyclovir commented 2 years ago

Just to add that the backup folder also grows very quickly, and I am not sure about its relevance for now. It's worth keeping the backup size in check. Thanks.

NZT48 commented 2 years ago

@Valcyclovir, @UniMa007 We have prepared instructions and a script for cleaning up ot-node in #1746. We also implemented changes so the logs won't grow so large: log rotation without storing old files, and running ot-node as a service. Thanks for the feedback on the backup folder; we need to look at it. Old logs probably ended up there after the update, which is why it is so big, and it should not grow further after this release.

@Y0lan Thanks for the suggestion, but I think there is no need for that. If you want to get involved in development, jump on Discord to discuss; maybe I can help you figure out where to start.