azlux / log2ram

ramlog like for systemd (Put log into a ram folder)
MIT License

log2ram 40M 40M 0 100% /var/log #132

Gil80 opened this issue 3 years ago

Gil80 commented 3 years ago

Running `df -h` shows: `log2ram 40M 40M 0 100% /var/log`

It means the area in RAM allocated for logs is at 100% capacity. Shouldn't it be flushed to the SD card?

eslavko commented 3 years ago

I have a similar problem. If the log grows too big, data is lost without any warning. There should be a trigger: when log2ram is full, the logs should be committed immediately instead of waiting for the daily cron job.

azlux commented 3 years ago

log2ram works like a separate partition. Like any partition, the system manages it: when it's full, no program can write to it. Best practice is to increase SIZE in the conf, but also to manage log rotation so the log folder never grows too big.
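As a rough sketch of the first suggestion, bumping SIZE is a one-line edit to log2ram's conf file (shown here on a demo copy under /tmp so nothing real is modified; on an actual install the file is /etc/log2ram.conf):

```shell
# Demo copy standing in for /etc/log2ram.conf
printf 'SIZE=40M\nMAIL=true\n' > /tmp/log2ram.conf.demo
# Bump the RAM allocation from the 40M default to 128M
sed -i 's/^SIZE=.*/SIZE=128M/' /tmp/log2ram.conf.demo
grep '^SIZE=' /tmp/log2ram.conf.demo   # -> SIZE=128M
```

On a real system you would then restart the log2ram service for the new size to take effect.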

Gil80 commented 3 years ago

"but also to manage log rotation to avoid having a too big log folder" how do I do this?

azlux commented 3 years ago

@Gil80 Never heard of logrotate? It's installed by default on Debian. - mini tuto. If you have big logs growing all the time, you need it to get rotation.
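For readers who haven't used logrotate: a minimal drop-in config looks like the sketch below. The log name `mybigapp.log` is purely illustrative; on Debian the file would go in /etc/logrotate.d/, but here it is written to /tmp as a demo:

```shell
# Hypothetical logrotate drop-in for one fast-growing log
cat > /tmp/mybigapp.logrotate <<'EOF'
/var/log/mybigapp.log {
    size 5M          # rotate as soon as the file passes 5 MB
    rotate 3         # keep only 3 old copies
    compress         # gzip rotated copies
    missingok        # no error if the log is absent
    notifempty       # skip rotation when the log is empty
}
EOF
grep -c 'rotate 3' /tmp/mybigapp.logrotate   # -> 1
```

The `size`, `rotate`, and `compress` directives are the usual knobs to "play with", as the next comment puts it.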

reloxx13 commented 3 years ago

Check which log has gotten too big (`ls -lah /var/log`), and:

You'll have to play with those values (days, size, ...).

Set a slightly higher log size for log2ram (only recommended on devices with enough RAM, like a Pi 4). 40MB is way too little, and the readme should mention editing the logrotate config.

Empty all logs with:

for logs in `find /var/log -type f`; do > $logs; done
rm -rf /var/log/*.log.*
rm -rf /var/log/*/*.log.*

ps: do this at your own risk
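A slightly safer variant of those commands, demoed on a throwaway directory instead of the real /var/log: truncate live logs in place (so daemons holding the files open keep writing normally) and delete only the rotated copies.

```shell
# Throwaway directory standing in for /var/log
rm -rf /tmp/logdemo && mkdir -p /tmp/logdemo
echo "current entries" > /tmp/logdemo/app.log
echo "old entries"     > /tmp/logdemo/app.log.1
# Empty live logs in place rather than deleting them
find /tmp/logdemo -type f -name '*.log' -exec truncate -s 0 {} +
# Rotated copies are safe to remove outright
rm -f /tmp/logdemo/*.log.*
wc -c < /tmp/logdemo/app.log   # -> 0
```

Pointing this at the real /var/log carries the same "at your own risk" caveat as above.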

andreabradpitto commented 2 years ago

For future reference, the way to empty (not delete) logs proposed by @reloxx13 is really sensible. Earlier today I had the bad idea to simply delete everything in /var/log, which was a big mistake. I managed to restore my folder since then, but I still have the same issue as OP.

I investigated further and it seems that the cause is the enormous number of failed login attempts from bots all over the world, as all my traffic on port 22 is redirected to my Raspberry. Indeed, my /var/log/auth.log* and /var/log/btmp* files become huge in just a few days.

I disabled SSH password authentication in order to solve the security issue (I now only resort to SSH Public Key Authentication), but the log spam is persisting. Do you think I should tinker with logrotate default scheduling, or are you aware of a better solution for this specific case? By the way, I had already set 80MB as the size for Log2Ram.

Sorry to bother you with this question 2 years later, but I think it may help other people in the future too.
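One way to "tinker with logrotate default scheduling" for this specific case is to give auth.log its own tighter policy, sketched below. The 10M/daily numbers are illustrative, and the file is written to /tmp as a demo; on a real Debian system it would go in /etc/logrotate.d/, and auth.log would need to be removed from the stock rsyslog rotation entry so the two don't conflict:

```shell
# Sketch of a tighter rotation policy for a bot-spammed auth.log
cat > /tmp/auth.logrotate <<'EOF'
/var/log/auth.log {
    daily
    maxsize 10M      # rotate early if the bots push it past 10 MB
    rotate 4
    compress
    missingok
    notifempty
}
EOF
grep 'maxsize' /tmp/auth.logrotate
```

`maxsize` makes logrotate rotate on whichever comes first, the daily schedule or the size cap.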

reloxx13 commented 2 years ago

> For future reference, the way to empty (not delete) logs proposed by @reloxx13 is really sensible. Earlier today I had the bad idea to simply delete everything in /var/log, which was a big mistake. I managed to restore my folder since then, but I still have the same issue as OP.
>
> I investigated further and it seems that the cause is the enormous number of failed login attempts from bots all over the world, as all my traffic on port 22 is redirected to my Raspberry. Indeed, my /var/log/auth.log* and /var/log/btmp* files become huge just in few days.
>
> I disabled SSH password authentication in order to solve the security issue (I now only resort to SSH Public Key Authentication), but the log spam is persisting. Do you think I should tinker with logrotate default scheduling, or are you aware of a better solution for this specific case? By the way, I had already set 80MB as the size for Log2Ram.
>
> Sorry if I bother you with this question 2 years later, but I think it may also help other people in the future

Change the default SSH port. I also set a 10MB max in logrotate with daily rotation. This is completely up to you and your needs.
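A sketch of the two sshd_config changes discussed in this thread: moving sshd off port 22 and disabling password logins. Port 2222 is an arbitrary example, and the edits are demoed on a copy under /tmp so the real /etc/ssh/sshd_config stays untouched:

```shell
# Demo copy with Debian-style defaults standing in for /etc/ssh/sshd_config
printf '#Port 22\nPasswordAuthentication yes\n' > /tmp/sshd_config.demo
# Uncomment/change the port and turn off password authentication
sed -i -e 's/^#\?Port .*/Port 2222/' \
       -e 's/^PasswordAuthentication .*/PasswordAuthentication no/' /tmp/sshd_config.demo
cat /tmp/sshd_config.demo
```

On the real file you would validate with `sshd -t` and restart the SSH service afterwards, and make sure your firewall allows the new port before disconnecting.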