Closed Dhuran closed 2 years ago
Should this option prevent GoAccess from continuing to write to the DB, or should it just stop processing data and quit? I'm wondering how GoAccess should behave once it reaches its storage limit.
Thanks
Thank you for the fast reply! The ideal behavior would be to delete data older than X days while continuing to write new data to disk. Is that possible?
Not sure if that would be possible, since it currently doesn't timestamp every record saved in the DB. Have you tried compressing the data using --compression=<zlib|bz2>? That should help, though.
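For context, the compression flag applied to the older Tokyo Cabinet on-disk builds of goaccess; whether it is available depends on how your binary was compiled. A hedged sketch of such an invocation (the log file name and log format are assumptions for illustration):

```shell
# Sketch: run goaccess with persistent Tokyo Cabinet DB files and bz2
# compression enabled on the on-disk B+Tree store. Flag availability
# depends on the build; check your version's man page.
goaccess access.log --log-format=COMBINED --keep-db-files --compression=bz2
```

Note that, as pointed out below, compression shrinks the DB files but does not cap their growth over time.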
As far as I've read, there are plans to check already-imported data against a timestamp, so I guess that once that function is implemented it would also be possible to remove old data.
In that case it would also be nice to have an option to set how far back in the past the log should be analyzed during parsing.
> Not sure if that would be possible, since it currently doesn't timestamp every record saved in the DB. Have you tried compressing the data using --compression=<zlib|bz2>? That should help, though.
It will reduce summary size of DB files, but it isn't a solution for initial problem. I.e. even if compression is enabled, DB will grow over time.
> Should this option prevent GoAccess from continuing to write to the DB, or should it just stop processing data and quit? I'm wondering how GoAccess should behave once it reaches its storage limit.
I also vote for the "delete data older than N days" scenario. It would be a nice feature to have in goaccess.
Actually there are some plans to refactor the on-disk storage so I'll look into this and add it as one of the new features. Thanks.
Hi there, I just started using goaccess and it is a really nice program. Unfortunately I ran into the same problem as the original issue filer. I checked the changelogs of the last two releases but I couldn't find a feature "delete data older than N days". @allinurl Did you already add it and I missed it? Thank you for all your work!
Nils
@nils-schween I haven't as I plan to replace the existing on-disk storage with something more robust. Hoping to add this when the storage engine is replaced. Thanks for the reminder.
Thanks for the answer. Then, I'll just wait and check regularly for new versions.
@allinurl Any updates on this? I am also looking for this feature. This topic hasn't been updated in a while, but I am not seeing anything new in the documentation.
My suggestion is to add an option to set a --persist value, e.g.:
--persist=31
And another option, e.g.:
--purge=31
The value for both would be in days.
If you pass a value to --persist, e.g. 31, then at the end of the process goaccess would remove any entries older than 31 days. However, if you don't want to add any delay to the process, you could use --persist without a value and then run the --purge option from a cron job during off hours, etc.
Thoughts?
@ggedde Could this be achieved with --keep-last=<num_days>?
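For anyone landing here later, a sketch of how the v1.4+ persistence flags can be combined with --keep-last; the log file name, log format, DB path, and output file are assumptions for illustration:

```shell
# Sketch: restore the previous on-disk DB, append the new log data,
# persist the result, and retain only the last 31 days of metrics.
goaccess access.log \
  --log-format=COMBINED \
  --restore --persist \
  --db-path=/var/lib/goaccess \
  --keep-last=31 \
  --output=report.html
```

With this setup the DB size is bounded by the retention window rather than growing indefinitely.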
@allinurl Oh, sorry I missed that. I will give that a try.
Would that address this issue, then? Or are you keeping it open because you're still working on revamping the database, or addressing the original issue another way, i.e. with a size limit?
@ggedde It actually does address the issue; the DB was revamped in v1.4, so I'm closing this. Thanks for the heads up.
Also thank you so much for the very generous donation, greatly appreciated!
@allinurl No problem. Thanks for the awesome software and your quick replies!
Hello!
We are using a configuration with the on-disk database, load-from-disk, and keep-db-files. In some cases the database size can grow to 8 GB or more. Is there any way to set a maximum database size? That would let us keep disk usage under control without removing the goaccess database files and losing all previously stored data. I could not find any option similar to "size limit", "database size", or "expires" in the official man page.
Thank you!