daniele77 opened 1 year ago
Or remove the break. The problem is that in both cases it might take a long time, depending on the max_files_ value.
You can't remove the break: when there are fewer than max_files_ files, the while loop can't exit. The solution I was proposing does not take a long time to execute.
I think the cleaning function could be fixed to stop relying on filenames_q_
and search for old files to delete instead.
If this is fixed, the entire filenames_q_ mechanism can be removed, which would simplify the code significantly.
PR is welcome.
I agree. I'm pretty busy right now, so I'm not sure I'll be able to submit a PR anytime soon.
I was thinking that we can't guarantee that there won't be a ton of files to look through, so what if we just allowed a certain number of missing files in the search, something like:
```cpp
int allowed_missing = 32;
while (filenames.size() < max_files_)
{
    auto filename = FileNameCalc::calc_filename(base_filename_, now_tm(now));
    if (path_exists(filename))
    {
        filenames.emplace_back(filename);
    }
    else if (--allowed_missing <= 0)
    {
        break;
    }
    now -= std::chrono::hours(24);
}
```
I know it's not very robust, but is that a sensible compromise? Otherwise, I'd be willing to implement searching for files to remove. It seems like we could repurpose code from count_files()
to do this.
@andy-byers this could be a partial solution, but the const allowed_missing
would have to be large enough to cover the case where the application is started a few months after the last run (e.g., an embedded system that is used only once in a while).
To handle the general case, you would end up with an allowed_missing
so big that the algorithm must check hundreds of paths every time before stopping. And this would happen even when there is no gap in the log files.
So, which is worse:
I've no doubt the best solution is to check all the files in the directory for log file names.
> I've no doubt the best solution is to check all the files in the directory for log file names.
I second that.
@daniele77 and @gabime: that makes sense to me. I'll start a fork and see what I can do. Thanks!
If checking the names of all the files in the directory is inevitable, maybe it's worth dividing max_files_ into two different Ns:
This would extend the sink options and simplify the logic for the creator (date or number, but not both). A strategy for each N could be injected into the daily_file_sink constructor.
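A minimal sketch of that injection idea, assuming C++17. The names `cleanup_strategy_t`, `daily_sink_sketch`, and `rotate_` are hypothetical illustrations, not spdlog's actual API: the point is that the sink only decides *when* to clean up, while the injected strategy decides *how*.

```cpp
#include <functional>
#include <string>
#include <utility>

// Hypothetical pluggable cleanup policy, invoked by the sink after each
// rotation. It receives the base filename so it can find related log files.
using cleanup_strategy_t = std::function<void(const std::string &base_filename)>;

class daily_sink_sketch
{
public:
    daily_sink_sketch(std::string base_filename, cleanup_strategy_t cleanup)
        : base_filename_(std::move(base_filename)), cleanup_(std::move(cleanup))
    {
    }

    // Called when the date changes and a new log file is opened.
    void rotate_()
    {
        // ... open the new file for today ...
        if (cleanup_)
            cleanup_(base_filename_); // e.g. keep N files by date, or N by count
    }

private:
    std::string base_filename_;
    cleanup_strategy_t cleanup_;
};
```

A caller could then pass a date-based strategy, a count-based strategy, or no strategy at all, without the sink knowing the difference.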
If for any reason one of the old files is missing (e.g., because the application did not log anything on some day), the oldest files are no longer removed. For example, files such as
myapp_2022-11-21.log and myapp_2022-11-20.log will never be deleted.
This function causes the issue:
https://github.com/gabime/spdlog/blob/v1.x/include/spdlog/sinks/daily_file_sink.h#L183
I guess one fix could be to change the function so that it enumerates the files matching the right pattern and removes the oldest ones based on the file timestamps.
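A sketch of that fix, assuming C++17 and `std::filesystem`. The function name `remove_old_logs`, the prefix/extension matching, and the parameters are illustrative assumptions, not spdlog's real internals; in particular, the ordering here uses the filesystem timestamp, although parsing the date embedded in the filename would be immune to copied or touched files:

```cpp
#include <algorithm>
#include <cstddef>
#include <filesystem>
#include <string>
#include <vector>

namespace fs = std::filesystem;

// Hypothetical helper: delete the oldest matching log files so that at most
// `max_files` remain. Matches regular files in `dir` whose name starts with
// `prefix` and ends with `ext`, e.g. prefix="myapp_" and ext=".log" match
// "myapp_2022-11-21.log". No queue of past filenames is needed.
inline void remove_old_logs(const fs::path &dir, const std::string &prefix,
                            const std::string &ext, std::size_t max_files)
{
    std::vector<fs::directory_entry> logs;
    for (const auto &entry : fs::directory_iterator(dir))
    {
        const std::string name = entry.path().filename().string();
        if (entry.is_regular_file() &&
            name.size() > prefix.size() + ext.size() &&
            name.compare(0, prefix.size(), prefix) == 0 &&
            name.compare(name.size() - ext.size(), ext.size(), ext) == 0)
        {
            logs.push_back(entry);
        }
    }
    if (logs.size() <= max_files)
        return;
    // Oldest first, by last-write time.
    std::sort(logs.begin(), logs.end(), [](const auto &a, const auto &b) {
        return fs::last_write_time(a) < fs::last_write_time(b);
    });
    for (std::size_t i = 0; i < logs.size() - max_files; ++i)
        fs::remove(logs[i].path());
}
```

Because this scans the directory each time, gaps in the sequence of daily files no longer matter: a file from before a gap is still found and eventually deleted.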