tokio-rs / tracing

Application level tracing for Rust.
https://tracing.rs
MIT License
5.39k stars 707 forks

max_log_files doesn't work #2937

Open laiyi55 opened 5 months ago

laiyi55 commented 5 months ago

Bug Report

Version

tracing = "0.1.40"
tracing-subscriber = { version = "0.3.18", features = ["env-filter", "local-time"] }
tracing-appender = "0.2.3"

Platform

macOS, Android

Description

I found that max_log_files does not work:

let file_appender = RollingFileAppender::builder()
            .rotation(Rotation::MINUTELY) // rotate log files once every minute
            .max_log_files(2) // keep at most 2 log files
            .filename_prefix("box_tracing") // log file names will be prefixed with `box_tracing.`
            .filename_suffix("log") // log file names will be suffixed with `.log`
            .build(file_path) // try to build an appender that stores log files in `file_path`
            .expect("initializing rolling file appender failed");

I tried to find the reason, and I found that old files are only removed along the path write -> refresh_writer -> prune_old_logs:

impl io::Write for RollingFileAppender {
    fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
        let now = self.now();
        let writer = self.writer.get_mut();
        if let Some(current_time) = self.state.should_rollover(now) {
            let _did_cas = self.state.advance_date(now, current_time);
            debug_assert!(_did_cas, "if we have &mut access to the appender, no other thread can have advanced the timestamp...");
            self.state.refresh_writer(now, writer);
        }
        writer.write(buf)
    }

    fn flush(&mut self) -> io::Result<()> {
        self.writer.get_mut().flush()
    }
}

So can we add a call to prune_old_logs in Inner::new? If write never triggers a rollover, the old files are never removed. I would like the old files to be trimmed when tracing is initialized. By the way, can we also roll over based on file size? I found some issues about it, but no code uses it.
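The proposal above is to prune once at initialization, before the first write. A minimal stdlib sketch of such a prune step (this is a hypothetical helper, not tracing-appender's actual `prune_old_logs`; the prefix and sort-by-name assumptions mirror how the rolling appender names files with an embedded timestamp):

```rust
use std::fs;
use std::path::{Path, PathBuf};

// Hypothetical init-time pruner: delete the oldest matching log files in
// `dir` so that at most `max_files` remain. Sorting by file name works
// because the rolling appender embeds a sortable timestamp in each name.
fn prune_old_logs(dir: &Path, prefix: &str, max_files: usize) -> std::io::Result<()> {
    let mut files: Vec<PathBuf> = fs::read_dir(dir)?
        .filter_map(|e| e.ok())
        .map(|e| e.path())
        .filter(|p| {
            p.file_name()
                .and_then(|n| n.to_str())
                .map_or(false, |n| n.starts_with(prefix))
        })
        .collect();
    if files.len() <= max_files {
        return Ok(());
    }
    // Oldest first: the timestamped suffix makes lexical order chronological.
    files.sort();
    let excess = files.len() - max_files;
    for old in files.drain(..excess) {
        fs::remove_file(old)?;
    }
    Ok(())
}
```

Calling something like this from `Inner::new` (after `create_writer`) would cover the short-lived-process case described in this issue.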

kaffarell commented 5 months ago

I tried your example just now and it works perfectly. It creates a new file every minute and deletes the old one, so that there are always only two files.

In which case does your file rolling 'not work'? Even if it somehow fails, I think it's unwise to start deleting files without having rolled over.

Regarding the rolling over at a specific file size, there are a few PRs open: #2904 and #2497 both of which have not been merged yet!

laiyi55 commented 5 months ago

I agree that when we write logs every minute, a new file is created every minute and the old one is deleted. But here is my situation: I set the rotation to daily with max 2 files, but my process starts at 6:00 am and exits at 7:00 am. So every day when my process starts, it creates a new file, and when I write logs, should_rollover returns false, so refresh_writer never runs and prune_old_logs never runs. The number of files will keep growing past 2.

kaffarell commented 5 months ago

The thing is that we first call prune_old_logs and then create_writer (which creates the new file). This means that there are three files until the next rollover happens. I get that this can be confusing, but I think it is the correct behavior. If you set max_log_files, it is the maximum number of old log files to keep, and I don't think it should include the file we are currently writing to. Maybe we should improve the documentation on this point.
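The prune-then-create ordering described above can be shown with a small stdlib simulation (this is an illustration of the semantics, not tracing-appender code; the file names and `rollover_sim` helper are invented for the example):

```rust
use std::fs;
use std::path::Path;

// Simulate one rollover with the order described above: first prune old
// files down to `max`, then create the new current file. The result is
// max + 1 files on disk until the next rollover prunes again.
fn rollover_sim(dir: &Path, max: usize, new_name: &str) -> std::io::Result<usize> {
    let mut files: Vec<_> = fs::read_dir(dir)?
        .filter_map(|e| e.ok())
        .map(|e| e.path())
        .collect();
    files.sort(); // timestamped names sort chronologically
    while files.len() > max {
        fs::remove_file(files.remove(0))?; // drop the oldest first
    }
    fs::write(dir.join(new_name), b"")?; // then open the new current file
    Ok(fs::read_dir(dir)?.count())
}
```

With `max = 2` and three existing files, one rollover prunes down to two and then adds the new current file, leaving three on disk.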

laiyi55 commented 5 months ago

I have a different view on "we first call prune_old_logs and then create_writer": prune_old_logs is called in write or make_writer, but create_writer is called in Inner::new (which RollingFileAppender::builder().build() calls). So shouldn't we first call create_writer and then prune_old_logs?

laiyi55 commented 5 months ago

This is my test:

        let (_file_guard, _console_guard) = box_tracing::BoxTracing::new().init("/Users/laiyi/AndroidStudioProjects/box_core/", true);
        tracing::debug!("start");
pub struct BoxTracing {}

impl BoxTracing {
    pub fn new() -> BoxTracing {
        BoxTracing {}
    }

    pub fn init(&self, file_path: &str, is_debug: bool) -> (WorkerGuard, WorkerGuard) {
        let local_time = OffsetTime::new(
            UtcOffset::from_hms(8, 0, 0).unwrap(),
            format_description!(
                "[year]-[month]-[day] [hour]:[minute]:[second].[subsecond digits:3]"
            ),
        );

        let file_appender = RollingFileAppender::builder()
            .rotation(Rotation::MINUTELY) // rotate log files once every minute
            .max_log_files(2) // keep at most 2 log files
            .filename_prefix("box_tracing") // log file names will be prefixed with `box_tracing.`
            .filename_suffix("log") // log file names will be suffixed with `.log`
            .build(file_path) // try to build an appender that stores log files in `file_path`
            .expect("initializing rolling file appender failed");
        let (file_non_blocking, _file_guard) = tracing_appender::non_blocking(file_appender);

        let file_subscriber = tracing_subscriber::fmt::layer()
            .with_ansi(false)
            .with_writer(file_non_blocking)
            .with_timer(local_time.clone());

        let (console_non_blocking, _console_guard) = tracing_appender::non_blocking(ConsoleWriter);
        let console_subscriber = tracing_subscriber::fmt::layer()
            .with_ansi(false)
            .with_writer(console_non_blocking)
            .with_timer(local_time.clone());

        let level = if is_debug {
            tracing::Level::DEBUG
        } else {
            tracing::Level::INFO
        };
        let subscriber = tracing_subscriber::registry()
            .with(EnvFilter::from_default_env().add_directive(level.into()))
            .with(file_subscriber)
            .with(console_subscriber);
        tracing::subscriber::set_global_default(subscriber).unwrap();

        (_file_guard, _console_guard)
    }
}

struct ConsoleWriter;

impl std::io::Write for ConsoleWriter {
    fn write(&mut self, buf: &[u8]) -> std::io::Result<usize> {
        let buf_len = buf.len();
        let string = String::from_utf8_lossy(buf);

        box_log("BoxLog".to_string(), string.to_string());
        Ok(buf_len)
    }

    fn flush(&mut self) -> std::io::Result<()> {
        Ok(())
    }
}

#[cfg(target_os = "macos")]
pub fn box_log(tag: String, format: String) {
    println!("{} {}", tag, format);
}

#[cfg(target_os = "android")]
pub fn box_log(tag: String, format: String) {
    unsafe {
        __android_log_write(
            LogPriority::DEBUG as _,
            CString::new(tag.clone()).unwrap().as_ptr(),
            CString::new(format.clone()).unwrap().as_ptr(),
        );
    }
}

I run this process every minute, and the old files are never removed.

kaffarell commented 5 months ago

Oh, you are right. Though one could argue this is a weird use case... When creating a new RollingFileAppender, we immediately try to open the current file (prefix + time), and if that doesn't work we create one. What if, when we fail to open the file at startup, we did nothing and let the appender roll over normally on the next log call? That would allow the old files to get cleared.

laiyi55 commented 5 months ago

But it is a normal case for an app, and I tried calling prune_old_logs last; it does work. By the way, the file creation time is not supported on Android, so I changed the sort to use the file name: files.sort_by_key(|p| p.file_name()); Do you have a better idea?

wangshangming commented 1 month ago

> But it is a normal case for an app, and I tried calling prune_old_logs last; it does work. By the way, the file creation time is not supported on Android, so I changed the sort to use the file name: files.sort_by_key(|p| p.file_name()); Do you have a better idea?

Maybe using mtime is better than the creation time; the creation time is not supported on older Linux versions: https://stackoverflow.com/a/76948800/11131796
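Sorting by mtime instead of creation time can be done with stdlib metadata alone. A minimal sketch (the `sort_by_mtime` helper is invented for illustration; the epoch fallback when metadata is unreadable is an assumption about desired behavior):

```rust
use std::fs;
use std::path::PathBuf;
use std::time::SystemTime;

// Sort log files oldest-first by modification time (mtime). Unlike the
// creation time (btime), mtime is available on older Linux kernels and
// on Android. Files whose metadata cannot be read sort first (epoch).
fn sort_by_mtime(files: &mut Vec<PathBuf>) {
    files.sort_by_key(|p| {
        fs::metadata(p)
            .and_then(|m| m.modified())
            .unwrap_or(SystemTime::UNIX_EPOCH)
    });
}
```

This avoids both the Android limitation mentioned above and the dependence on a lexically sortable file-name format.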

kaffarell commented 1 month ago

> Maybe using mtime is better than the creation time; the creation time is not supported on older Linux versions

Yes, #3000 should fix this.