OpenHFT / Chronicle-Logger

A sub-microsecond Java logger, supporting standard logging APIs such as SLF4J and Log4j
http://chronicle.software/products/chronicle-logger/
Apache License 2.0

Chronicle-Logger time-based rolling and compression #57

Closed SPSiva153 closed 5 years ago

SPSiva153 commented 5 years ago

Chronicle-Logger rollover to a compressed file (gzip)

If it supports time-based rolling, can we roll over every hour, and can the rolled files be gzip-compressed the way a file logger's can? My XML configuration is below:

 <Properties>
    <Property name="name">chronicle-queue</Property>
    <Property name="logPath">logs/chronicle-log4j2/</Property>
    <Property name="pattern">[%-5level] %d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %c{1} - %msg%n</Property>
</Properties>    

<Appenders>
    <Chronicle name="CHRONICLE">
        <path>${logPath}chronicle</path>
    </Chronicle>

    <RollingFile name="File" fileName="${logPath}${name}.log"
                 filePattern="${logPath}/${name}-%d{yyyy-MM-dd}-%i.log.gz">
        <PatternLayout pattern="${pattern}"/>
        <Policies>
            <TimeBasedTriggeringPolicy interval="3600" modulate="true"/>
            <SizeBasedTriggeringPolicy size="100MB"/>
        </Policies>
        <DefaultRolloverStrategy max="10"/>
    </RollingFile>
</Appenders>
dpisklov commented 5 years ago

I've added the ability to configure the roll cycle:

        <Chronicle name="CHRONICLE">
            <path>${sys:java.io.tmpdir}/chronicle-log4j2/chronicle</path>
            <chronicleCfg>
                <rollCycle>HOURLY</rollCycle>
            </chronicleCfg>
        </Chronicle>

The roll cycle must be one of the values defined in net.openhft.chronicle.queue.RollCycles (see the Chronicle-Queue documentation for details):

    TEST_SECONDLY(/*---*/"yyyyMMdd-HHmmss", 1000, 1 << 15, 4), // only good for testing
    MINUTELY(/*--------*/"yyyyMMdd-HHmm", 60 * 1000, 2 << 10, 16), // 64 million entries per minute
    TEST_HOURLY(/*-----*/"yyyyMMdd-HH", 60 * 60 * 1000, 16, 4), // 512 entries per hour.
    HOURLY(/*----------*/"yyyyMMdd-HH", 60 * 60 * 1000, 4 << 10, 16), // 256 million entries per hour.
    LARGE_HOURLY(/*----*/"yyyyMMdd-HH", 60 * 60 * 1000, 8 << 10, 64), // 2 billion entries per hour.
    LARGE_HOURLY_SPARSE("yyyyMMdd-HH", 60 * 60 * 1000, 4 << 10, 1024), // 16 billion entries per hour with sparse indexing
    LARGE_HOURLY_XSPARSE("yyyyMMdd-HH", 60 * 60 * 1000, 2 << 10, 1 << 20), // 16 billion entries per hour with super-sparse indexing
    TEST_DAILY(/*------*/"yyyyMMdd", 24 * 60 * 60 * 1000, 8, 1), // Only good for testing - 63 entries per day
    TEST2_DAILY(/*-----*/"yyyyMMdd", 24 * 60 * 60 * 1000, 16, 2), // Only good for testing
    TEST4_DAILY(/*-----*/"yyyyMMdd", 24 * 60 * 60 * 1000, 32, 4), // Only good for testing
    SMALL_DAILY(/*-----*/"yyyyMMdd", 24 * 60 * 60 * 1000, 8 << 10, 8), // 512 million entries per day
    DAILY(/*-----------*/"yyyyMMdd", 24 * 60 * 60 * 1000, 8 << 10, 64), // 4 billion entries per day
    LARGE_DAILY(/*-----*/"yyyyMMdd", 24 * 60 * 60 * 1000, 32 << 10, 128), // 128 billion entries per day
    XLARGE_DAILY(/*----*/"yyyyMMdd", 24 * 60 * 60 * 1000, 128 << 10, 256), // 4 trillion entries per day
    HUGE_DAILY(/*------*/"yyyyMMdd", 24 * 60 * 60 * 1000, 512 << 10, 1024), // 256 trillion entries per day
    HUGE_DAILY_XSPARSE("yyyyMMdd", 24 * 60 * 60 * 1000, 16 << 10, 1 << 20), // 256 trillion entries per day with super-sparse indexing

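For orientation, my reading of the enum above (an inference from its comments, not official documentation): the four constructor arguments are the cycle file-name format, the cycle length in milliseconds, the index count, and the index spacing, and the approximate entry capacity per cycle works out to indexCount² × indexSpacing, with the comments rounding in binary units (so "million" means 2^20). A quick sketch checking two of the values:

```java
// Sketch: approximate entries-per-cycle for a roll cycle, assuming the
// capacity formula indexCount^2 * indexSpacing. This is my inference from
// the enum comments above, not a guarantee from Chronicle-Queue itself.
public class RollCycleCapacity {
    static long capacity(int indexCount, int indexSpacing) {
        return (long) indexCount * indexCount * indexSpacing;
    }

    public static void main(String[] args) {
        // HOURLY: indexCount = 4 << 10 = 4096, spacing 16
        //   -> 2^28 = 256 * 2^20, i.e. "256 million entries per hour"
        System.out.println(capacity(4 << 10, 16));  // 268435456
        // DAILY: indexCount = 8 << 10 = 8192, spacing 64
        //   -> 2^32, i.e. "4 billion entries per day"
        System.out.println(capacity(8 << 10, 64));  // 4294967296
    }
}
```

A sparser index (larger spacing) buys capacity at the cost of slower random lookups, which is why the SPARSE/XSPARSE variants exist.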
Chronicle-Queue does not support compression as such, so this is something you can implement with a cron job. However, after a roll, Chronicle-Queue trims the cq4 file, so it naturally becomes much smaller (assuming you are using the latest version of the queue).
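The cron-job approach could be sketched along these lines. This is not part of Chronicle-Logger or Chronicle-Queue: the class name, the log directory, and the one-hour cutoff (to avoid touching the cycle still being written) are all my own assumptions.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.zip.GZIPOutputStream;

// Hypothetical helper, run from cron: gzip rolled .cq4 files that have not
// been modified for at least one roll cycle, then delete the originals.
public class CompressRolledCq4 {
    public static void main(String[] args) throws IOException {
        Path dir = Paths.get(args.length > 0 ? args[0] : "logs/chronicle-log4j2/chronicle");
        // Skip files modified within the last hour: the current HOURLY
        // cycle file may still be appended to.
        long cutoff = System.currentTimeMillis() - 60 * 60 * 1000L;
        try (DirectoryStream<Path> files = Files.newDirectoryStream(dir, "*.cq4")) {
            for (Path cq4 : files) {
                if (Files.getLastModifiedTime(cq4).toMillis() >= cutoff) continue;
                Path gz = Paths.get(cq4 + ".gz");
                try (InputStream in = Files.newInputStream(cq4);
                     OutputStream out = new GZIPOutputStream(Files.newOutputStream(gz))) {
                    in.transferTo(out); // stream the file through gzip
                }
                Files.delete(cq4); // keep only the compressed copy
            }
        }
    }
}
```

Note that once compressed the data is no longer readable by Chronicle-Queue's tailer, so this only makes sense for archived cycles you would otherwise delete.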

hft-team-city commented 4 years ago

Released in Chronicle-Logger-4.19.30, BOM-2.19.184

hft-team-city commented 4 years ago

Released in Chronicle-Logger-4.19.30, BOM-2.19.185

hft-team-city commented 4 years ago

Released in Chronicle-Logger-4.19.30, BOM-2.19.187