splunk / splunk-connect-for-syslog

Splunk Connect for Syslog

SC4S: F5 BIG IP Sourcetype=f5:bigip:crond without time stamp #2015

Closed Jo-Ng closed 1 year ago

Jo-Ng commented 1 year ago

Hi, since SC4S by default drops the incoming timestamp from events, we tried to re-establish it with the necessary template to suit our use case. We are facing issues when trying to set this up for the F5 BIG-IP instances that run on VMware. Please advise how we can get the timestamp back.

1) Put the necessary template in place as below:

cat /apps/sc4s/local/config/destinations/t_custom_time_f5_bigip.conf
template t_custom_time_f5_bigip {
 template('$(format-json *)');
# template('${YEAR}-${MONTH}-${DAY}T${HOUR}:${MIN}:${SEC}${TZ} ${HOST} ${MSGHDR}${MESSAGE}');
};

2) Make the changes in splunk_metadata.csv:

cat /apps/sc4s/local/context/splunk_metadata.csv | grep f5

#Auto recognize syslog type, index, custom index name to override the default
f5_bigip,index,sc4s_multi
f5_bigip_nix,index,sc4s_multi
f5_bigip,sc4s_template,t_custom_time_f5_bigip
f5_bigip_nix,sc4s_template,t_custom_time_f5_bigip
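
(Side note, as a sketch assuming the systemd-managed SC4S deployment described in the SC4S docs: these local overrides are only read when SC4S starts, so restart the service after editing them.)

$ sudo systemctl restart sc4s   # service name assumes the standard systemd unit from the SC4S docs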

3) I also put the "app-vps-f5_bigip.conf" parser in place, as below. But I noticed it does not change anything, as SC4S already recognises these events and files them under "f5:bigip*".

cat app-vps-f5_bigip.conf

#/opt/sc4s/local/config/app-parsers/app-vps-f5_bigip.conf
#File name provided is a suggestion it must be globally unique

application app-vps-test-f5_bigip[sc4s-vps] {
 filter {
       host('xxxxxxxxx0*')
    };
    parser {
        p_set_netsource_fields(
            vendor('f5')
            product('bigip')
        );
    };
};

Outcome: it worked for "sourcetype=f5:bigip:syslog" as below:

2023-03-05T00:05:38+00:00 xxxxxxxsshd[25790]: pam_tacplus: tac_srv[1] addr=10.4.36.61 port=0
2023-03-05T00:05:38+00:00 xxxxxxxsshd[25790]: Disconnected from 10.4.200.143 port 32874
2023-03-05T00:05:38+00:00 xxxxxxxsshd[25790]: Received disconnect from 10.4.200.143 port 32874:11: Closed due to user request.
2023-03-05T00:05:38+00:00 xxxxxxxtmsh[25798]: 01420002:5: AUDIT - pid=25798 user=svc.netautorw.001 folder=/Common module=(tmos)# status=[Syntax Error: unexpected argument "r1=$(ssh"] cmddata=r1=$(ssh

But for "sourcetype=f5:bigip:crond" as below, the timestamp template is not applied:

CROND[16204]: (root) CMD (nice -n 19 ionice -c 3 /usr/sbin/logrotate -s /var/lib/logrotate-monitors.status /etc/monitors/monitors_logrotate.conf)
CROND[12108]: (root) CMD (nice -n 19 ionice -c 3 /usr/sbin/logrotate -s /var/lib/logrotate-adm.status /etc/adm/adm_logrotate.conf)
CROND[16203]: (root) CMD (/usr/lib64/sa/sa1 1 1)
CROND[12107]: (root) CMD (nice -n 19 ionice -c 3 /usr/share/ts/bin/asmlogrotate)

When trying to use "template('$(format-json *)');", it does not process anything at all. It seems like no template is applied at all.

Below is the output from RSYSLOG, which shows that the timestamp is there:

2023-03-05T06:01:02+00:00 XXXXXXX notice run-parts(/etc/cron.hourly)[16547]: finished avr_run_scheduled_reports
2023-03-05T05:25:01+00:00 XXXXXXX info CROND[14709]: (root) CMD (nice -n 19 ionice -c 3 /usr/sbin/logrotate -s /var/lib/logrotate-bwafconf.status /etc/bwafconf/bwafconf_logrotate.conf)
2023-03-05T05:25:01+00:00 XXXXXXX info CROND[14708]: (root) CMD (nice -n 19 ionice -c 3 /usr/sbin/logrotate -s /var/lib/logrotate-monitors.status /etc/monitors/monitors_logrotate.conf)
2023-03-05T04:05:01+00:00 XXXXXXX info CROND[10487]: (root) CMD (nice -n 19 ionice -c 3 /usr/sbin/logrotate -s /var/lib/logrotate-adm.status /etc/adm/adm_logrotate.conf)
2023-03-05T04:01:03+00:00 XXXXXXX notice run-parts(/etc/cron.hourly)[10109]: finished datasyncd_logrotate
2023-03-05T02:51:01+00:00 XXXXXXX info CROND[6037]: (root) CMD (/usr/bin/diskmonitor)
2023-03-05T02:26:01+00:00 XXXXXXX info CROND[4790]: (syscheck) CMD (/usr/bin/system_check -q)

I also noticed that when rawmsg is enabled, the server's events are recorded under the VMware folder:

du -h | grep ukfararkt1dev2lb
36K   ./nix:syslog/xxxxxxx01
132K  ./nix:syslog/xxxxxxx01
28K   ./vmware:esxlog:crond/xxxxxxx01
28K   ./vmware:esxlog:crond/xxxxxxx01
280K  ./f5:bigip:syslog/xxxxxxx01
260K  ./f5:bigip:syslog/xxxxxxx01

Below is the content of the archive log:

cat 2023-03-05-archive.log

<78>1 2023-03-05T07:56:01.000+00:00 d4c6fde90063 @syslog-ng - - - {"fields":{"sc4s_vendor":"vmware","sc4s_syslog_severity":"info","sc4s_syslog_facility":"cron","sc4s_product":"vsphere","sc4s_class":"esx"},"RAWMSG":"<78>Mar 5 07:56:01 XXXXXXX01 info CROND[18580]: (syscheck) CMD (/usr/bin/system_check -q)","PROGRAM":"CROND","PID":"18580","MESSAGE":"(syscheck) CMD (/usr/bin/system_check -q)","LEGACY_MSGHDR":"CROND[18580]: ","HOST_FROM":"xx.xx.xx.xx","HOST":"xxxxxxx01",".splunk":{"sourcetype":"vmware:esxlog:crond","source":"vmware:esxlog:crond","sc4s_template":"t_5424_hdr_sdata_compact","index":"infraops"},".netsource":{"sc4s_vendor_product":"f5_bigip","sc4s_vendor":"f5","sc4s_product":"bigip"},".metadata":{"header":{"log_level":"info"}},".app":{"name":"app-lp-global_archive"},"._TAGS":"wireformat:rfc,wireformat:rfc3164,source_identified,wireformat:rfc3164_wlevel,vps,ns_vendor:f5,ns_product:bigip,.app.app-vps-test-f5_bigip,.app.app-almost-syslogz-wlevelword,.app.app-syslog-vmware_vsphere-pgm,.app.app-lp-global_archive,.source.s_DEFAULT"}

<78>1 2023-03-05T07:58:01.000+00:00 d4c6fde90063 @syslog-ng - - - {"fields":{"sc4s_vendor":"vmware","sc4s_syslog_severity":"info","sc4s_syslog_facility":"cron","sc4s_product":"vsphere","sc4s_class":"esx"},"RAWMSG":"<78>Mar 5 07:58:01 XXXXXXX01 info CROND[18654]: (syscheck) CMD (/usr/bin/system_check -q)","PROGRAM":"CROND","PID":"18654","MESSAGE":"(syscheck) CMD (/usr/bin/system_check -q)","LEGACY_MSGHDR":"CROND[18654]: ","HOST_FROM":"xx.xx.xx.xx","HOST":"xxxxxxx01",".splunk":{"sourcetype":"vmware:esxlog:crond","source":"vmware:esxlog:crond","sc4s_template":"t_5424_hdr_sdata_compact","index":"infraops"},".netsource":{"sc4s_vendor_product":"f5_bigip","sc4s_vendor":"f5","sc4s_product":"bigip"},".metadata":{"header":{"log_level":"info"}},".app":{"name":"app-lp-global_archive"},"._TAGS":"wireformat:rfc,wireformat:rfc3164,source_identified,wireformat:rfc3164_wlevel,vps,ns_vendor:f5,ns_product:bigip,.app.app-vps-test-f5_bigip,.app.app-almost-syslogz-wlevelword,.app.app-syslog-vmware_vsphere-pgm,.app.app-lp-global_archive,.source.s_DEFAULT"}
Jo-Ng commented 1 year ago

PCAP Output

For the events below, there are no issues with the timestamp; the archive is created in /apps/sc4s/archive/ and the events can be found in the log file:

<30>Jan 9 04:55:02 XXXXXXXXLB01.SC.NET info systemd[1]: Starting Cleanup of Temporary Directories...
<77>Jan 9 05:01:01 XXXXXXXXLB02.SC.NET notice run-parts(/etc/cron.hourly)[1765]: starting autodosd_logrotate

For the events below, the timestamp can't be displayed, and when the archive function is enabled in the env_file, no archive is created in /apps/sc4s/archive at all:

<78>Jan 9 04:54:01 XXXXXXXXLB02.SC.NET info CROND[1428]: (syscheck) CMD (/usr/bin/system_check -q)
<78>Jan 9 04:54:01 XXXXXXXXLB01.SC.NET info CROND[23119]: (syscheck) CMD (/usr/bin/system_check -q)

Please advise.
Jo-Ng commented 1 year ago

any update on this?

Jo-Ng commented 1 year ago

It seems that the configuration provided earlier is the troublemaker behind the issues. Questions:

1) What is the correct configuration file that should reside in /apps/sc4s/local/config/app_parsers/ for F5?
2) How do we ensure the correct sourcetype is displayed when using the file from item 1? (Compare the sourcetype of items 1&2 vs 3&4.)
3) Is there a need to hardcode as defined in the Parser Configuration (https://splunk.github.io/splunk-connect-for-syslog/main/sources/vendor/F5/bigip/)?
4) What is the use of "app-syslog-f5_bigip.conf", which I found on GitHub? Is there anything that can be used in place of it? (https://github.com/splunk/splunk-connect-for-syslog/blob/main/package/etc/conf.d/conflib/syslog/app-syslog-f5_bigip.conf)

Echo commands used in the test:

$ echo "<78>$(date +"%b %d %H:%M:%S")xxxxxxxLB02 info CROND[16204]: (root) CMD (nice -n 19 ionice -c 3 /usr/sbin/logrotate -s /var/lib/logrotate-monitors.status /etc/monitors/monitors_logrotate.conf)" > /dev/udp/10.10.10/514
$ echo "<45>$(date +"%b %d %H:%M:%S")xxxxxxxlb01 sshd[16287]: pam_unix(sshd:session): session closed for user svc.netautorw.001" > /dev/udp/10.10.10/514
$ echo "<78>$(date +"%b %d %H:%M:%S") xxxxxxxAA info CROND[16202]: (root) CMD (/usr/lib64/sa/sa1 1 1)" > /dev/udp/10.10.10/514

Parser files involved:

$ cat app-postfilter_bigip.conf

application app-postfilter_bigip[sc4s-postfilter] {
    filter {
        host("xxxxxxx2lb*" type(glob)) or host("xxxxxxxAA") or host('10.10.10.230')
        and "$PROGRAM" eq "CROND"
    };
    parser { app-postfilter_bigip() };
};

block parser app-postfilter_bigip() {
    channel {
        rewrite {
            r_set_splunk_dest_default(
                index('sc4s_multi_sec_int_gdc-w')
                source('f5:bigip')
                sourcetype('f5:bigip:$(lowercase ${PROGRAM})')
                vendor("f5")
                product('bigip')
                class('crond')
            );
        };
    };
};

$ cat app-vps-f5_bigip.conf

#/opt/sc4s/local/config/app-parsers/app-vps-f5_bigip.conf
#File name provided is a suggestion it must be globally unique

application app-vps-test-f5_bigip[sc4s-vps] {
    filter {
        host("xxxxxxxlb*" type(glob)) or host("xxxxxxxAA") or host('10.10.10.230')
    };
    parser {
        p_set_netsource_fields(
            vendor('f5')
            product('bigip')
        );
    };
};

app-syslog-f5_bigip.conf – refer to this link: https://github.com/splunk/splunk-connect-for-syslog/blob/main/package/etc/conf.d/conflib/syslog/app-syslog-f5_bigip.conf

Results of test

1) Use of app-postfilter_bigip.conf only. Timestamp/header: ALL missing

2) app-postfilter_bigip.conf + app-syslog-f5_bigip.conf. Timestamp/header: ALL missing

info CROND[16204]: (root) CMD (nice -n 19 ionice -c 3 /usr/sbin/logrotate -s /var/lib/logrotate-monitors.status /etc/monitors/monitors_logrotate.conf)
info CROND[16204]: (root) CMD (nice -n 19 ionice -c 3 /usr/sbin/logrotate -s /var/lib/logrotate-monitors.status /etc/monitors/monitors_logrotate.conf)
sshd[16287]: pam_unix(sshd:session): session closed for user svc.netautorw.001

3) app-vps-f5_bigip.conf + app-syslog-f5_bigip.conf – working. Timestamp/header: all showing as per the configured parameters

2023-03-30T12:55:50+00:00 xxxxxxxlb01 sshd[16287]: pam_unix(sshd:session): session closed for user svc.netautorw.001
2023-03-30T12:55:39+00:00 xxxxxxxlb02 info CROND[16204]: (root) CMD (nice -n 19 ionice -c 3 /usr/sbin/logrotate -s /var/lib/logrotate-monitors.status /etc/monitors/monitors_logrotate.conf)
2023-03-30T12:55:33+00:00 xxxxxxxAA info CROND[16202]: (root) CMD (/usr/lib64/sa/sa1 1 1)
2023-03-30T12:55:24+00:00 xxxxxxxAA info CROND[16204]: (root) CMD (nice -n 19 ionice -c 3 /usr/sbin/logrotate -s /var/lib/logrotate-monitors.status /etc/monitors/monitors_logrotate.conf)

4) app-vps-f5_bigip.conf only – working. Timestamp/header: all showing as per the configured parameters

Jo-Ng commented 1 year ago

FYI, the "app-bigip-postfilter" given is based on "F5 BigIP CROND events wrongly assigned as VMWare #1969": https://github.com/splunk/splunk-connect-for-syslog/issues/1969

rjha-splunk commented 1 year ago

Hi @Jo-Ng, please find answers to the questions below:

What is the correct configuration file that should reside in /apps/sc4s/local/config/app_parsers/ for F5? Anything which you need to override will reside here. Two postfilters for the same source can't be used; you would need to use a different topic like sc4s-finalfilter (not recommended; we recommend using a loop to keep all operations in a single file).

How to ensure the correct sourcetype is displayed when using the item 1 file? (Compared sourcetype on items 1&2 vs 3&4) Use one file, cover all the corner cases, and don't confuse the parser; either use a postfilter or splunk_metadata.csv (preferred if you have the right key).

Is there a need to hardcode as defined in the Parser Configuration (https://splunk.github.io/splunk-connect-for-syslog/main/sources/vendor/F5/bigip/)? Yes, it helps SC4S identify the source when there is not much difference in the vendor logging pattern.

What is the use of "app-syslog-f5_bigip.conf" found on GitHub? Is there anything that can be used in place of it? (https://github.com/splunk/splunk-connect-for-syslog/blob/main/package/etc/conf.d/conflib/syslog/app-syslog-f5_bigip.conf) This is the default parser; if the configuration is correct at the vps level you don't need any custom configuration, and SC4S will assign the right metadata to the event.

Now, what we need to help you here: please provide one sample of each log and let us know what the values should be (rather than telling us what is not right, as that is confusing :) ), and we will help you write a parser which covers all these cases.

Jo-Ng commented 1 year ago

When you use the "echo" commands below in the test:

$ echo "<78>$(date +"%b %d %H:%M:%S")xxxxxxxLB02 info CROND[16204]: (root) CMD (nice -n 19 ionice -c 3 /usr/sbin/logrotate -s /var/lib/logrotate-monitors.status /etc/monitors/monitors_logrotate.conf)" > /dev/udp/10.10.10/514
$ echo "<45>$(date +"%b %d %H:%M:%S")xxxxxxxlb01 sshd[16287]: pam_unix(sshd:session): session closed for user svc.netautorw.001" > /dev/udp/10.10.10/514
$ echo "<78>$(date +"%b %d %H:%M:%S") xxxxxxxAA info CROND[16202]: (root) CMD (/usr/lib64/sa/sa1 1 1)" > /dev/udp/10.10.10/514

the events show up. But SC4S by default removes the timestamp from the events, where the original message is, for example, "Mar 5 07:56:01 XXXXXXX01 info CROND[18580]: (syscheck) CMD (/usr/bin/system_check -q)".

Our end wants to establish the timestamp like what RSYSLOG displays, to prevent any rework on the use cases due to the changed parameters. Hence we use the format below to establish back the timestamp, including the header. The template in place:

cat /apps/sc4s/local/config/destinations/t_custom_time_f5_bigip.conf

template t_custom_time_f5_bigip {
    template('${YEAR}-${MONTH}-${DAY}T${HOUR}:${MIN}:${SEC}${TZ} ${HOST} ${MSGHDR}${MESSAGE}');
};

So far it works for most LB events, except those CROND events linked to the earlier case ("F5 BigIP CROND events wrongly assigned as VMWare", https://github.com/splunk/splunk-connect-for-syslog/issues/1969):

$ echo "<78>$(date +"%b %d %H:%M:%S") xxxxxxxAA info CROND[16202]: (root) CMD (/usr/lib64/sa/sa1 1 1)" > /dev/udp/10.10.10/514
$ echo "<78>$(date +"%b %d %H:%M:%S")xxxxxxxLB02 info CROND[16204]: (root) CMD (nice -n 19 ionice -c 3 /usr/sbin/logrotate -s /var/lib/logrotate-monitors.status /etc/monitors/monitors_logrotate.conf)" > /dev/udp/10.10.10/514

Hope this is clear.

Jo-Ng commented 1 year ago

Any update?

Jo-Ng commented 1 year ago

@rjha-splunk Any update? It has been 6 weeks now.

Jo-Ng commented 1 year ago

@rjha-splunk are you guys working on this?

rjha-splunk commented 1 year ago

We will post an update on this ASAP; apologies for the delay.

rjha-splunk commented 1 year ago

Please update the parser to the following; it will add the timestamp in RFC 5424 mode:

application app-bigip-postfilter[sc4s-postfilter] {
 filter {
        host("test*" type(glob))
       and "$PROGRAM" eq "CROND"
    };
    parser { app-bigip-postfilter() };
};

block parser app-bigip-postfilter() {
     channel {
        rewrite {
            r_set_splunk_dest_default(
                index('netops')
                source('f5:bigip')
                sourcetype('f5:bigip:$(lowercase ${PROGRAM})')
                vendor("f5")
                product('bigip')
                class('crond')
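                # t_5424_hdr_sdata_msg re-emits the event with an RFC 5424 header, so the timestamp is restored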
                template('t_5424_hdr_sdata_msg')
            );
        };
   };
};

After parsing, the event will look like the following:

2023-01-09T04:52:01.000+00:00 hosttest CROND 23056 - meta sequenceId="3" CMD (/usr/bin/system_check -q)
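
For reference, a quick way to replay a matching test event against the SC4S UDP listener, reusing the echo-style tests from earlier in this thread (the host name test01 and the listener IP 10.10.10.10 are placeholders):

$ echo "<78>$(date +"%b %d %H:%M:%S") test01 info CROND[23056]: (syscheck) CMD (/usr/bin/system_check -q)" > /dev/udp/10.10.10.10/514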

Jo-Ng commented 1 year ago

1) The additional parameters in the message containing " meta sequenceId* " are not acceptable, as this means the event content has been modified and is no longer original.

2) With the "app-bigip-postfilter" parser given, does this mean the "app-vps-f5_bigip.conf" is not required at all? Based on the site, the filter should be there: https://splunk.github.io/splunk-connect-for-syslog/main/sources/vendor/F5/bigip/

3) The rest of the parsers do not need to specify the actual index mapping in the parser ("index('netops')" is just used as-is), except this one needs to be mapped to the actual specific "index(sc4s_multi_secxxxx)". What is the logic behind this?

4) For the event format, our end normally just needs to point to the items below, and then we get the events with the timestamp and header that we need. But why does this F5 parser need it included in the "app-bigip-postfilter" parser itself? Why?
a. "/apps/sc4s/local/config/destinations" for the event templates like the timestamp or header, for example:
template t_scb_custom_time_aruba {
    template('${YEAR}-${MONTH}-${DAY}T${HOUR}:${MIN}:${SEC}${TZ} ${YEAR} ${HOST} ${MSGHDR}${MESSAGE}');
};
b. /apps/sc4s/local/context/splunk_metadata.csv, for example:
aruba_ap,index,sc4s_multi_sec_int_gdc-w
aruba_ap,sc4s_template,t_scb_custom_time_aruba

5) The time format that we want is as below. If I add it into the parser given, no F5 events are displayed at the Portal at all. Could you please help with what could cause it? (Using your timestamp, events do show.)

application app-bigip-postfilter[sc4s-postfilter] {
    filter {
        host("xxxx2lb" type(glob)) or host("jomoni") or host('10.10.10.230')
        and "$PROGRAM" eq "CROND"
    };
    parser { app-bigip-postfilter() };
};

block parser app-bigip-postfilter() {
    channel {
        rewrite {
            r_set_splunk_dest_default(
                index('sc4s_multisec')
                source('f5:bigip')
                sourcetype('f5:bigip:$(lowercase ${PROGRAM})')
                vendor("f5")
                product('bigip')
                class('crond')
                template('${YEAR}-${MONTH}-${DAY}T${HOUR}:${MIN}:${SEC}${TZ} ${HOST} ${MSGHDR}${MESSAGE}')
            );
        };
    };
};

rjha-splunk commented 1 year ago

Thanks @Jo-Ng for the comment; let me review it again and post the updated solution. splunk_metadata plus context templates are first in line after the OOB parsers, so it works in other cases but not here.

rjha-splunk commented 1 year ago
application app-bigip-postfilter[sc4s-postfilter] {
 filter {
        host("ukfar*" type(glob))
       and "$PROGRAM" eq "CROND"
    };
    parser { app-bigip-postfilter() };
};

block parser app-bigip-postfilter() {
     channel {
        rewrite {
            r_set_splunk_dest_default(
                index('netops')
                source('f5:bigip')
                sourcetype('f5:bigip:$(lowercase ${PROGRAM})')
                vendor("f5")
                product('bigip')
                class('crond')
                template('t_everything')
            );
        };
   };
};

The template t_everything will do exactly what you are looking for; the format which you are trying to implement will not work. PFB answers to the questions as well:

The additional parameters in the message containing " meta sequenceId* " are not acceptable, as this means the event content has been modified and is no longer original: template t_everything will remove this bug (I am calling this a bug because it is a known issue in the syslog-ng version).

With the "app-bigip-postfilter" parser given, does this mean the "app-vps-f5_bigip.conf" is not required at all? As based on the site the filter shall be there: https://splunk.github.io/splunk-connect-for-syslog/main/sources/vendor/F5/bigip/ The rest of the parsers do not need to specify the actual index mapping in the parser ("index('netops')", just used as-is), except this one needs to be mapped to the actual specific "index(sc4s_multi_secxxxx)". What is the logic behind this? It is used by many customers; for them it is mostly plug and play with the given parsers, and the key generated by this parser can be used along with splunk_metadata.csv.

For the event format, our end normally just needs to point to the items below, and then we get the events with the timestamp and header that we need. But why does this F5 parser need it included in the "app-bigip-postfilter" parser itself? Why? a. "/apps/sc4s/local/config/destinations" for the event templates like the timestamp or header (example: template t_scb_custom_time_aruba { template('${YEAR}-${MONTH}-${DAY}T${HOUR}:${MIN}:${SEC}${TZ} ${YEAR} ${HOST} ${MSGHDR}${MESSAGE}'); };) b. /apps/sc4s/local/context/splunk_metadata.csv

They are working as designed and built; it works in the first case because it follows the sequence OOB parser > splunk_metadata (context and config) > postfilter.

Jo-Ng commented 1 year ago

Hi, you may have mixed up questions 2 & 3:

  2. With the "app-bigip-postfilter" parser given, does this mean the "app-vps-f5_bigip.conf" is not required at all? Based on the site, the filter should be there: https://splunk.github.io/splunk-connect-for-syslog/main/sources/vendor/F5/bigip/

3. The rest of the parsers do not need to specify the actual index mapping in the parser ("index('netops')" is just used as-is), except this one needs to be mapped to the actual specific "index(sc4s_multi_secxxxx)". What is the logic behind this? What I mean is that for the rest of the parsers we just use "index('netops')", even for custom parsers provided by Splunk PS, and we just need to make the change in /apps/sc4s/local/context/splunk_metadata.csv with the line below:

f5_bigip,index,sc4s_multi_sec_int_gdc-w

Why can't this F5 parser use the setting that is set in splunk_metadata?

I will test "t_everything". Take note that the other parsers all use the configuration set in /apps/sc4s/local/config/destinations/, but F5 is different again...

I may only respond next week as I'm away

Jo-Ng commented 1 year ago

@rjha-splunk Below the output use on the template('t_everything')

Echo:

$ echo "<78>$(date +"%b %d %H:%M:%S") server1 info CROND[23119]: (syscheck) CMD (/usr/bin/system_check -q)" > /dev/udp/10.10.10.10/514

Output:

2023-05-10T08:10:31.000+00:00 server1 CROND[23119]: (syscheck) CMD (/usr/bin/system_check -q)

Issues

The output from RSYSLOG, which is what our end wants and also what the original tcpdump received:

2023-05-10T07:01:01+00:00 serverx info CROND[10028]: (root) CMD (run-parts /etc/cron.hourly)

Please advise on the above query.

rjha-splunk commented 1 year ago

I will check this and get back.

rjha-splunk commented 1 year ago

@Jo-Ng I was not able to replicate the time difference in my system; PFA screenshot. I have also updated the parser to take care of the other use case.

application app-bigip-postfilter[sc4s-postfilter] {
 filter {
        host("test*" type(glob))
       and "$PROGRAM" eq "CROND"
    };
    parser { app-bigip-postfilter() };
};

block parser app-bigip-postfilter() {
     channel {
        rewrite {
            r_set_splunk_dest_default(
                index('netops')
                source('f5:bigip')
                sourcetype('f5:bigip:$(lowercase ${PROGRAM})')
                vendor("f5")
                product('bigip')
                class('crond')
                template('t_msg_only')
            );
        };
       rewrite {
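          # prepend the ISO timestamp, host, and severity level back onto the message body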
          set("${ISODATE} ${HOST} ${LEVEL} ${LEGACY_MSGHDR}${MESSAGE}" value("MESSAGE"));

       };
   };
};

Screenshot 2023-05-11 at 14 04 09

Jo-Ng commented 1 year ago

@rjha-splunk Your output actually does show a time difference: there is an extra .000. RSYSLOG gives 2023-05-10T07:01:01+00:00 serverx info CROND, while your parser shows 2023-01-09T09:01:01.000+00:00.

The template that matches the RSYSLOG output is as below; can this template be put into your parser?

template('${YEAR}-${MONTH}-${DAY}T${HOUR}:${MIN}:${SEC}${TZ} ${HOST} ${MSGHDR}${MESSAGE}');

rjha-splunk commented 1 year ago

@Jo-Ng Of course you can do that; the parser is easily tunable as per the requirement:

application app-bigip-postfilter[sc4s-postfilter] {
 filter {
        host("test*" type(glob))
       and "$PROGRAM" eq "CROND"
    };
    parser { app-bigip-postfilter() };
};

block parser app-bigip-postfilter() {
     channel {
        rewrite {
            r_set_splunk_dest_default(
                index('netops')
                source('f5:bigip')
                sourcetype('f5:bigip:$(lowercase ${PROGRAM})')
                vendor("f5")
                product('bigip')
                class('crond')
                template('t_msg_only')
            );
        };
       rewrite {
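          # rebuild MESSAGE with the RSYSLOG-style timestamp, host, and level header requested above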
          set('${YEAR}-${MONTH}-${DAY}T${HOUR}:${MIN}:${SEC}${TZ} ${HOST} ${LEVEL} ${MSGHDR}${MESSAGE}' value("MESSAGE"));

       };
   };
};
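
As a rough verification sketch (reusing the echo-style tests from earlier in this thread; the host name test01 and listener IP 10.10.10.10 are placeholders), the rewritten MESSAGE should now carry the RSYSLOG-style header:

$ echo "<78>$(date +"%b %d %H:%M:%S") test01 info CROND[16204]: (root) CMD (run-parts /etc/cron.hourly)" > /dev/udp/10.10.10.10/514
# expected MESSAGE (approximately): <YYYY-MM-DDTHH:MM:SS+00:00> test01 info CROND[16204]: (root) CMD (run-parts /etc/cron.hourly)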

Screenshot 2023-05-11 at 20 20 50