Closed adri8n closed 6 days ago
Ok, I think the issue is with the /etc/syslog-ng/conf.d/sources/source_syslog jinja2 template:
```jinja
{%- if not vendor or not product %}
{%- if use_vpscache == True %}
if {
    parser(p_vpst_cache);
};
{%- endif %}
if {
    parser(vendor_product_by_source);
};
{%- endif %}  {# <-- this endif #}
```
The conditional checks for a missing vendor or product, which I'm guessing is meant to trigger a VPS cache check. However, the endif also encompasses the `parser(vendor_product_by_source);` line, which has the effect of bypassing this code path altogether.
My guess is that the endif should be moved up, directly after the previous endif:
```jinja
{%- if not vendor or not product %}
{%- if use_vpscache == True %}
if {
    parser(p_vpst_cache);
};
{%- endif %}
{%- endif %}  {# <-- moved up #}
if {
    parser(vendor_product_by_source);
};
```
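To make the scoping difference concrete, here's a minimal Python sketch that mirrors the template's nested conditionals (the function and flag names are illustrative, not actual SC4S code):

```python
def parsers_invoked(vendor, product, use_vpscache, fixed=False):
    """Return the parsers the template would emit, mirroring the nested
    {% if %} scoping. fixed=True models the proposed fix, where the outer
    endif is moved above the parser(vendor_product_by_source) call."""
    invoked = []
    if not vendor or not product:
        if use_vpscache:
            invoked.append("p_vpst_cache")
        if not fixed:
            # Original template: this parser call is still inside the outer
            # conditional, so it is skipped when vendor and product are set.
            invoked.append("vendor_product_by_source")
    if fixed:
        # Fixed template: the call is outside both conditionals.
        invoked.append("vendor_product_by_source")
    return invoked

# With vendor/product already resolved, the original template emits nothing:
print(parsers_invoked("infoblox", "nios", use_vpscache=True))              # []
print(parsers_invoked("infoblox", "nios", use_vpscache=True, fixed=True))  # ['vendor_product_by_source']
```

This matches the symptom described below: once a source has vendor and product set, `vendor_product_by_source` is never invoked under the original template.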
Pull request submitted: https://github.com/splunk/splunk-connect-for-syslog/pull/2457
A sample of how this can be implemented with a more modern method:
```
block parser app-dest-new-cef() {
    channel {
        parser {
            add-contextual-data(
                selector(filters("`syslog-ng-sysconfdir`/conf.d/local/context/vendor_product_by_source.conf")),
                database("`syslog-ng-sysconfdir`/conf.d/local/context/vendor_product_by_source.csv")
                ignore-case(yes)
                prefix(".netsource.")
            );
        };
    };
};

application app-dest-new-cef[sc4s-postfilter] {
    filter {
        tags(".source.s_INFOBLOX_NIOS_THREAT");
    };
    parser {
        app-dest-new-cef();
    };
};
```
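For context, syslog-ng's `add-contextual-data()` expects its `database()` CSV in `selector,name,value` form, with each `name` getting the configured `prefix()` prepended. A hypothetical couple of rows matching the setup above might look like the following (the selector, field names, and values are illustrative assumptions, not taken from an actual deployment):

```csv
s_INFOBLOX_NIOS_THREAT,vendor,infoblox
s_INFOBLOX_NIOS_THREAT,product,nios
```

With `prefix(".netsource.")` set, these would surface as `.netsource.vendor` and `.netsource.product` name-value pairs on matching messages.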
Was the issue replicated by support? No
What is the sc4s version? 3.22.3
Is there a pcap available? No
Is the issue related to the environment of the customer, or a software-related issue? No
Is it related to data loss (please explain)? Protocol? Hardware specs? No
Last chance index/Fallback index? No
Is the issue related to local customization? Yes
Do we have all the default indexes created? Yes
Describe the bug
Have two unique listen ports defined in env_file:
Would like to use the SC4S receive time instead of the timestamp in the event, so as a test set:
and
With default sources, parser(vendor_product_by_source) is called, so the sc4s_use_recv_time field is set and the timestamp sent to Splunk Cloud is correct.
With the configuration above, I can't find any place in the config/code/log path where it is called, so the timestamp never gets replaced.
If I add a custom parser and call it, it works:
If I call it here, it also works (but this is likely not the best place to do it):
Any guidance would be appreciated!
Happy to submit a pull request if I can determine where the appropriate place to call this would be.