Apologies, the holiday season rush and end-of-year items have gotten hold of me. I will update my install and re-generate the dashboard as soon as I can
@revere521 - no worries and no rush. I apologize for the frequent breaking changes but this should be the last of them (now compliant with ECS) and hopefully just enhancements and updates in the future (more stability). Thanks again!
I need to recheck the config, because the last config changes broke the grok:
[2020-12-14T20:12:58,897][WARN ][logstash.outputs.elasticsearch][main][bad9c287e837e6b0c0c966810c3fa2ce686b120079f0491f0c00cf79246b8b0f] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"pfelk-haproxy-2020.12.14", :routing=>nil, :_type=>"_doc"}, #<LogStash::Event:0x7b1c7d55>], :response=>{"index"=>{"_index"=>"pfelk-haproxy-2020.12.14", "_type"=>"_doc", "_id"=>"wMLiYnYBdc5AuXopfNKQ", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [haproxy.timestamp] of type [date] in document with id 'wMLiYnYBdc5AuXopfNKQ'. Preview of field's value: '14/Dec/2020:21:12:58.791'", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"failed to parse date field [14/Dec/2020:21:12:58.791] with format [strict_date_optional_time||epoch_millis]", "caused_by"=>{"type"=>"date_time_parse_exception", "reason"=>"Failed to parse with all enclosed parsers"}}}}}}
Once I have the HAProxy data back in Elasticsearch, I will take care of the dashboard export.
@BeNeDeLuX - The GROK pattern is good. The error is related to the template and, more specifically, the date/time format: Elasticsearch falls back to its default [strict_date_optional_time||epoch_millis] date format, which cannot parse a value like 14/Dec/2020:21:12:58.791.
I amended the GROK pattern to align the haproxy logs with their ECS equivalents (i.e. Filebeat [https://www.elastic.co/guide/en/beats/filebeat/current/exported-fields-haproxy.html]). There were a couple of fields that I was unable to amend, as I didn't have context as to what their ECS equivalent might be.
Give me a few and I'll amend the template for HAProxy...which should fix the error specified above.
Current GROK pattern for HAPROXY fields:
client.ip - ECS
haproxy.timestamp - non-ECS but okay :boom:
haproxy.frontend_name - ECS (Filebeat module)
haproxy.backend_name - ECS (Filebeat module)
haproxy.server_name - ECS (Filebeat module)
haproxy.time_request - non-ECS :boom:
haproxy.time_queue - ECS (Filebeat module)
haproxy.time_backend_connect - ECS (Filebeat module)
haproxy.time_backend_response - non-ECS :boom:
host.uptime - ECS
http.response.status_code - ECS
haproxy.bytes_read - ECS (Filebeat module)
haproxy.http.request.captured_cookie - ECS (Filebeat module)
haproxy.http.response.captured_cookie - ECS (Filebeat module)
haproxy.termination_state - ECS (Filebeat module)
haproxy.connections.active - ECS (Filebeat module)
haproxy.connections.frontend - ECS (Filebeat module)
haproxy.connections.backend - ECS (Filebeat module)
haproxy.connections.server - ECS (Filebeat module)
haproxy.connections.retries - ECS (Filebeat module)
haproxy.server_queue - ECS (Filebeat module)
haproxy.backend_queue - ECS (Filebeat module)
http.request.method - ECS
user.name - ECS
http.request.referrer - ECS
http.mode - ECS
http.version - ECS
Let me know if the two non-ECS :boom: fields above can be changed to ECS-compliant names; a sketch of what such a rename might look like follows below.
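For illustration only, a minimal Logstash sketch of how the two fields could be renamed once ECS-compliant targets are agreed on (the target names below are placeholders, not confirmed Filebeat/ECS fields):

filter {
  mutate {
    # Placeholder targets - substitute whatever ECS-compliant names are agreed on.
    rename => {
      "[haproxy][time_request]"          => "[haproxy][http][request][time_wait_ms]"
      "[haproxy][time_backend_response]" => "[haproxy][http][response][time_ms]"
    }
  }
}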
@BeNeDeLuX
Give this template (haproxy) a try, which specifies the date/time format from the above:
PUT _index_template/pfelk-haproxy
{
"version": 8,
"priority": 90,
"template": {
"settings": {
"index": {
"lifecycle": {
"name": "pfelk-ilm"
}
}
},
"mappings": {
"_routing": {
"required": false
},
"numeric_detection": false,
"dynamic_date_formats": [
"strict_date_optional_time",
"yyyy/MM/dd HH:mm:ss Z||yyyy/MM/dd Z",
"dd/MMM/yyyy:HH:mm:ss.Z"
],
"dynamic": true,
"_source": {
"excludes": [],
"includes": [],
"enabled": true
},
"date_detection": true,
"properties": {
"haproxy": {
"type": "object",
"properties": {
"server_name": {
"eager_global_ordinals": false,
"norms": false,
"index": true,
"store": false,
"type": "keyword",
"fields": {
"text": {
"type": "text"
}
},
"index_options": "docs",
"split_queries_on_whitespace": false,
"doc_values": true
},
"termination_state": {
"eager_global_ordinals": false,
"norms": false,
"index": true,
"store": false,
"type": "keyword",
"fields": {
"text": {
"type": "text"
}
},
"index_options": "docs",
"split_queries_on_whitespace": false,
"doc_values": true
},
"time_queue": {
"type": "long"
},
"bytes_read": {
"type": "long"
},
"mode": {
"type": "keyword"
},
"backend_queue": {
"type": "long"
},
"backend_name": {
"eager_global_ordinals": false,
"norms": false,
"index": true,
"store": false,
"type": "keyword",
"fields": {
"text": {
"type": "text"
}
},
"index_options": "docs",
"split_queries_on_whitespace": false,
"doc_values": true
},
"frontend_name": {
"eager_global_ordinals": false,
"norms": false,
"index": true,
"store": false,
"type": "keyword",
"fields": {
"text": {
"type": "text"
}
},
"index_options": "docs",
"split_queries_on_whitespace": false,
"doc_values": true
},
"http": {
"type": "object",
"properties": {
"request": {
"type": "object",
"properties": {
"captured_cookie": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword"
}
}
}
}
},
"response": {
"type": "object",
"properties": {
"captured_cookie": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword"
}
}
}
}
}
}
},
"server_queue": {
"type": "long"
},
"time_backend_connect": {
"type": "long"
},
"connections": {
"type": "object",
"properties": {
"retries": {
"type": "long"
},
"server": {
"type": "long"
},
"active": {
"type": "long"
},
"backend": {
"type": "long"
},
"frontend": {
"type": "long"
}
}
},
"timestamp": {
"type": "date"
}
}
}
}
}
},
"index_patterns": [
"pfelk-haproxy-*"
],
"composed_of": [
"pfelk-settings",
"pfelk-mappings-ecs"
],
"_meta": {
"description": "default haproxy indexes installed by pfelk",
"managed": true
}
}
e.g. it takes
14/Dec/2020:21:12:58.791
and maps it within the template as dd/MMM/yyyy:HH:mm:ss.Z
I would delete the current template (haproxy) then apply this one.
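For reference, that swap can be done from the Kibana Dev Tools console (a minimal sketch, assuming the template name used above):

DELETE _index_template/pfelk-haproxy
# ...then re-run the PUT _index_template/pfelk-haproxy request shown above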
@a3ilson I updated to the latest versions in the repo and have everything running besides snort.
Do I need to create a template for the snort index pattern, or do the snort fields reside inside the firewall index pattern?
No snort pattern needed; snort fields will conform to ECS, which is taken care of by the pfelk-mappings...
@revere521 - I am also curious whether you've hit any quirks with your setup using the new configuration files. I'm trying to pin down the issue in #224/#223 (i.e. to rule out pfSense as the cause).
Everything seems to be working fine so far, but it'll be collecting data over the course of the day while I recreate everything.
In the DHCP index, I am seeing a _grokparsefailure for this log message:
<187>Dec 21 12:18:14 dhcpd: uid lease 192.168.1.229 for client a0:48:1c:a3:6b:69 is duplicate on 192.168.1.0/24
but that has existed since the very beginning.
Otherwise, looks fine. I'm also using maxmind for geolocation if that helps
Are the GROK failures all similar in syntax? I'll work on the one provided (i.e. add a GROK pattern). Also, do any of your dashboards work? Three issues (two here and on the docker repo) all report similar problems, and I'm wondering whether it's linked to pfSense.
The Firewall, DHCP, and Unbound dashboards all work just fine. That uid duplicate message is all the same syntax; it's just a strange DHCP error. Some devices on my network must be querying DHCP for an IP when they already have one.
I made a tweak to the GROK which should parse out anything that doesn't match...after I've tested/validated, I'll post/update.
Your particular message would be nice to break out (MAC and IP)...do you have any others that start with uid (immediately after the dhcpd:)?
Here are several of those messages; it appears that all messages with the "dhcpd: uid lease" statement are this same duplicate message.
Do you have any guidance for snort? I don't see any fields in the pfelk-firewall-* index with snort in the name.
<187>Dec 21 13:59:39 dhcpd: uid lease 192.168.1.226 for client 00:05:cd:7a:a0:06 is duplicate on 192.168.1.0/24
<187>Dec 21 13:59:39 dhcpd: uid lease 192.168.1.226 for client 00:05:cd:7a:a0:06 is duplicate on 192.168.1.0/24
<187>Dec 21 13:59:05 dhcpd: uid lease 192.168.1.231 for client b8:27:eb:6e:71:db is duplicate on 192.168.1.0/24
<187>Dec 21 13:59:05 dhcpd: uid lease 192.168.1.231 for client b8:27:eb:6e:71:db is duplicate on 192.168.1.0/24
<187>Dec 21 13:35:49 dhcpd: uid lease 192.168.1.223 for client 74:da:38:8b:f5:e7 is duplicate on 192.168.1.0/24
<187>Dec 21 13:35:49 dhcpd: uid lease 192.168.1.223 for client 74:da:38:8b:f5:e7 is duplicate on 192.168.1.0/24
<187>Dec 21 13:18:15 dhcpd: uid lease 192.168.1.229 for client a0:48:1c:a3:6b:69 is duplicate on 192.168.1.0/24
<187>Dec 21 13:18:15 dhcpd: uid lease 192.168.1.229 for client a0:48:1c:a3:6b:69 is duplicate on 192.168.1.0/24
<187>Dec 21 12:59:39 dhcpd: uid lease 192.168.1.226 for client 00:05:cd:7a:a0:06 is duplicate on 192.168.1.0/24
<187>Dec 21 12:59:39 dhcpd: uid lease 192.168.1.226 for client 00:05:cd:7a:a0:06 is duplicate on 192.168.1.0/24
<187>Dec 21 12:59:05 dhcpd: uid lease 192.168.1.231 for client b8:27:eb:6e:71:db is duplicate on 192.168.1.0/24
<187>Dec 21 12:59:05 dhcpd: uid lease 192.168.1.231 for client b8:27:eb:6e:71:db is duplicate on 192.168.1.0/24
I just updated the GROK pattern, which will parse it out as uid lease 192.168.1.223 for client 74:da:38:8b:f5:e7 is duplicate on 192.168.1.0/24.
Is it worth adding a uid filter to parse these out (e.g. lease.ip, client.mac, a tag for duplicates)? A sketch follows below. I'm curious, but suspect they are VMs/Docker and/or routed through another switch/router.
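For illustration, a minimal sketch of such a uid filter (the lease.ip/client.mac field names are the hypothetical ones floated above, not fields pfelk currently emits):

filter {
  if "dhcpd: uid lease" in [message] {
    grok {
      # Break out the lease IP, client MAC, and subnet from the duplicate-lease message.
      match => { "message" => "uid lease %{IP:[lease][ip]} for client %{MAC:[client][mac]} is duplicate on %{IP:[subnet][ip]}/%{INT:[subnet][prefix]}" }
      add_tag => [ "dhcp_duplicate_lease" ]
    }
  }
}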
I would call it low priority; I'm not exactly sure what's happening, but they seem to be transient errors on my home network. I haven't really devoted time to figuring out the cause.
@revere521 - snort fields are parsed under the snort object from 10-apps.conf. This will parse out snort events in alignment with ECS (i.e. ECS compliant). Hopefully that clarifies everything, but essentially the previous snort template was absorbed within the already codified ECS. However, if you note fields that deviate, let me know and we can align them under an object, similar to what I did with those under the pf object (e.g. firewall-specific values that did not align with ECS).
I see that this is a standardized formatting for field names. It's going to take me a bit to figure out how this works...
Let me know if you need any help. All the fields should be the same minus the preceding snort object.
To make sure I'm not chasing a wild goose:
Correct.
pfelk-firewall-*, pfelk-snort-*, pfelk-squid-*, pfelk-unbound-* are the index patterns; pfelk-settings and pfelk-mappings-ecs are the components, which essentially apply the settings and field mappings. See screenshots:
The first screenshot depicts where the index components (settings/mappings) are referenced.
The second is creating the snort-specific index pattern (Kibana).
The third is the settings (standardization) for the new snort index pattern (allowing others to build/modify and share).
Once created, I export all elements, including the index pattern... I named the saved objects pfelk-firewall-*, Firewall -*,
etc.; this allows you to easily navigate to the saved objects and search and retrieve all objects with that naming scheme.
So for all snort-related objects, I would recommend prefixing each with Snort -
so that all objects may be retrieved by searching "snort", similar to the following image illustrating unbound:
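The export can also be scripted against the Kibana saved objects API (a sketch; assumes Kibana on localhost:5601 and exports by type, with the name-based filtering done afterwards):

curl -X POST "http://localhost:5601/api/saved_objects/_export" \
  -H "kbn-xsrf: true" \
  -H "Content-Type: application/json" \
  -d '{"type": ["index-pattern", "search", "visualization", "dashboard"]}' \
  > pfelk-saved-objects.ndjson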
Ok, good deal...I think I'm back in gear now.
I'll have the dashboard with all the visualizations etc. exported later this evening. I'm assuming I'll export the index template this time too, so others won't have to create it manually.
Correct and thanks!
Ok, should be all set now – updated and uploaded
I updated the dashboard based on the revised changes (ECS). When you have a moment, please test/validate. It's posted here
@a3ilson Sorry for the delay, and that you had to step in on the HAProxy data yourself. I installed a new machine from the git checkout (7d5bed7) 2h ago. The error is still the same:
[2020-12-26T20:26:51,090][WARN ][logstash.outputs.elasticsearch][main][738ca59c22e4c4e7492fa1580ff45aa25298ae32f26b0ca6d8c0b94d1f2670d5] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"pfelk-haproxy-2020.12", :routing=>nil, :_type=>"_doc"}, #<LogStash::Event:0x3a78ff4e>], :response=>{"index"=>{"_index"=>"pfelk-haproxy-2020.12", "_type"=>"_doc", "_id"=>"fXi7oHYBk2RZtOPXf3BR", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [haproxy.timestamp] of type [date] in document with id 'fXi7oHYBk2RZtOPXf3BR'. Preview of field's value: '26/Dec/2020:21:26:51.011'", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"failed to parse date field [26/Dec/2020:21:26:51.011] with format [strict_date_optional_time||epoch_millis]", "caused_by"=>{"type"=>"date_time_parse_exception", "reason"=>"Failed to parse with all enclosed parsers"}}}}}}
[2020-12-26T20:26:51,832][WARN ][logstash.outputs.elasticsearch][main][738ca59c22e4c4e7492fa1580ff45aa25298ae32f26b0ca6d8c0b94d1f2670d5] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"pfelk-haproxy-2020.12", :routing=>nil, :_type=>"_doc"}, #<LogStash::Event:0x37082089>], :response=>{"index"=>{"_index"=>"pfelk-haproxy-2020.12", "_type"=>"_doc", "_id"=>"kXi7oHYBk2RZtOPXgnA4", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [haproxy.timestamp] of type [date] in document with id 'kXi7oHYBk2RZtOPXgnA4'. Preview of field's value: '26/Dec/2020:21:26:51.707'", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"failed to parse date field [26/Dec/2020:21:26:51.707] with format [strict_date_optional_time||epoch_millis]", "caused_by"=>{"type"=>"date_time_parse_exception", "reason"=>"Failed to parse with all enclosed parsers"}}}}}}
[2020-12-26T20:26:52,950][WARN ][logstash.outputs.elasticsearch][main][738ca59c22e4c4e7492fa1580ff45aa25298ae32f26b0ca6d8c0b94d1f2670d5] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"pfelk-haproxy-2020.12", :routing=>nil, :_type=>"_doc"}, #<LogStash::Event:0x7c7d1c2>], :response=>{"index"=>{"_index"=>"pfelk-haproxy-2020.12", "_type"=>"_doc", "_id"=>"n3i7oHYBk2RZtOPXhnCU", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [haproxy.timestamp] of type [date] in document with id 'n3i7oHYBk2RZtOPXhnCU'. Preview of field's value: '26/Dec/2020:21:26:52.846'", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"failed to parse date field [26/Dec/2020:21:26:52.846] with format [strict_date_optional_time||epoch_millis]", "caused_by"=>{"type"=>"date_time_parse_exception", "reason"=>"Failed to parse with all enclosed parsers"}}}}}}
[2020-12-26T20:26:53,671][WARN ][logstash.outputs.elasticsearch][main][738ca59c22e4c4e7492fa1580ff45aa25298ae32f26b0ca6d8c0b94d1f2670d5] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"pfelk-haproxy-2020.12", :routing=>nil, :_type=>"_doc"}, #<LogStash::Event:0x445556b4>], :response=>{"index"=>{"_index"=>"pfelk-haproxy-2020.12", "_type"=>"_doc", "_id"=>"pHi7oHYBk2RZtOPXiXBl", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [haproxy.timestamp] of type [date] in document with id 'pHi7oHYBk2RZtOPXiXBl'. Preview of field's value: '26/Dec/2020:21:26:53.555'", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"failed to parse date field [26/Dec/2020:21:26:53.555] with format [strict_date_optional_time||epoch_millis]", "caused_by"=>{"type"=>"date_time_parse_exception", "reason"=>"Failed to parse with all enclosed parsers"}}}}}}
@BeNeDeLuX - Give this a try:
PUT _component_template/pfelk-settings
{
"version": 8,
"template": {
"settings": {
"index": {
"mapping": {
"total_fields": {
"limit": "10000"
}
},
"refresh_interval": "5s"
}
},
"mappings": {
"_routing": {
"required": false
},
"numeric_detection": false,
"dynamic_date_formats": [
"strict_date_optional_time",
"yyyy/MM/dd HH:mm:ss Z||yyyy/MM/dd Z",
"dd/MMM/yyyy:HH:mm:ss.Z"
],
"dynamic": true,
"_source": {
"excludes": [],
"includes": [],
"enabled": true
},
"date_detection": true
}
},
"_meta": {
"description": "default settings for the pfelk indexes installed by pfelk",
"managed": true
}
}
Summary: I had updated the haproxy template but overlooked the component template. I've now added the haproxy time format within the pfelk-settings component template, which should parse the date.
@a3ilson I updated according to your last changes to the template and restarted Logstash, but the error is still the same:
[2020-12-28T18:51:52,033][INFO ][org.logstash.beats.Server][main][Beats] Starting server on port: 5044
[2020-12-28T18:51:52,040][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2020-12-28T18:51:52,070][INFO ][logstash.inputs.udp ][main][pfelk-2] Starting UDP listener {:address=>"0.0.0.0:5141"}
[2020-12-28T18:51:52,079][INFO ][logstash.inputs.udp ][main][pfelk-1] Starting UDP listener {:address=>"0.0.0.0:5140"}
[2020-12-28T18:51:52,103][INFO ][logstash.inputs.udp ][main][pfelk-haproxy] Starting UDP listener {:address=>"0.0.0.0:5190"}
[2020-12-28T18:51:52,128][INFO ][logstash.inputs.udp ][main][pfelk-2] UDP listener started {:address=>"0.0.0.0:5141", :receive_buffer_bytes=>"106496", :queue_size=>"2000"}
[2020-12-28T18:51:52,129][INFO ][logstash.inputs.udp ][main][pfelk-1] UDP listener started {:address=>"0.0.0.0:5140", :receive_buffer_bytes=>"106496", :queue_size=>"2000"}
[2020-12-28T18:51:52,134][INFO ][logstash.inputs.udp ][main][pfelk-haproxy] UDP listener started {:address=>"0.0.0.0:5190", :receive_buffer_bytes=>"106496", :queue_size=>"2000"}
[2020-12-28T18:51:52,238][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2020-12-28T18:51:54,021][WARN ][logstash.outputs.elasticsearch][main][738ca59c22e4c4e7492fa1580ff45aa25298ae32f26b0ca6d8c0b94d1f2670d5] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"pfelk-haproxy-2020.12", :routing=>nil, :_type=>"_doc"}, #<LogStash::Event:0x3b0dc6b1>], :response=>{"index"=>{"_index"=>"pfelk-haproxy-2020.12", "_type"=>"_doc", "_id"=>"-4exqnYBk2RZtOPXSVMd", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [haproxy.timestamp] of type [date] in document with id '-4exqnYBk2RZtOPXSVMd'. Preview of field's value: '28/Dec/2020:19:51:53.507'", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"failed to parse date field [28/Dec/2020:19:51:53.507] with format [strict_date_optional_time||epoch_millis]", "caused_by"=>{"type"=>"date_time_parse_exception", "reason"=>"Failed to parse with all enclosed parsers"}}}}}}
[2020-12-28T18:51:54,646][WARN ][logstash.outputs.elasticsearch][main][738ca59c22e4c4e7492fa1580ff45aa25298ae32f26b0ca6d8c0b94d1f2670d5] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"pfelk-haproxy-2020.12", :routing=>nil, :_type=>"_doc"}, #<LogStash::Event:0x2f6aa706>], :response=>{"index"=>{"_index"=>"pfelk-haproxy-2020.12", "_type"=>"_doc", "_id"=>"_IexqnYBk2RZtOPXS1OS", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [haproxy.timestamp] of type [date] in document with id '_IexqnYBk2RZtOPXS1OS'. Preview of field's value: '28/Dec/2020:19:51:54.464'", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"failed to parse date field [28/Dec/2020:19:51:54.464] with format [strict_date_optional_time||epoch_millis]", "caused_by"=>{"type"=>"date_time_parse_exception", "reason"=>"Failed to parse with all enclosed parsers"}}}}}}
The template is updated:
GET _component_template/pfelk-settings
Output:
{
"component_templates" : [
{
"name" : "pfelk-settings",
"component_template" : {
"template" : {
"settings" : {
"index" : {
"mapping" : {
"total_fields" : {
"limit" : "10000"
}
},
"refresh_interval" : "5s"
}
},
"mappings" : {
"_routing" : {
"required" : false
},
"numeric_detection" : false,
"dynamic_date_formats" : [
"strict_date_optional_time",
"yyyy/MM/dd HH:mm:ss Z||yyyy/MM/dd Z",
"dd/MMM/yyyy:HH:mm:ss.Z"
],
"dynamic" : true,
"_source" : {
"excludes" : [ ],
"includes" : [ ],
"enabled" : true
},
"date_detection" : true
}
},
"version" : 8,
"_meta" : {
"managed" : true,
"description" : "default settings for the pfelk indexes installed by pfelk"
}
}
}
]
}
All in all, the timestamp format looks like it matches the one you set in the settings. I don't get it...
Let me go back and brush up on the original solution (prior issue), but essentially the error indicates that it is unable to parse the field [haproxy.timestamp] as a date type.
Let me dig into this in a little bit...finishing up a modification to the installer script (i.e. adding the dashboard installation).
@BeNeDeLuX - Alright...try downloading and installing the updated haproxy and pfelk-settings templates. I had the date format as dd/MMM/yyyy:HH:mm:ss.Z
where it should have been dd/MMM/yyyy:hh:mm:ss:SSS
@a3ilson No success so far :-/ I did a completely fresh install of pfelk yesterday. The error is still the same:
[2020-12-30T20:00:05,261][WARN ][logstash.outputs.elasticsearch][main][6b920fa6645a731a7a8ab58459b0bba082e60683b281f429d4f2d5d3b0cf2e3c] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"pfelk-haproxy", :routing=>nil, :_type=>"_doc"}, #<LogStash::Event:0x4226f42>], :response=>{"index"=>{"_index"=>"pfelk-haproxy-000001", "_type"=>"_doc", "_id"=>"f1I8tXYBYLDEKXuDbhGN", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [haproxy.timestamp] of type [date] in document with id 'f1I8tXYBYLDEKXuDbhGN'. Preview of field's value: '30/Dec/2020:21:00:05.116'", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"failed to parse date field [30/Dec/2020:21:00:05.116] with format [strict_date_optional_time||epoch_millis]", "caused_by"=>{"type"=>"date_time_parse_exception", "reason"=>"Failed to parse with all enclosed parsers"}}}}}}
[2020-12-30T20:00:09,398][WARN ][logstash.outputs.elasticsearch][main][6b920fa6645a731a7a8ab58459b0bba082e60683b281f429d4f2d5d3b0cf2e3c] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"pfelk-haproxy", :routing=>nil, :_type=>"_doc"}, #<LogStash::Event:0x35ba2e69>], :response=>{"index"=>{"_index"=>"pfelk-haproxy-000001", "_type"=>"_doc", "_id"=>"pVI8tXYBYLDEKXuDfhG2", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [haproxy.timestamp] of type [date] in document with id 'pVI8tXYBYLDEKXuDfhG2'. Preview of field's value: '30/Dec/2020:21:00:08.913'", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"failed to parse date field [30/Dec/2020:21:00:08.913] with format [strict_date_optional_time||epoch_millis]", "caused_by"=>{"type"=>"date_time_parse_exception", "reason"=>"Failed to parse with all enclosed parsers"}}}}}}
[2020-12-30T20:00:13,073][WARN ][logstash.outputs.elasticsearch][main][6b920fa6645a731a7a8ab58459b0bba082e60683b281f429d4f2d5d3b0cf2e3c] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"pfelk-haproxy", :routing=>nil, :_type=>"_doc"}, #<LogStash::Event:0x577abd16>], :response=>{"index"=>{"_index"=>"pfelk-haproxy-000001", "_type"=>"_doc", "_id"=>"t1I8tXYBYLDEKXuDjREQ", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [haproxy.timestamp] of type [date] in document with id 't1I8tXYBYLDEKXuDjREQ'. Preview of field's value: '30/Dec/2020:21:00:12.918'", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"failed to parse date field [30/Dec/2020:21:00:12.918] with format [strict_date_optional_time||epoch_millis]", "caused_by"=>{"type"=>"date_time_parse_exception", "reason"=>"Failed to parse with all enclosed parsers"}}}}}}
[2020-12-30T20:00:13,800][WARN ][logstash.outputs.elasticsearch][main][6b920fa6645a731a7a8ab58459b0bba082e60683b281f429d4f2d5d3b0cf2e3c] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"pfelk-haproxy", :routing=>nil, :_type=>"_doc"}, #<LogStash::Event:0x448ccb08>], :response=>{"index"=>{"_index"=>"pfelk-haproxy-000001", "_type"=>"_doc", "_id"=>"u1I8tXYBYLDEKXuDjxHo", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [haproxy.timestamp] of type [date] in document with id 'u1I8tXYBYLDEKXuDjxHo'. Preview of field's value: '30/Dec/2020:21:00:13.662'", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"failed to parse date field [30/Dec/2020:21:00:13.662] with format [strict_date_optional_time||epoch_millis]", "caused_by"=>{"type"=>"date_time_parse_exception", "reason"=>"Failed to parse with all enclosed parsers"}}}}}}
I will recheck the timestamp format against the syntax. Maybe we need to change to a capital "H"?
H - hour-of-day (0-23), e.g. 0
h - clock-hour-of-am-pm (1-12), e.g. 12
--> https://docs.oracle.com/javase/8/docs/api/java/time/format/DateTimeFormatter.html
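A quick way to test a candidate format against one of the sample values above, assuming a throwaway index in Kibana Dev Tools:

PUT datetest
{
  "mappings": {
    "properties": {
      "ts": { "type": "date", "format": "dd/MMM/yyyy:HH:mm:ss.SSS" }
    }
  }
}

POST datetest/_doc
{ "ts": "30/Dec/2020:21:00:05.116" }

With HH this indexes cleanly; with hh it fails on the 21, since clock-hour only runs 1-12.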
Side note: The template and all dashboards were successfully and automatically installed 👍
@BeNeDeLuX - Thanks, and sorry again for the back and forth...I think we got it this time. I corrected the end (Z for SSS) but overlooked the capital HH. I made the changes within the repo, which now reflect dd/MMM/yyyy:HH:mm:ss:SSS
No need to reinstall... you can download and run the template script which should overwrite the previous templates. This can be done by:
wget https://raw.githubusercontent.com/pfelk/pfelk/master/pfelk-template-installer.sh
sudo chmod +x pfelk-template-installer.sh
sudo ./pfelk-template-installer.sh
Once accomplished, you'll need to restart Logstash and maybe clear your indices. Additionally, you can check whether the changes took by navigating to the mappings within both the haproxy template and the pfelk-settings component, as depicted below (you could also amend them in place and forgo reinstalling the templates).
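For reference, the restart and optional cleanup might look like this (the wildcard delete is destructive and the index names are assumed from the logs above):

sudo systemctl restart logstash

# ...and from Kibana Dev Tools, to clear the old haproxy indices:
DELETE pfelk-haproxy-*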
@a3ilson No reason to be thankful - I'm glad I can help! But it looks like the problem is not solved; it's still the same. I reinstalled pfELK again to be sure the default settings are fine. Reinstalling is no big deal; I just roll back to my snapshot (OS only) and reinstall within 5 min with the installer script 👍
[2021-01-03T12:26:04,107][WARN ][logstash.outputs.elasticsearch][main][302f438be8fd4126d75c9c599eedbe539e84453b5e85ea4e2d83a750ad58e75e] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"pfelk-haproxy", :routing=>nil, :_type=>"_doc"}, #<LogStash::Event:0x2fc054e8>], :response=>{"index"=>{"_index"=>"pfelk-haproxy-000001", "_type"=>"_doc", "_id"=>"Hk42yHYB98Nn5pi0M_HK", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [haproxy.timestamp] of type [date] in document with id 'Hk42yHYB98Nn5pi0M_HK'. Preview of field's value: '03/Jan/2021:13:26:03.745'", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"failed to parse date field [03/Jan/2021:13:26:03.745] with format [strict_date_optional_time||epoch_millis]", "caused_by"=>{"type"=>"date_time_parse_exception", "reason"=>"Failed to parse with all enclosed parsers"}}}}}}
[2021-01-03T12:26:04,907][WARN ][logstash.outputs.elasticsearch][main][302f438be8fd4126d75c9c599eedbe539e84453b5e85ea4e2d83a750ad58e75e] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"pfelk-haproxy", :routing=>nil, :_type=>"_doc"}, #<LogStash::Event:0x4fbbe686>], :response=>{"index"=>{"_index"=>"pfelk-haproxy-000001", "_type"=>"_doc", "_id"=>"J042yHYB98Nn5pi0NvHq", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [haproxy.timestamp] of type [date] in document with id 'J042yHYB98Nn5pi0NvHq'. Preview of field's value: '03/Jan/2021:13:26:04.649'", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"failed to parse date field [03/Jan/2021:13:26:04.649] with format [strict_date_optional_time||epoch_millis]", "caused_by"=>{"type"=>"date_time_parse_exception", "reason"=>"Failed to parse with all enclosed parsers"}}}}}}
[2021-01-03T12:26:14,938][WARN ][logstash.outputs.elasticsearch][main][302f438be8fd4126d75c9c599eedbe539e84453b5e85ea4e2d83a750ad58e75e] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"pfelk-haproxy", :routing=>nil, :_type=>"_doc"}, #<LogStash::Event:0x20c56316>], :response=>{"index"=>{"_index"=>"pfelk-haproxy-000001", "_type"=>"_doc", "_id"=>"Q042yHYB98Nn5pi0XvEW", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [haproxy.timestamp] of type [date] in document with id 'Q042yHYB98Nn5pi0XvEW'. Preview of field's value: '03/Jan/2021:13:26:14.538'", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"failed to parse date field [03/Jan/2021:13:26:14.538] with format [strict_date_optional_time||epoch_millis]", "caused_by"=>{"type"=>"date_time_parse_exception", "reason"=>"Failed to parse with all enclosed parsers"}}}}}}
Here are the settings:
I am at a loss here, because the timestamp format should fit.
Yikes! Alright... I'll devise an alternate method.
@BeNeDeLuX - Alright... I went through everything and believe I found the issue: the haproxy.timestamp field did not contain the specific date/time format, although the component template and haproxy template did. I set the specific field to recognize the date/time format, similar to our initial endeavor (#151).
I've updated the template and script... give it another go, and thanks for your help.
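For context, the fix pins the format on the haproxy.timestamp field itself rather than relying on dynamic date detection; a sketch of the mapping (the exact format string in the repo may differ, and the millisecond separator must match the dot in the log samples above):

"timestamp": {
  "type": "date",
  "format": "dd/MMM/yyyy:HH:mm:ss.SSS"
}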
Issue should be resolved...reopen otherwise.
NOTE: the dashboard and visualizations require updating due to the ECS updates for haproxy.
Describe the bug
The latest index templates broke the dashboards due to mapping and field-name changes. These changes were necessary because the legacy index templates are deprecated. Additionally, the changes leverage ECS formatting.
Installation method (manual and script):
Additional context
Custom index pattern IDs: id-firewall, id-haproxy, id-snort, id-suricata, id-unbound, id-dhcp, id-squid