pfelk / docker

Deploy pfelk with docker-compose
Apache License 2.0

No visualizations in Squid Dashboard #23

Closed SaiKiran449 closed 3 years ago

SaiKiran449 commented 3 years ago

**Describe the bug**
I have deployed pfelk via Docker containers. I am able to receive data from my pfSense instance in Elasticsearch, and I can see the indices created in Kibana with data being added to them. However, I can't see any visualizations in the Squid dashboard.

**To Reproduce**
Steps to reproduce the behavior:

  1. Install pfelk with docker
  2. Install the templates (component and index templates)
  3. Set the logstash UDP address([IP]:5140) as remote log server in pfsense
  4. Import the Dashboards into Kibana
  5. Open the Squid Dashboard
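As a quick sanity check for step 3, a syslog-style datagram can be pushed at the Logstash UDP input by hand. This is a minimal sketch, not part of pfelk; the host address is an assumption (substitute your Logstash address) and the message body is a made-up test line:

```python
import socket

# Assumed Logstash address -- replace with the [IP]:5140 used in step 3.
LOGSTASH_HOST = "127.0.0.1"
LOGSTASH_PORT = 5140

# RFC 3164 priority: facility user-level (1), severity informational (6).
facility, severity = 1, 6
pri = facility * 8 + severity  # -> 14, the "<14>" prefix seen on squid lines

# Illustrative test payload mimicking the squid syslog framing.
datagram = f"<{pri}>Jan  8 13:32:40 (squid-1): pfelk connectivity test".encode()

# UDP is fire-and-forget: a successful send only proves the packet left,
# so confirm arrival in the logstash container logs or in Kibana Discover.
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.sendto(datagram, (LOGSTASH_HOST, LOGSTASH_PORT))
```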

**Screenshots**
Screenshot from 2021-01-08 15-14-51
Screenshot from 2021-01-08 15-15-07
Screenshot from 2021-01-08 15-14-06

**Operating System** (please complete the following information):

**Elasticsearch, Logstash, Kibana** (please complete the following information):

 - `docker-compose logs pfelk03`

es03 | {"type": "server", "timestamp": "2021-01-08T09:31:27,993Z", "level": "INFO", "component": "o.e.x.s.a.TokenService", "cluster.name": "es-docker-cluster", "node.name": "es03", "message": "refresh keys", "cluster.uuid": "XuOhpVCQ-4ePdZ68tELQ", "node.id": "u2PpmSdrSAmFsXOE7ZxWwg" }
es03 | {"type": "server", "timestamp": "2021-01-08T09:31:28,126Z", "level": "INFO", "component": "o.e.x.s.a.TokenService", "cluster.name": "es-docker-cluster", "node.name": "es03", "message": "refreshed keys", "cluster.uuid": "XuOhpVCQ-4ePdZ68tELQ", "node.id": "u2PpmSdrSAmFsXOE7ZxWwg" }
es03 | {"type": "server", "timestamp": "2021-01-08T09:31:28,199Z", "level": "INFO", "component": "o.e.c.s.ClusterApplierService", "cluster.name": "es-docker-cluster", "node.name": "es03", "message": "added {{es01}{5xp6gCnEQvG8Ez3j5X_6og}{ClUVqoGoREuh12Y7eees-Q}{172.18.0.3}{172.18.0.3:9300}{cdhilmrstw}{ml.machine_memory=16646402048, ml.max_open_jobs=20, xpack.installed=true, transform.node=true}}, term: 79, version: 1263, reason: ApplyCommitRequest{term=79, version=1263, sourceNode={es02}{3AmE-f8oSlepBeIuHRWaOQ}{UIdhHD5_TAOx_JkhP4rH6g}{172.18.0.4}{172.18.0.4:9300}{cdhilmrstw}{ml.machine_memory=16646402048, ml.max_open_jobs=20, xpack.installed=true, transform.node=true}}", "cluster.uuid": "XuOhpVCQ-4ePdZ68tELQ", "node.id": "u2PpmSdrSAmFsXOE7ZxWwg" }

 - `docker-compose logs logstash`

logstash | [INFO ] 2021-01-08 09:31:40.606 [[main]<udp] udp - UDP listener started {:address=>"0.0.0.0:5190", :receive_buffer_bytes=>"106496", :queue_size=>"2000"}
logstash | [INFO ] 2021-01-08 09:31:40.607 [[main]<udp] udp - UDP listener started {:address=>"0.0.0.0:5140", :receive_buffer_bytes=>"106496", :queue_size=>"2000"}
logstash | [INFO ] 2021-01-08 09:31:40.610 [[main]<udp] udp - UDP listener started {:address=>"0.0.0.0:5141", :receive_buffer_bytes=>"106496", :queue_size=>"2000"}
logstash | [INFO ] 2021-01-08 09:31:40.653 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}


 - `docker-compose logs kibana`

kibana | {"type":"log","@timestamp":"2021-01-08T09:31:32Z","tags":["info","plugins","watcher"],"pid":10,"message":"Your basic license does not support watcher. Please upgrade your license."}
kibana | {"type":"log","@timestamp":"2021-01-08T09:31:32Z","tags":["info","plugins","monitoring","monitoring","kibana-monitoring"],"pid":10,"message":"Starting monitoring stats collection"}
kibana | {"type":"log","@timestamp":"2021-01-08T09:31:33Z","tags":["listening","info"],"pid":10,"message":"Server running at http://0:5601"}
kibana | {"type":"log","@timestamp":"2021-01-08T09:31:34Z","tags":["info","http","server","Kibana"],"pid":10,"message":"http server running at http://0:5601"}
kibana | {"type":"log","@timestamp":"2021-01-08T09:31:34Z","tags":["error","elasticsearch","data"],"pid":10,"message":"[version_conflict_engine_exception]: [task:endpoint:user-artifact-packager:1.0.0]: version conflict, document already exists (current version [9190])"}
kibana | {"type":"log","@timestamp":"2021-01-08T09:31:34Z","tags":["error","elasticsearch","data"],"pid":10,"message":"[version_conflict_engine_exception]: [task:Lens-lens_telemetry]: version conflict, document already exists (current version [16])"}
kibana | {"type":"log","@timestamp":"2021-01-08T09:31:34Z","tags":["error","elasticsearch","data"],"pid":10,"message":"[version_conflict_engine_exception]: [task:Actions-actions_telemetry]: version conflict, document already exists (current version [16])"}
kibana | {"type":"log","@timestamp":"2021-01-08T09:31:34Z","tags":["error","elasticsearch","data"],"pid":10,"message":"[version_conflict_engine_exception]: [task:apm-telemetry-task]: version conflict, document already exists (current version [22])"}
kibana | {"type":"log","@timestamp":"2021-01-08T09:31:34Z","tags":["error","elasticsearch","data"],"pid":10,"message":"[version_conflict_engine_exception]: [task:Alerting-alerting_telemetry]: version conflict, document already exists (current version [16])"}
kibana | {"type":"log","@timestamp":"2021-01-08T09:31:35Z","tags":["warning","plugins","reporting"],"pid":10,"message":"Enabling the Chromium sandbox provides an additional layer of protection."}



**Additional context**
I see that the grok pattern for the **Squid** logs is missing in the `pfelk.grok` file.

a3ilson commented 3 years ago

@SaiKiran449 - thanks for the inquiry!

The Squid portion was built utilizing OPNsense. Those logs are sent in JSON format, thus not requiring a grok pattern. However, I see that you are using pfSense, which I believe may be sending the Squid logs differently.

Are you able to provide setup steps and options? Additionally, if you don't mind, are you able to send Squid logs in their raw format (sanitized, i.e. with IPs replaced by non-routable addresses)? I'll be happy to amend the pattern to support pfSense's output.
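One way to do that sanitization is to rewrite every IPv4 address to a fixed non-routable placeholder before sharing. A minimal sketch (the `sanitize` helper and the TEST-NET-1 placeholder are illustrative, not part of pfelk):

```python
import re

# Match dotted-quad IPv4 addresses; \b keeps it from firing inside
# epoch timestamps like 1610112760.309 (no word boundary mid-number).
IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def sanitize(line: str, placeholder: str = "192.0.2.1") -> str:
    """Replace all IPv4 addresses in a log line with a TEST-NET-1 placeholder."""
    return IPV4.sub(placeholder, line)

line = ("<14>Jan  8 13:32:40 (squid-1): 1610112760.309     13 10.0.0.13 "
        "TCP_MISS/200 10478 GET http://10.0.0.13:5601/app.css - "
        "HIER_DIRECT/10.0.0.13 text/css")
print(sanitize(line))
```

Hostnames and IPv6 addresses would need extra handling, but for the dotted-quad output above this is enough to share logs safely.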

SaiKiran449 commented 3 years ago

Hi @a3ilson,

Thanks for the insights!

Here are the steps to set up Squid in pfSense:

Additionally:

Endpoint side:

Then I followed the reproduction steps mentioned previously.

Please find the attached Squid logs below:

<14>Jan  8 13:32:40 (squid-1): 1610112760.309     13 10.0.0.13 TCP_MISS/200 10478 GET http://10.0.0.13:5601/node_modules/@kbn/ui-framework/dist/kui_light.css - HIER_DIRECT/10.0.0.13 text/css
<14>Jan  8 13:32:37 (squid-1): 1610112757.945    221 10.0.0.13 TCP_MISS/426 689 GET https://alive.github.com/_sockets/u/65446107/ws? - HIER_DIRECT/140.82.114.25 text/plain
<14>Jan  8 13:32:37 (squid-1): 1610112757.718    663 10.0.0.13 NONE/200 0 CONNECT alive.github.com:443 - HIER_DIRECT/140.82.114.25 -
<14>Jan  8 13:32:04 (squid-1): 1610112724.040    249 10.0.0.13 TCP_MISS/426 689 GET https://alive.github.com/_sockets/u/65446107/ws? - HIER_DIRECT/140.82.114.25 text/plain
<14>Jan  8 13:13:22 (squid-1): 1610111602.753    345 10.0.0.13 TCP_MISS/200 1106 POST http://ocsp.sca1b.amazontrust.com/ - HIER_DIRECT/13.249.226.51 application/ocsp-response
<14>Jan  8 13:13:29 (squid-1): 1610111609.179      5 10.0.0.13 TCP_MISS/200 1394 POST http://10.0.0.13:5601/api/saved_objects/_bulk_get - HIER_DIRECT/10.0.0.13 application/json

Also, see whether you can use the grok pattern below for the Squid logs.

%{POSINT:timestamp}.%{POSINT:timestamp_ms}\s+%{NUMBER:response_time} %{IPORHOST:src_ip} %{WORD:squid_request_status}/%{NUMBER:http_status_code} %{NUMBER:reply_size_include_header} %{WORD:http_method} %{NOTSPACE:request_url} %{NOTSPACE:user} %{WORD:squid}/%{IP:server_ip} %{NOTSPACE:content_type}
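For anyone wanting to try the pattern outside Logstash, here is a rough Python translation run against the first sample line above. It is only a sketch: the named groups use simplified stand-ins for grok's POSINT/NUMBER/IPORHOST/NOTSPACE classes, so it is looser than the real grok library patterns.

```python
import re

# Approximate regex equivalent of the proposed grok pattern for squid
# access logs (simplified character classes, illustrative only).
SQUID_RE = re.compile(
    r"(?P<timestamp>\d+)\.(?P<timestamp_ms>\d+)\s+"
    r"(?P<response_time>\d+) "
    r"(?P<src_ip>\S+) "
    r"(?P<squid_request_status>\w+)/(?P<http_status_code>\d+) "
    r"(?P<reply_size_include_header>\d+) "
    r"(?P<http_method>\w+) "
    r"(?P<request_url>\S+) "
    r"(?P<user>\S+) "
    r"(?P<squid>\w+)/(?P<server_ip>\S+) "
    r"(?P<content_type>\S+)"
)

# First sample line from above, with the syslog header already stripped.
sample = ("1610112760.309     13 10.0.0.13 TCP_MISS/200 10478 GET "
          "http://10.0.0.13:5601/node_modules/@kbn/ui-framework/dist/kui_light.css "
          "- HIER_DIRECT/10.0.0.13 text/css")

m = SQUID_RE.match(sample)
fields = m.groupdict()
print(fields["src_ip"], fields["squid_request_status"], fields["http_status_code"])
# -> 10.0.0.13 TCP_MISS 200
```

The CONNECT lines also parse: their empty content type comes through as the literal `-`, which `NOTSPACE`/`\S+` accepts.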

Let me know if you need any further information :)

a3ilson commented 3 years ago

This is perfect!

Based on your logs, it appears the Squid logs from pfSense are not sent in JSON format. Let me incorporate this into the grok pattern, and if you don't mind, I will likely have you implement and validate that it works.

SaiKiran449 commented 3 years ago

Sure. I will validate it.

a3ilson commented 3 years ago

Almost done with the grok pattern... I'm conforming it to ECS, and I'll highlight why in a subsequent message. However, I was curious about the syntax of the squid log file(s) located on pfSense, and I'm wondering whether syslog-ng would allow sending the squid logs in JSON.

I'll need to get an instance of pfSense 2.5.0 up to check the new output options too... which, so far, will break a number of things within this project until we account for them.

a3ilson commented 3 years ago

@SaiKiran449 - Alright... I've amended the pattern and added support for Squid via pfSense. Please make the following changes and let me know if it works. The dashboard may not be fully populated, as I noted various differences between the OPNsense (JSON) output and the pfSense output. However, we can improve that once we've verified that the Squid pattern is parsing.

Required Amendments:

Once those are amended, rebuild/restart.

SaiKiran449 commented 3 years ago

@a3ilson Thanks. It worked perfectly. I can see the logs being parsed and visualized in the dashboard.


a3ilson commented 3 years ago

Great! Let it run for a bit... but I suspect there may be a few visualizations that will need tweaking. Just provide a sample of raw squid logs and a screenshot of the fields being parsed via Discover (reference below):

(Screenshot: Discover view, 2021-01-08 at 12:20:50)

As mentioned previously, I'm working to further enrich this dataset, as seen in the following screenshot (note the added fields denoted with yellow triangles, `url.meta` and `user_agent`, which provide additional insight/details).

(Screenshot: enriched fields, 2021-01-08 at 12:18:48)

SaiKiran449 commented 3 years ago

Here are the logs:

<14>Jan  8 17:30:35 (squid-1): 1610127035.894    635 10.0.0.13 NONE/200 0 CONNECT capi.grammarly.com:443 - HIER_DIRECT/75.2.53.94 -
<14>Jan  8 17:32:21 (squid-1): 1610127141.338    220 10.0.0.13 TCP_MISS/426 689 GET https://alive.github.com/_sockets/u/65446107/ws? - HIER_DIRECT/140.82.114.26 text/plain
<14>Jan  8 17:32:20 (squid-1): 1610127140.382    894 10.0.0.13 TCP_MISS/200 435 POST http://10.0.0.13:5601/api/ui_metric/report - HIER_DIRECT/10.0.0.13 application/json

Please find the screenshots below:

a3ilson commented 3 years ago

Alright... it doesn't appear that pfSense provides the `user.agent` fields needed to populate the two visualizations towards the bottom. However, below is a quick video showing how to get the upper-right visualization working:

https://user-images.githubusercontent.com/16884679/104048066-ac323e80-51b0-11eb-9b0d-080004813f2e.mov

You may remove the other two elements and/or amend them to something else. I'll update the dashboard in the future with additional enrichments.

a3ilson commented 3 years ago

Perfect...can we close this issue?

SaiKiran449 commented 3 years ago

> However, below is a quick video to get the upper right visualization working:

Thanks for the help. It is working fine.

Yes, you can close this issue.

a3ilson commented 3 years ago

Issue resolved.