Closed: SaiKiran449 closed this issue 3 years ago
@SaiKiran449 - thanks for the inquiry!
The squid portion was built using OPNsense, which sends the logs in JSON format and thus doesn't require a grok pattern. However, I see that you are using pfSense, which I believe may be sending the squid logs in a different format.
Are you able to provide your setup steps and options? Additionally, if you don't mind, are you able to send squid logs in their raw format (sanitized, i.e. replace IPs with non-routable addresses)? I'll be happy to amend the pattern to support pfSense's output.
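The JSON-versus-raw distinction above can be sketched roughly as follows. This is a minimal illustration in Python, assuming a hypothetical parse_squid_event helper; it is not pfelk's actual Logstash pipeline, just a way to show the two shapes of input:

```python
import json

# Hypothetical helper (not part of pfelk): accept either the JSON events
# that OPNsense emits or the raw, space-delimited native access-log lines
# that pfSense's squid package sends, and return a dict of fields.
def parse_squid_event(payload: str) -> dict:
    try:
        event = json.loads(payload)
        if isinstance(event, dict):   # OPNsense ships JSON objects directly
            return event
    except json.JSONDecodeError:
        pass
    # Fall back to squid's native access-log layout:
    # time elapsed client action/code bytes method url user peer/host type
    keys = ["timestamp", "elapsed_ms", "client_ip", "action_code",
            "bytes", "method", "url", "user", "peer", "content_type"]
    return dict(zip(keys, payload.split()))
```

In the real pipeline the same branching happens in Logstash (a json filter for OPNsense, a grok filter for pfSense), but the field split is the same idea.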
Hi @a3ilson,
Thanks for the insights!
Here are the steps to setup the Squid in pfSense:
Additionally:
System -> Cert. Manager
- I used a certificate for the squid HTTPS proxy configuration.
Endpoint side:
And then I followed the steps as mentioned in the previous reproduction steps.
Please find the Squid logs attached below:
<14>Jan 8 13:32:40 (squid-1): 1610112760.309 13 10.0.0.13 TCP_MISS/200 10478 GET http://10.0.0.13:5601/node_modules/@kbn/ui-framework/dist/kui_light.css - HIER_DIRECT/10.0.0.13 text/css
<14>Jan 8 13:32:37 (squid-1): 1610112757.945 221 10.0.0.13 TCP_MISS/426 689 GET https://alive.github.com/_sockets/u/65446107/ws? - HIER_DIRECT/140.82.114.25 text/plain
<14>Jan 8 13:32:37 (squid-1): 1610112757.718 663 10.0.0.13 NONE/200 0 CONNECT alive.github.com:443 - HIER_DIRECT/140.82.114.25 -
<14>Jan 8 13:32:04 (squid-1): 1610112724.040 249 10.0.0.13 TCP_MISS/426 689 GET https://alive.github.com/_sockets/u/65446107/ws? - HIER_DIRECT/140.82.114.25 text/plain
<14>Jan 8 13:13:22 (squid-1): 1610111602.753 345 10.0.0.13 TCP_MISS/200 1106 POST http://ocsp.sca1b.amazontrust.com/ - HIER_DIRECT/13.249.226.51 application/ocsp-response
<14>Jan 8 13:13:29 (squid-1): 1610111609.179 5 10.0.0.13 TCP_MISS/200 1394 POST http://10.0.0.13:5601/api/saved_objects/_bulk_get - HIER_DIRECT/10.0.0.13 application/json
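As a side note, the leading number in each squid entry above (e.g. 1610112760.309) is a Unix epoch timestamp with millisecond precision; a quick, hypothetical Python check confirms it lines up with the time in the syslog header when rendered in UTC:

```python
from datetime import datetime, timezone

# 1610112760.309 is the first field of the first log line above;
# converting it to UTC should match the "<14>Jan 8 13:32:40" header.
ts = 1610112760.309
dt = datetime.fromtimestamp(ts, tz=timezone.utc)
print(dt.strftime("%Y-%m-%d %H:%M:%S"))  # → 2021-01-08 13:32:40
```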
Also, see if you can use the grok pattern below for the Squid logs (note the escaped dot between the seconds and milliseconds of the timestamp):
%{POSINT:timestamp}\.%{POSINT:timestamp_ms}\s+%{NUMBER:response_time} %{IPORHOST:src_ip} %{WORD:squid_request_status}/%{NUMBER:http_status_code} %{NUMBER:reply_size_include_header} %{WORD:http_method} %{NOTSPACE:request_url} %{NOTSPACE:user} %{WORD:squid}/%{IP:server_ip} %{NOTSPACE:content_type}
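For a quick offline check, the grok pattern above can be approximated as a Python regex (POSINT → \d+, WORD → \w+, NOTSPACE → \S+, with IP/IPORHOST loosened to \S+ since the peer field can also be "-"). This is a rough sketch for validating the field layout against one of the sample lines (syslog header stripped), not a grok engine:

```python
import re

# Named groups mirror the field names used in the grok pattern above.
SQUID = re.compile(
    r"(?P<timestamp>\d+)\.(?P<timestamp_ms>\d+)\s+(?P<response_time>\d+) "
    r"(?P<src_ip>\S+) (?P<squid_request_status>\w+)/(?P<http_status_code>\d+) "
    r"(?P<reply_size_include_header>\d+) (?P<http_method>\w+) "
    r"(?P<request_url>\S+) (?P<user>\S+) (?P<squid>\w+)/(?P<server_ip>\S+) "
    r"(?P<content_type>\S+)"
)

# One of the sample log lines, with the "<14>Jan 8 ... (squid-1): " prefix removed.
line = ("1610111609.179 5 10.0.0.13 TCP_MISS/200 1394 POST "
        "http://10.0.0.13:5601/api/saved_objects/_bulk_get - "
        "HIER_DIRECT/10.0.0.13 application/json")
m = SQUID.match(line)
print(m.group("http_status_code"), m.group("http_method"))  # → 200 POST
```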
Let me know if you need any further information :)
This is perfect!
Based on your logs, it appears the squid logs from pfSense are not sent in JSON format. Let me incorporate this into the grok pattern and, if you don't mind, I will likely have you implement and validate that it works.
Sure. I will validate it.
Almost done with the grok pattern... I'm conforming it to ECS, and I'll explain why in a subsequent message. However, I was curious about the syntax of the squid log file(s) located on pfSense, and wondering if syslog-ng would allow sending squid logs in JSON?
I'll need to get an instance of pfSense 2.5.0 up to check the new output options too... which, so far, will break a number of things within this project until we account for them.
@SaiKiran449 - Alright... I've amended and added support for squid via pfSense. Please make the following changes and let me know if it works. The dashboard may not be fully populated, as I noted various differences between the OPNsense (JSON) output and the output of pfSense. However, we can improve that once we've verified that the squid pattern is parsing.
Once those are amended, rebuild/restart.
@a3ilson Thanks. It worked perfectly. I can see the logs being parsed and visualized in the dashboard.
Great! Let it run for a bit...but I suspect that there may be a few visualizations that will need tweaking. Just provide a sample of raw squid logs and a screenshot of fields being parsed via Discover (reference below):
As mentioned previously, I'm working to further enrich this dataset, as seen in the following screenshot (note the added fields url.meta and user_agent, denoted with yellow triangles, providing additional insight/details).
Here are the logs:
<14>Jan 8 17:30:35 (squid-1): 1610127035.894 635 10.0.0.13 NONE/200 0 CONNECT capi.grammarly.com:443 - HIER_DIRECT/75.2.53.94 -
<14>Jan 8 17:32:21 (squid-1): 1610127141.338 220 10.0.0.13 TCP_MISS/426 689 GET https://alive.github.com/_sockets/u/65446107/ws? - HIER_DIRECT/140.82.114.26 text/plain
<14>Jan 8 17:32:20 (squid-1): 1610127140.382 894 10.0.0.13 TCP_MISS/200 435 POST http://10.0.0.13:5601/api/ui_metric/report - HIER_DIRECT/10.0.0.13 application/json
Please find the screenshots below:
Alright... it doesn't appear that pfSense provides the user.agent fields needed to populate the two visualizations towards the bottom. However, below is a quick video to get the upper right visualization working:
You may remove the other two elements and/or amend them to something else. I'll update the dashboard in the future with additional enrichments.
Perfect...can we close this issue?
Thanks for the help. It is working fine.
Yes, you can close this issue.
Issue resolved
Describe the bug
I have deployed pfelk through Docker containers. I am able to receive the data from my pfSense in Elasticsearch, and I can see the indices created in Kibana and the data being added to the index. However, I can't see the visualizations in the Squid dashboard.
To Reproduce
Steps to reproduce the behavior:
Screenshots
Operating System (please complete the following information):
- OS (printf "$(uname -srm)\n$(cat /etc/os-release)\n"): Linux 5.4.0-59-generic x86_64 NAME="Ubuntu" VERSION="18.04.5 LTS (Bionic Beaver)" ID=ubuntu ID_LIKE=debian PRETTY_NAME="Ubuntu 18.04.5 LTS" VERSION_ID="18.04" HOME_URL="https://www.ubuntu.com/" SUPPORT_URL="https://help.ubuntu.com/" BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/" PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy" VERSION_CODENAME=bionic UBUNTU_CODENAME=bionic
- Version of Docker (docker -v): Docker version 20.10.1, build 831ebea
- Version of Docker-Compose (docker-compose -v): docker-compose version 1.24.0, build 0aa59064

Elasticsearch, Logstash, Kibana (please complete the following information):
- Version of ELK (cat /docker-pfelk/.env): ELK_VERSION=7.10.1

Service logs:
- docker-compose logs pfelk01
- docker-compose logs pfelk02
es03 | {"type": "server", "timestamp": "2021-01-08T09:31:27,993Z", "level": "INFO", "component": "o.e.x.s.a.TokenService", "cluster.name": "es-docker-cluster", "node.name": "es03", "message": "refresh keys", "cluster.uuid": "XuOhpVCQ-4ePdZ68tELQ", "node.id": "u2PpmSdrSAmFsXOE7ZxWwg" }
es03 | {"type": "server", "timestamp": "2021-01-08T09:31:28,126Z", "level": "INFO", "component": "o.e.x.s.a.TokenService", "cluster.name": "es-docker-cluster", "node.name": "es03", "message": "refreshed keys", "cluster.uuid": "XuOhpVCQ-4ePdZ68tELQ", "node.id": "u2PpmSdrSAmFsXOE7ZxWwg" }
es03 | {"type": "server", "timestamp": "2021-01-08T09:31:28,199Z", "level": "INFO", "component": "o.e.c.s.ClusterApplierService", "cluster.name": "es-docker-cluster", "node.name": "es03", "message": "added {{es01}{5xp6gCnEQvG8Ez3j5X_6og}{ClUVqoGoREuh12Y7eees-Q}{172.18.0.3}{172.18.0.3:9300}{cdhilmrstw}{ml.machine_memory=16646402048, ml.max_open_jobs=20, xpack.installed=true, transform.node=true}}, term: 79, version: 1263, reason: ApplyCommitRequest{term=79, version=1263, sourceNode={es02}{3AmE-f8oSlepBeIuHRWaOQ}{UIdhHD5_TAOx_JkhP4rH6g}{172.18.0.4}{172.18.0.4:9300}{cdhilmrstw}{ml.machine_memory=16646402048, ml.max_open_jobs=20, xpack.installed=true, transform.node=true}}", "cluster.uuid": "XuOhpVCQ-4ePdZ68tELQ", "node.id": "u2PpmSdrSAmFsXOE7ZxWwg" }
logstash | [INFO ] 2021-01-08 09:31:40.606 [[main]<udp] udp - UDP listener started {:address=>"0.0.0.0:5190", :receive_buffer_bytes=>"106496", :queue_size=>"2000"}
logstash | [INFO ] 2021-01-08 09:31:40.607 [[main]<udp] udp - UDP listener started {:address=>"0.0.0.0:5140", :receive_buffer_bytes=>"106496", :queue_size=>"2000"}
logstash | [INFO ] 2021-01-08 09:31:40.610 [[main]<udp] udp - UDP listener started {:address=>"0.0.0.0:5141", :receive_buffer_bytes=>"106496", :queue_size=>"2000"}
logstash | [INFO ] 2021-01-08 09:31:40.653 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
kibana | {"type":"log","@timestamp":"2021-01-08T09:31:32Z","tags":["info","plugins","watcher"],"pid":10,"message":"Your basic license does not support watcher. Please upgrade your license."}
kibana | {"type":"log","@timestamp":"2021-01-08T09:31:32Z","tags":["info","plugins","monitoring","monitoring","kibana-monitoring"],"pid":10,"message":"Starting monitoring stats collection"}
kibana | {"type":"log","@timestamp":"2021-01-08T09:31:33Z","tags":["listening","info"],"pid":10,"message":"Server running at http://0:5601"}
kibana | {"type":"log","@timestamp":"2021-01-08T09:31:34Z","tags":["info","http","server","Kibana"],"pid":10,"message":"http server running at http://0:5601"}
kibana | {"type":"log","@timestamp":"2021-01-08T09:31:34Z","tags":["error","elasticsearch","data"],"pid":10,"message":"[version_conflict_engine_exception]: [task:endpoint:user-artifact-packager:1.0.0]: version conflict, document already exists (current version [9190])"}
kibana | {"type":"log","@timestamp":"2021-01-08T09:31:34Z","tags":["error","elasticsearch","data"],"pid":10,"message":"[version_conflict_engine_exception]: [task:Lens-lens_telemetry]: version conflict, document already exists (current version [16])"}
kibana | {"type":"log","@timestamp":"2021-01-08T09:31:34Z","tags":["error","elasticsearch","data"],"pid":10,"message":"[version_conflict_engine_exception]: [task:Actions-actions_telemetry]: version conflict, document already exists (current version [16])"}
kibana | {"type":"log","@timestamp":"2021-01-08T09:31:34Z","tags":["error","elasticsearch","data"],"pid":10,"message":"[version_conflict_engine_exception]: [task:apm-telemetry-task]: version conflict, document already exists (current version [22])"}
kibana | {"type":"log","@timestamp":"2021-01-08T09:31:34Z","tags":["error","elasticsearch","data"],"pid":10,"message":"[version_conflict_engine_exception]: [task:Alerting-alerting_telemetry]: version conflict, document already exists (current version [16])"}
kibana | {"type":"log","@timestamp":"2021-01-08T09:31:35Z","tags":["warning","plugins","reporting"],"pid":10,"message":"Enabling the Chromium sandbox provides an additional layer of protection."}