Pinging @elastic/security-external-integrations (Team:Security-External Integrations)
@StefanSa I have a feeling the issue is still that it cannot send the events to the correct pipeline.
Version 2.11 is fairly old, and I believe the reason you are seeing the UTM events parsed is that this version still had the old local processing for UTM; from 3.0 the UTM integration was completely rewritten, as it was an old experimental integration.
So the reason it might work for you for UTM is that the local agent is still doing the processing there.
It could be a variety of reasons, but I believe Logstash should be able to determine where each event should go using only data_stream: true.
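For reference, a minimal sketch of such a Logstash pipeline, assuming the standard elastic_agent input and elasticsearch output plugins (the host and credentials are placeholders reused from later in this thread, not a verified Security Onion config):

input {
  elastic_agent {
    # Port the Elastic Agents ship to.
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["https://172.16.34.36:9200"]
    user => "so_elastic"
    password => "secret"
    # Let each event land in its own data stream (e.g. logs-sophos.xg-default),
    # which in turn selects the integration's ingest pipeline.
    data_stream => "true"
  }
}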
I have the exact same problem, but I'm not sure if it's a Security Onion configuration issue or the integration itself.
Hi @derelict, as a temporary solution I installed the latest Filebeat version. This works without problems.
Did you just replace the Filebeat binary, or the whole Elastic Agent? Which version is working for you? In my case, just replacing the binary does not work.
Hi @derelict, I have installed a separate Filebeat (latest version) instance, so not with the Elastic Agent. This can run on the SO server itself or, ideally, on a separate ingest server like mine.
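A hypothetical way to set up such a standalone instance on a Debian-based ingest host (the version here is only an example matching the 8.10 docs linked below; adjust as needed):

# Download and install a standalone Filebeat package.
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-8.10.4-amd64.deb
sudo dpkg -i filebeat-8.10.4-amd64.deb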
/etc/filebeat/filebeat.yml
# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["172.16.34.36:9200"]
  allow_older_versions: true

  # Protocol - either `http` (default) or `https`.
  protocol: "https"
  ssl.enabled: true
  ssl.verification_mode: none

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  username: "so_elastic"
  password: "secret"

  # Route each event to the data stream matching its event.dataset,
  # e.g. logs-sophos.xg-default.
  indices:
    - index: "logs-%{[event.dataset]}-default"
      when.has_fields: ['event.dataset']
/etc/filebeat/modules.d/sophos.yml
# Module: sophos
# Docs: https://www.elastic.co/guide/en/beats/filebeat/8.10/filebeat-module-sophos.html

- module: sophos
  xg:
    enabled: true

    # Set which input to use between tcp, udp (default) or file.
    var.input: udp

    # The interface to listen to syslog traffic. Defaults to
    # localhost. Set to 0.0.0.0 to bind to all available interfaces.
    var.syslog_host: 0.0.0.0

    # The port to listen for syslog traffic. Defaults to 9004.
    var.syslog_port: 9005

    # Firewall default hostname.
    var.default_host_name: firewall.test.local

    # Known firewalls.
    var.known_devices:
      - serial_number: "12345678"
        hostname: "fwgate01.test.local"
      - serial_number: "87654321"
        hostname: "fwgate02.test.local"

  utm:
    enabled: true

    # Set which input to use between udp (default), tcp or file.
    var.input: tcp
    var.syslog_host: 0.0.0.0
    var.syslog_port: 9012

    # Set paths for the log files when file input is used.
    #var.paths:

    # Toggle output of non-ECS fields (default true).
    var.rsa_fields: true

    # Set custom timezone offset.
    # "local" (default) for system timezone.
    # "+02:00" for GMT+02:00
    #var.tz_offset: local
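One step that is easy to miss (standard Filebeat module workflow, not something stated explicitly in this thread): if the file still exists as sophos.yml.disabled, enable the module first so Filebeat picks the configuration up:

# Renames sophos.yml.disabled to sophos.yml in modules.d.
sudo filebeat modules enable sophos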
Before you start Filebeat with systemctl, you have to load the ingest pipelines onto the SO server:

filebeat setup --pipelines

Now the data should be transferred from the firewalls via Filebeat to the SO server.
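To confirm the pipelines actually arrived, you can list them through the ingest pipeline API. This is a sketch reusing the host and credentials from the filebeat.yml above; the wildcard pattern assumes Filebeat's default pipeline naming scheme:

# List the Sophos ingest pipelines installed by Filebeat.
curl -k -u so_elastic:secret "https://172.16.34.36:9200/_ingest/pipeline/filebeat-*-sophos-*"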
Ok. Cool. I will give it a try then. Thank you very much.
Closing as the newer version of the Sophos integration (v3.8.1) should address this issue. @derelict @StefanSa if you're still having issues, please feel free to re-open and we can investigate.
Hi there, I have a problem here which I do not quite understand. I have deployed the sophos.utm/xg (v2.11.0) integration here via Elastic Agent (v8.8.2). While sophos.utm parses the fields without problems, this does not happen with sophos.xg: only the message is transmitted, and the fields are not decoded. The interesting thing is that if I test the pipeline logs-sophos.xg-2.11.0 in Kibana with this message, all fields are decoded correctly (source.ip etc.). This almost looks like the Elastic Agent for sophos.xg is not selecting the correct pipeline. Does anyone have any idea about this behavior?
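For context, Elasticsearch picks the ingest pipeline from the data stream an Agent event is written to, and that data stream name is built from the event's data_stream fields. A correctly routed sophos.xg event should therefore carry fields like these (illustrative values):

data_stream:
  type: logs
  dataset: sophos.xg
  namespace: default
# -> written to "logs-sophos.xg-default", whose index template sets
#    "logs-sophos.xg-2.11.0" as the default ingest pipeline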
logstash output pipeline:
The transferred value, containing only the message:
Correct result from testing the pipeline:
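For anyone reproducing this, the same pipeline test can also be run outside Kibana with the simulate API (the pipeline name comes from this thread; the message value is a placeholder for a raw Sophos XG syslog line):

POST _ingest/pipeline/logs-sophos.xg-2.11.0/_simulate
{
  "docs": [
    {
      "_source": {
        "message": "<raw sophos.xg syslog line>"
      }
    }
  ]
}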