I cannot reproduce this on Mac/OS X or Ubuntu. Does it index any unified2 records into ElasticSearch, or does it just stop immediately?
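A quick way to check how many events actually made it into ElasticSearch is a count query. This is just a sketch: the host and the index pattern (unifiedbeat-*) are guesses, so use whatever you configured in unifiedbeat.yml:
curl -s 'http://localhost:9200/unifiedbeat-*/_count?pretty'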
Can you provide more details, like the output of:
cat /proc/meminfo
and so on. Did you compile it from source or use the binary release? You said you were on CentOS, so I would try compiling it from source on that OS. Be sure you build it using Go 1.6, and check the supported goos and goarch values.
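Roughly what I mean, as a sketch (run from wherever you have the unifiedbeat source checked out; adjust paths as needed):
go version          # should report go1.6.x
go env GOOS GOARCH  # expect linux and amd64 on a 64-bit CentOS 7 VM
go build            # rebuild the unifiedbeat binary from source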
It does index unified2 records. The beat just seems to be very hungry for memory.
I'm running this on a CentOS 7 machine in Azure. It's an A2 VM with 2 CPUs and 3.5 GB of memory.
[root@QAH-VM-MON01 unifiedbeat]# cat /proc/meminfo
MemTotal: 3523796 kB
MemFree: 405484 kB
MemAvailable: 1078300 kB
Buffers: 48040 kB
Cached: 743376 kB
SwapCached: 0 kB
Active: 2356992 kB
Inactive: 420132 kB
Active(anon): 1999336 kB
Inactive(anon): 34592 kB
Active(file): 357656 kB
Inactive(file): 385540 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 1336 kB
Writeback: 0 kB
AnonPages: 1985804 kB
Mapped: 57536 kB
Shmem: 48220 kB
Slab: 218408 kB
SReclaimable: 183024 kB
SUnreclaim: 35384 kB
KernelStack: 9136 kB
PageTables: 38772 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 1761896 kB
Committed_AS: 4004964 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 56656 kB
VmallocChunk: 34359679476 kB
HardwareCorrupted: 0 kB
AnonHugePages: 915456 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 67520 kB
DirectMap2M: 3602432 kB
I did compile it from source, using the latest Go version. The binary release ran out of memory even faster.
Goos and goarch seem to be correct.
file unifiedbeat
unifiedbeat: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked (uses shared libs), not stripped
Even with 1.1 GB of free memory, the beat runs out of memory.
After you run it, can you provide these files:
I performed a test using a Security Onion (Ubuntu 14.04, I think) sensor, with ElasticSearch on another server (also Ubuntu), against the typical maximum-sized Snort unified2 log file (128 MB before it rolls over to a new file) ...
sudo rm .unifiedbeat ... so it doesn't resume where it left off last time
sudo ./unifiedbeat -c unifiedbeat.yml
It finished after about 8 minutes. It does take a while to index a 128 MB unified2 file, but typically you would only do this the first time you run unifiedbeat; after that it stays in sync with Snort/Suricata's log files in order to provide near real-time events in ElasticSearch:
ls -la /nsm/sensor_data/seconion-eth1/snort-1
-rw------- 1 sguil sguil 134215552 Jun 11 14:16 indexed_1465655244.snort.unified2.1465650596
-rw------- 1 sguil sguil 8442450 Jun 11 14:33 snort.unified2.1465654580
cat /var/log/unifiedbeat/unifiedbeat
. . .
2016-06-11T14:19:30Z INFO U2SpoolAndPublish: spooling and publishing...
2016-06-11T14:27:24Z INFO Indexed file: '/nsm/sensor_data/seconion-eth1/snort-1/snort.unified2.1465650596' renamed: '/nsm/sensor_data/seconion-eth1/snort-1/indexed_1465655244.snort.unified2.1465650596'
. . .
i.e. it renames the processed unified2 log file and moves on to the next (newest) file
Watching memory while running unifiedbeat shows it never uses more than about 300 MB:
ps aux | grep unifiedbeat
root 13878 0.0 0.0 92888 4916 pts/3 S+ 14:19 0:00 sudo ./unifiedbeat -c unifiedbeat.yml
root 13879 11.1 3.0 295760 249724 pts/3 Sl+ 14:19 1:41 ./unifiedbeat -c unifiedbeat.yml
or:
cat /proc/13879/status
. . .
VmPeak: 295760 kB
. . .
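If you want to watch its memory over time rather than take a single snapshot, something like this works (13879 is just the PID from my run; substitute whatever ps reports for yours):
watch -n 5 "grep -E 'VmPeak|VmRSS' /proc/13879/status"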
Most of the memory unifiedbeat holds is for loading the GeoIP2 database, as the unified2 records are read and processed in small batches (i.e. bulk_max_size). So while running it does not use very much memory.
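If you want to experiment with lowering the batch size, bulk_max_size lives in the elasticsearch output section of unifiedbeat.yml. A minimal sketch, assuming the standard libbeat output options (the host and the value 50 are placeholders; keep whatever hosts you already have):
output:
  elasticsearch:
    hosts: ["localhost:9200"]
    bulk_max_size: 50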
Perhaps your memory issue is from another process on your server ... are you running ElasticSearch on the same server as unifiedbeat? The Beats are intended as lightweight shippers to ElasticSearch, or more commonly to LogStash servers and then on to ElasticSearch. Unifiedbeat is intended to be used as a near real-time shipper directly to a remote ElasticSearch server. After all, you wouldn't want to bog down your Snort/Suricata sensor server with a heavy process like ElasticSearch, as you may drop important security events.
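A quick, generic way to see what is actually eating memory on that VM is to sort processes by resident size (nothing unifiedbeat-specific about this):
ps -eo rss,vsz,comm --sort=-rss | head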
I am closing this issue as I am unable to reproduce your out of memory exception.
My snort.log file is only 6.8 megabytes in size. Still getting out of memory errors. Is there a way to prevent those?