haka-security / hakabana

Hakabana monitoring tool using Haka, ElasticSearch and Kibana
http://haka-security.org
Mozilla Public License 2.0

Memory error #6

Open bhennigar opened 9 years ago

bhennigar commented 9 years ago

I'm not sure whether I should be posting this here, in the haka repo, or with the Lua project, but since I'm using hakabana I'll ask here.

The haka service dies at random and has to be restarted. It can stay up for a few minutes or over an hour but eventually stops.

Errors:

lua: filter: not enough memory
lua: lua panic: not enough memory
core: unload module 'Syslog alert'

The server has 32 GB of RAM and memory usage rarely goes over 50%.

System:
Ubuntu 14.04 server 64-bit
Haka 0.2.2 amd64
hakabana 0.2.1 amd64
haka-doc 0.2.2 amd64
haka-elasticsearch 0.2.2 amd64
ElasticSearch 1.4.2
haka-geoip 0.2.2 amd64
Kibana 3.1.2

paulfariello commented 9 years ago

Hi bhennigar,

Thanks for reporting the issue. We are going to run a bunch of tests to see if we can reproduce it. Could you give us more information about your haka configuration? Which modules and dissectors are loaded, and if possible, could you share any specific rules you have written?

bhennigar commented 9 years ago

Hi. This is a very basic/clean install, with no customization other than setting two interfaces (eth1, eth2). Modules: syslog logger, syslog alert, pcap module.
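
For reference, the relevant haka.conf would look roughly like the following (a sketch based on the stock Haka configuration file; the exact section and key names are assumptions reproduced from memory):

[general]
# Hakabana rule file loaded at startup (matches the log below)
configuration = "/usr/share/haka/hakabana/config.lua"

[packet]
# Capture module and interfaces
module = "packet/pcap"
interfaces = "eth1, eth2"
# The startup log shows pass-through mode is enabled
pass-through = yes

[log]
module = "log/syslog"

[alert]
module = "alert/syslog"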

I haven't written any rules. Startup output:

info  core: load module 'log/syslog.so', Syslog logger
info  core: load module 'alert/syslog.so', Syslog alert
info  core: load module 'packet/pcap.so', Pcap Module
info  core: setting packet mode to pass-through

info  core: loading rule file '/usr/share/haka/hakabana/config.lua'
info  core: initializing thread 0
info  dissector: register new dissector 'raw'
info  pcap:      listening on device eth1
info  pcap:      listening on device eth2
info  dissector: register new dissector 'ipv4'
info  dissector: register new dissector 'tcp'
info  dissector: register new dissector 'tcp_connection'
info  dissector: register new dissector 'icmp'
info  dissector: register new dissector 'udp'
info  dissector: register new dissector 'udp_connection'
info  dissector: register new dissector 'http'
info  dissector: register new dissector 'dns'
info  core:      1 rule(s) on event 'tcp:receive_packet'
info  core:      1 rule(s) on event 'udp_connection:end_connection'
info  core:      1 rule(s) on event 'http:request'
info  core:      1 rule(s) on event 'udp:receive_packet'
info  core:      1 rule(s) on event 'http:response'
info  core:      1 rule(s) on event 'started'
info  core:      1 rule(s) on event 'tcp_connection:end_connection'
info  core:      1 rule(s) on event 'raw:receive_packet'
info  core:      2 rule(s) on event 'udp_connection:new_connection'
info  core:      3 rule(s) on event 'tcp_connection:new_connection'
info  core:      1 rule(s) on event 'raw:send_packet'
info  core:      1 rule(s) on event 'icmp:receive_packet'
info  core:      1 rule(s) on event 'tcp_connection:receive_packet'
info  core:      1 rule(s) on event 'udp_connection:receive_packet'
info  core:      1 rule(s) on event 'dns:query'
info  core:      1 rule(s) on event 'ipv4:receive_packet'
info  core:      19 rule(s) registered

info  core:      starting single threaded processing

info  core:      switch to background

paulfariello commented 9 years ago

We have not found any memory leak with our current tools.

Nevertheless, we have some leads, but we need more information.

It may be a problem with unclosed TCP connections. Could you tell us how many connections you are handling?

# hakactl console
> tcp.connections()
...

This should give you a list of the currently open connections.

I can see that you are listening on 2 interfaces with packet/pcap. This can cause issues with duplicated packets and therefore unclosed TCP connections. Could you try with only one interface?
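
For the single-interface test, it should be enough to trim the capture section of haka.conf down to one device, roughly like this (key names assumed from the stock configuration file):

[packet]
module = "packet/pcap"
interfaces = "eth1"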

There are known memory limits with LuaJIT, especially when built for 64-bit. Could you try using a 32-bit build of Haka?
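
For background, LuaJIT's garbage collector on 64-bit platforms can only allocate from a limited region of low address space, so Lua can report "not enough memory" long before the 32 GB of system RAM is exhausted. If the console accepts plain Lua expressions (an assumption on our side), you could also watch the Lua heap grow with the standard collectgarbage function:

# hakactl console
> collectgarbage("count")    -- current Lua heap size, in kilobytes
> collectgarbage("collect")  -- force a full collection, then check the count again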