You could try to capture a pcap with the netflow traffic so I can take a look at it. The last 2 errors look like a bug. What device are you exporting netflow from?
Seeing the same issue with a Vyatta 5400; here's the related discussion:
https://discuss.elastic.co/t/netflow-support-for-brocade-vyatta-routers/83547
Here are my logs: http://dpaste.com/1YCGDTM
I wasn't able to find anything with tshark either, such as which fields are the culprit causing this issue.
Thanks for the help @jorritfolmer
It seems that the Juniper router that is giving those issues is not supported by your codec. I don't know if you are still interested in seeing the pcap (I can send it by email, because there is a lot of flow going on).
Yes sure no problem, send me the pcap! Email is in my profile. Do you have more information about the router model and/or Junos version numbers?
I will send it to you tomorrow and get the information you asked for! The people in charge of administering that router have only given me its model.
Hello, I have the same issue:
[2017-05-25T14:01:11,268][WARN ][logstash.codecs.netflow ] No matching template for flow id 256
[2017-05-25T14:01:11,268][WARN ][logstash.codecs.netflow ] No matching template for flow id 263
Version: Logstash 5.4.0 / logstash-codec-netflow 3.4.0
Device: Cisco ASA 5515 / IOS 9.6(2)
I've attached pcap file: asa.pcap.zip
Thanks in advance! Regards, Petro
Hi @pmatv, your pcap doesn't contain any template records. These are sent periodically by the ASA and are necessary to decode the netflow data.
The current pcap spans 9 seconds, which is too short to also capture template packets. 2 minutes would be fine.
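For example, something like the following tshark invocation should capture a 2-minute window that includes template packets (the interface name and port are assumptions, adjust them to your setup):

tshark -i eth0 -f "udp port 2055" -a duration:120 -w asa-netflow.pcap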
Hi @jorritfolmer, sorry, the previous pcap was indeed too short. I made a longer trace, capturing template packets only: asa-netflow-template.pcapng.zip. Let me know if more traces are required. Thanks
@jorritfolmer Excuse me, I totally forgot to send you the pcap because of its size. I can't find your email though.
EDIT: Just saw your email. Will send the pcap right now!
Thanks both!
@pmatv Can you check if the most recent commit to master fixes things?
@jorritfolmer Unfortunately it didn't help. I had already tried adding fields 298 and 299 in a custom definitions file, but without success.
@jorritfolmer I am going to make a full trace including some events, and will then send it to you (if you don't mind, of course) with a detailed description.
@pmatv This is a decoded sample of what I get with your pcaps and the current master branch:
{
"netflow" => {
"icmp_type" => 0,
"rev_flow_delta_bytes" => 2,
"flowset_id" => 263,
"flow_start_msec" => 1495713636349,
"l4_src_port" => 33962,
"fw_event" => 5,
"fwd_flow_delta_bytes" => 0,
"responderPackets" => 2,
"protocol" => 6,
"fw_ext_event" => 0,
"xlate_src_addr_ipv4" => "10.9.3.13",
"icmp_code" => 0,
"l4_dst_port" => 1433,
"output_snmp" => 3,
"xlate_src_port" => 33962,
"ipv4_dst_addr" => "10.8.11.1",
"xlate_dst_port" => 1433,
"version" => 9,
"flow_seq_num" => 5069440,
"ipv4_src_addr" => "10.9.3.13",
"event_time_msec" => 1495713697509,
"initiatorPackets" => 2,
"input_snmp" => 15,
"conn_id" => 2725907507,
"xlate_dst_addr_ipv4" => "10.8.11.1"
},
@WolfangAukang I think you're experiencing issues similar to issues #9 and #21.
Quick summary: Whenever 2 netflow exporters have different template layouts but share the same template_id, only one of them can be decoded.
For example, if you filter your pcap in Wireshark with cflow.template_id==256, you see a lot of different netflow exporters all using the same template_id. This is no issue in itself, except that some export template_id 256 with 18 fields, and some with 19 fields.
Workaround 1: use the quick hack to logstash-input-udp from issue #21 to pass source IP metadata to the netflow codec.
Workaround 2: use multiple Logstash instances, one instance per group of devices with similar template layouts (see the sketch below).
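A sketch of workaround 2, assuming each group of exporters can be pointed at its own instance and port (the port numbers are illustrative):

# instance A: devices that export template 256 with 18 fields
input { udp { port => 2055 codec => netflow } }

# instance B: devices that export template 256 with 19 fields
input { udp { port => 2056 codec => netflow } }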
Alright, I will try workaround 1; workaround 2 isn't feasible for now. Would this be solved in logstash-input-netflow?
@jorritfolmer I made the changes to the udp.rb file and the conf file with netflow settings, but it seems there is an error:
[ERROR][logstash.inputs.udp ] Exception in inputworker {"exception"=>#<NoMethodError: Direct event field references (i.e. event['field']) have been disabled in favor of using event get and set methods (e.g. event.get('field')). Please consult the Logstash 5.0 breaking changes documentation for more details.>, "backtrace"=>[
"/opt/apps/elk/logstash/logstash-core-event-java/lib/logstash/event.rb:45:in `method_missing'",
"/opt/apps/elk/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-udp-3.1.0/lib/logstash/inputs/udp.rb:127:in `inputworker'",
"/opt/apps/elk/logstash/vendor/bundle/jruby/1.9/gems/logstash-codec-netflow-3.4.0/lib/logstash/codecs/netflow.rb:198:in `decode'",
"org/jruby/RubyArray.java:1613:in `each'",
"/opt/apps/elk/logstash/vendor/bundle/jruby/1.9/gems/logstash-codec-netflow-3.4.0/lib/logstash/codecs/netflow.rb:198:in `decode'",
"/opt/apps/elk/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.4.0/lib/bindata/array.rb:208:in `each'",
"org/jruby/RubyArray.java:1613:in `each'",
"/opt/apps/elk/logstash/vendor/bundle/jruby/1.9/gems/bindata-2.4.0/lib/bindata/array.rb:208:in `each'",
"/opt/apps/elk/logstash/vendor/bundle/jruby/1.9/gems/logstash-codec-netflow-3.4.0/lib/logstash/codecs/netflow.rb:196:in `decode'",
"/opt/apps/elk/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-udp-3.1.0/lib/logstash/inputs/udp.rb:125:in `inputworker'",
"/opt/apps/elk/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-udp-3.1.0/lib/logstash/inputs/udp.rb:92:in `udp_listener'"]}
EDIT: Just saw what's happening. The code in issue #21 hasn't been updated for the Logstash 5.0 event API, which causes the error above. I can upload a gist with the updated version for future reference.
Did you get this hack working? Fixing this in the input plugin instead of the codec would indeed fix it durably.
I adapted the code from the UDP plugin to make the hack work, but I'm still seeing the "No matching template for flow id" warnings in the logs.
Just to be sure: did you set `metadata => true` for the udp port?
The no matching template warnings are normal for the first minute or so, per exporter. There is something wrong if these warnings persist after that period.
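For reference, once the hack is applied the input would look something like this (the port number is just an example; the metadata option comes from the patched udp.rb, not from stock logstash-input-udp):

input {
  udp {
    port => 2055
    codec => netflow
    metadata => true
  }
}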
Yes, I have set it to true, but the logs still show the same warnings.
@WolfangAukang, could you please tell us how exactly you've adapted the code/hack to work with the present versions of the logstash plugins?
This is the whole code:
# encoding: utf-8
require "date"
require "logstash/inputs/base"
require "logstash/namespace"
require "socket"
require "stud/interval"
# Read messages as events over the network via udp. The only required
# configuration item is `port`, which specifies the udp port logstash
# will listen on for event streams.
#
class LogStash::Inputs::Udp < LogStash::Inputs::Base
config_name "udp"
default :codec, "plain"
# The address which logstash will listen on.
config :host, :validate => :string, :default => "0.0.0.0"
# The port which logstash will listen on. Remember that ports less
# than 1024 (privileged ports) may require root or elevated privileges to use.
config :port, :validate => :number, :required => true
# The maximum packet size to read from the network
config :buffer_size, :validate => :number, :default => 65536
# The socket receive buffer size in bytes.
# If option is not set, the operating system default is used.
# The operating system will use the max allowed value if receive_buffer_bytes is larger than allowed.
# Consult your operating system documentation if you need to increase this max allowed value.
config :receive_buffer_bytes, :validate => :number
# Number of threads processing packets
config :workers, :validate => :number, :default => 2
# This is the number of unprocessed UDP packets you can hold in memory
# before packets will start dropping.
config :queue_size, :validate => :number, :default => 2000
# Should the event's metadata be included?
config :metadata, :validate => :boolean, :default => false
public
def initialize(params)
super
BasicSocket.do_not_reverse_lookup = true
end # def initialize
public
def register
@udp = nil
end # def register
public
def run(output_queue)
@output_queue = output_queue
begin
# udp server
udp_listener(output_queue)
rescue => e
@logger.warn("UDP listener died", :exception => e, :backtrace => e.backtrace)
Stud.stoppable_sleep(5) { stop? }
retry unless stop?
end # begin
end # def run
private
def udp_listener(output_queue)
@logger.info("Starting UDP listener", :address => "#{@host}:#{@port}")
if @udp && ! @udp.closed?
@udp.close
end
@udp = UDPSocket.new(Socket::AF_INET)
# set socket receive buffer size if configured
if @receive_buffer_bytes
@udp.setsockopt(Socket::SOL_SOCKET, Socket::SO_RCVBUF, @receive_buffer_bytes)
end
rcvbuf = @udp.getsockopt(Socket::SOL_SOCKET, Socket::SO_RCVBUF).unpack("i")[0]
if @receive_buffer_bytes && rcvbuf != @receive_buffer_bytes
@logger.warn("Unable to set receive_buffer_bytes to desired size. Requested #{@receive_buffer_bytes} but obtained #{rcvbuf} bytes.")
end
@udp.bind(@host, @port)
@logger.info("UDP listener started", :address => "#{@host}:#{@port}", :receive_buffer_bytes => "#{rcvbuf}", :queue_size => "#{@queue_size}")
@input_to_worker = SizedQueue.new(@queue_size)
@input_workers = @workers.times.map do |i|
@logger.debug("Starting UDP worker thread", :worker => i)
Thread.new { inputworker(i) }
end
while !stop?
next if IO.select([@udp], [], [], 0.5).nil?
# collect datagram messages and add to inputworker queue
@queue_size.times {
begin
payload, client = @udp.recvfrom_nonblock(@buffer_size)
break if payload.empty?
@input_to_worker.push([payload, client])
rescue IO::EAGAINWaitReadable
break
end
}
end
ensure
if @udp
@udp.close_read rescue nil
@udp.close_write rescue nil
end
end # def udp_listener
def inputworker(number)
LogStash::Util::set_thread_name("<udp.#{number}")
begin
while true
payload, client = @input_to_worker.pop
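# Core of the hack from issue #21: hand the sender's source port and IP
# to the codec, so the netflow codec can track templates per exporter.
# client is ["AF_INET", port, hostname, ip] as returned by recvfrom.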
if @metadata
metadata_to_codec = {}
metadata_to_codec["port"] = client[1]
metadata_to_codec["host"] = client[3]
@codec.decode(payload, metadata_to_codec) do |event|
decorate(event)
event.set("host", client[3]) if event.get("host").nil?
@output_queue.push(event)
end
else
@codec.decode(payload) do |event|
decorate(event)
event.set("host", client[3]) if event.get("host").nil?
@output_queue.push(event)
end
end
end
rescue => e
@logger.error("Exception in inputworker", "exception" => e, "backtrace" => e.backtrace)
end
end # def inputworker
public
def close
@udp.close rescue nil
end
public
def stop
@udp.close rescue nil
end
end # class LogStash::Inputs::Udp
I have almost the same message, and it's not going away after a minute or even 10 minutes: "[2017-10-19T11:20:33,707][WARN ][logstash.codecs.netflow ] Can't (yet) decode flowset id 259 from source id 0, because no template to decode it with has been received. This message will usually go away after 1 minute."
I get very few or even no netflow-* entries in Kibana's Discover view, and the visualizations don't work at all, not even with the entries that are there.
I'm closing this issue to prevent duplicate issues from filling up the issue tracker. Issues #9 and #21 will remain open to keep track of addressing the root cause.
Version: Logstash 5.5.0 / logstash-codec-netflow 3.4.1
Operating System: Ubuntu 14.04
Config File:
input {
  udp {
    port => 2055
    codec => netflow
    tags => "netflow_input"
  }
}
output {
  if "netflow_input" in [tags] {
    kafka {
      topic_id => "logstash_netflow"
      codec => "json"
      bootstrap_servers => "192.168.50.136:9092"
    }
  }
}
Error message: No matching template for flow id 1024
My pcap file is too big, but it's from the public dataset CICIDS2017. Can you help me? @jorritfolmer
@daishu7 try upgrading first; a lot has been fixed since 3.4.1.
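For reference, the codec can usually be updated in place with the bundled plugin manager, run from the Logstash install directory:

bin/logstash-plugin update logstash-codec-netflow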
Hi,
Today I was adjusting a filter that has nothing to do with netflow or its ports. After a logstash reboot I got this message.
[2019-01-29T11:29:30,753][WARN ][logstash.codecs.netflow ] Template length exceeds flowset length, skipping {:template_id=>256, :template_length=>104, :record_length=>100}
Using version: 6.5.4
What can I do to find the root cause of the problem?
Don't waste your time with logstash and netflow. Go for other systems.
The Logstash implementation goes against the RFC. The NetFlow standard states that a template should be stored keyed by both its template id and the IP address of the packet sender. Out of the box, Logstash does not provide the UDP-layer source IP address to the netflow decoder, so overlapping template ids sent by multiple sources cause this: the codec learns how to decode template id 256 from one source, and if another router sends the same template id (they usually do), you get the error.
There are some kludges around this, and the netflow codec has support for them: you modify udp.rb a bit to send metadata, including the source IP, up to the codec. See the sketch below.
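Conceptually the fix looks something like this; a minimal sketch with illustrative names, not the codec's actual internals:

# Illustrative sketch of per-exporter template tracking. Per the NetFlow v9
# standard (RFC 3954), templates are scoped to the exporter, so the cache
# must be keyed by source IP as well as template id.
templates = {}

def store_template(templates, source_ip, template_id, fields)
  templates[[source_ip, template_id]] = fields
end

def lookup_template(templates, source_ip, template_id)
  templates[[source_ip, template_id]]
end

# Two routers reusing template id 256 with different layouts no longer clash:
store_template(templates, "10.0.0.1", 256, ["ipv4_src_addr", "in_bytes"])
store_template(templates, "10.0.0.2", 256, ["ipv6_src_addr", "in_pkts"])
lookup_template(templates, "10.0.0.2", 256)  # => ["ipv6_src_addr", "in_pkts"]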
[@timestamp][WARN ][logstash.codecs.netflow ] No matching template for flow id 257
[@timestamp][WARN ][logstash.codecs.netflow ] No matching template for flow id 256
Template length doesn't fit cleanly into flowset {:template_id=>257, :template_length=>44, :record_length=>1392}
Template length doesn't fit cleanly into flowset {:template_id=>256, :template_length=>44, :record_length=>1392}