robcowart / elastiflow

Network flow analytics (Netflow, sFlow and IPFIX) with the Elastic Stack

[elastiflow] Ignoring Netflow version v0 #539

Closed syellayagari closed 4 years ago

syellayagari commented 4 years ago

Hi, I set this up yesterday and I cannot see any graphs in Kibana.

I am sending data from a Juniper QFX. When I look into the Logstash logs I see:

[2020-05-12T12:48:54,498][WARN ][logstash.codecs.netflow ][elastiflow] Ignoring Netflow version v0
[2020-05-12T12:48:54,606][WARN ][logstash.codecs.netflow ][elastiflow] Ignoring Netflow version v0
[2020-05-12T12:48:54,606][WARN ][logstash.codecs.netflow ][elastiflow] Ignoring Netflow version v0
[2020-05-12T12:48:54,708][WARN ][logstash.codecs.netflow ][elastiflow] Ignoring Netflow version v0
[2020-05-12T12:48:54,708][WARN ][logstash.codecs.netflow ][elastiflow] Ignoring Netflow version v0

When I change the UDP port on my Juniper to 2055 I get these logs, and when I send on 6343 I don't see any logs.

There are no restrictions whatsoever, as both the switch and the server are local.

robcowart commented 4 years ago

It looks like you are trying to send sFlow data (default port 6343) to the port on which the Netflow input is listening (default 2055). You can send Netflow and IPFIX to the same port, but sFlow must be sent to a separate port.
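
For reference, those defaults (2055 for Netflow, 4739 for IPFIX, 6343 for sFlow) can be confirmed on the collector itself. A quick sanity check, sketched here assuming ss is available (use netstat -ulnp on older systems), is:

# confirm which UDP ports the Logstash inputs are actually bound to
ss -ulnp | grep -E '2055|4739|6343'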

syellayagari commented 4 years ago

Thank you, Rob. I have tried sending sFlow on 6343. I don't see any logs in Logstash. Is there anything I am missing?

set protocols sflow sample-rate ingress 1000
set protocols sflow sample-rate egress 1000
set protocols sflow source-ip xxxx
set protocols sflow collector xxxx udp-port 6343
set protocols sflow interfaces et-0/0/48.0

Server Outputs

iptables -L -v -n
Chain INPUT (policy ACCEPT 38883 packets, 29M bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination

Chain OUTPUT (policy ACCEPT 31215 packets, 25M bytes)
 pkts bytes target     prot opt in     out     source               destination

netstat -ulnp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
udp        0      0 0.0.0.0:2055            0.0.0.0:*                           28211/java
udp        0      0 0.0.0.0:4739            0.0.0.0:*                           28211/java
udp        0      0 0.0.0.0:6343            0.0.0.0:*                           28211/java
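
As an aside, a simple way to prove the sFlow datagrams are actually reaching the server, sketched here with an assumed capture on all interfaces, would be:

# capture a handful of sFlow datagrams on the collector
tcpdump -c 5 -ni any udp port 6343

If nothing is captured, the problem is upstream of Logstash (exporter config or network path); if packets arrive but Logstash stays silent, the problem is local.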

robcowart commented 4 years ago

If you expand the time range in your dashboards, do you see any records in the past or future? Make sure that you aren't suffering from the system time settings being off.
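
A quick way to verify the collector's clock, sketched here assuming a systemd-based host as shown later in this thread, is:

# show system time, time zone and NTP synchronisation status
timedatectl
# print the current UTC time for comparison with the exporter's clock
date -u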

syellayagari commented 4 years ago

It says "No data to display". I have attached screenshots (Screenshot 2020-05-12 at 14 52 41, Screenshot 2020-05-12 at 15 11 00).

robcowart commented 4 years ago

Try starting Logstash in a terminal using this simple pipeline...

input {
  udp {
    host => "0.0.0.0"
    port => 6343
    codec => sflow
  }
}

output {
  stdout {
    codec => rubydebug { }
  }
}

If you get no output, then something like a firewall config is blocking the packets from reaching Logstash.
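
One more thing worth ruling out as a side note: the sflow codec comes from the separate logstash-codec-sflow plugin (ElastiFlow's install steps add it explicitly), so if it were missing this test pipeline would fail to start rather than stay silent. You can confirm it is installed with:

# confirm the sFlow codec plugin is present in this Logstash installation
/usr/share/logstash/bin/logstash-plugin list | grep sflow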

syellayagari commented 4 years ago

I added this under /etc/logstash/conf.d:

cat test.conf
input {
  udp {
    host => "0.0.0.0"
    port => 6343
    codec => sflow
  }
}

output {
  stdout {
    codec => rubydebug { }
  }
}

Then I added a new pipeline:

cat pipelines.yml

# This file is where you define your pipelines. You can define multiple.
# For more information on multiple pipelines, see the documentation:
#   https://www.elastic.co/guide/en/logstash/current/multiple-pipelines.html
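
(The actual pipeline entry added to pipelines.yml is not shown above; a minimal entry for this test pipeline would look something like the following, with the pipeline.id chosen arbitrarily and the path taken from the cat output above.)

- pipeline.id: sflow-test
  path.config: "/etc/logstash/conf.d/test.conf"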

Then I reloaded the daemon and started Logstash (systemctl start logstash). No output at all.

[root@us-wal-savv-ntd-01 ~]# sudo systemctl status logstash
● logstash.service - logstash
   Loaded: loaded (/etc/systemd/system/logstash.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/logstash.service.d
           └─elastiflow.conf
   Active: active (running) since Tue 2020-05-12 11:01:15 BST; 4h 42min ago

syellayagari commented 4 years ago

I have a question. You said that when we start Logstash it initially takes time, but for me it was instant. Am I missing something here?

robcowart commented 4 years ago

I said "in a terminal", i.e. from the CLI.

/PATH/TO/logstash/bin/logstash --path.config /PATH/TO/test.conf

The raw sFlow should print to the screen if it is being received.

syellayagari commented 4 years ago

Hi Rob

I am sorry, but I am very new to the ELK stack, and after running the simple pipeline I don't see any raw sFlow data. However, I see the logs below.

[root@us-wal-savv-ntd-01 conf.d]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.headius.backport9.modules.Modules (file:/usr/share/logstash/logstash-core/lib/jars/jruby-complete-9.2.9.0.jar) to method sun.nio.ch.NativeThread.signal(long)
WARNING: Please consider reporting this to the maintainers of com.headius.backport9.modules.Modules
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[WARN ] 2020-05-12 23:59:17.991 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2020-05-12 23:59:17.998 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"7.6.2"}
[INFO ] 2020-05-12 23:59:19.625 [Converge PipelineAction::Create] Reflections - Reflections took 33 ms to scan 1 urls, producing 20 keys and 40 values
[WARN ] 2020-05-12 23:59:21.169 [[main]-pipeline-manager] LazyDelegatingGauge - A gauge metric of an unknown type (org.jruby.RubyArray) has been created for key: cluster_uuids. This may result in invalid serialization. It is recommended to log an issue to the responsible developer/development team.
[INFO ] 2020-05-12 23:59:21.172 [[main]-pipeline-manager] javapipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>40, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>5000, "pipeline.sources"=>["/etc/logstash/conf.d/test.conf"], :thread=>"#"}
[INFO ] 2020-05-12 23:59:22.112 [[main]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"main"}
[INFO ] 2020-05-12 23:59:22.162 [Agent thread] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[INFO ] 2020-05-12 23:59:22.169 [[main]<udp] udp - Starting UDP listener {:address=>"0.0.0.0:6343"}
[ERROR] 2020-05-12 23:59:22.220 [[main]<udp] udp - UDP listener died {:exception=>#<Errno::EADDRINUSE: Address already in use - bind - Address already in use>, :backtrace=>["org/jruby/ext/socket/RubyUDPSocket.java:203:in `bind'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-input-udp-3.3.4/lib/logstash/inputs/udp.rb:116:in `udp_listener'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-input-udp-3.3.4/lib/logstash/inputs/udp.rb:68:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:328:in `inputworker'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:320:in `block in start_input'"]}
[INFO ] 2020-05-12 23:59:22.373 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9601}
[INFO ] 2020-05-12 23:59:27.226 [[main]<udp] udp - Starting UDP listener {:address=>"0.0.0.0:6343"}
[ERROR] 2020-05-12 23:59:27.228 [[main]<udp] udp - UDP listener died {:exception=>#<Errno::EADDRINUSE: Address already in use - bind - Address already in use>, :backtrace=>["org/jruby/ext/socket/RubyUDPSocket.java:203:in `bind'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-input-udp-3.3.4/lib/logstash/inputs/udp.rb:116:in `udp_listener'", "/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-input-udp-3.3.4/lib/logstash/inputs/udp.rb:68:in `run'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:328:in `inputworker'", "/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:320:in `block in start_input'"]}
[INFO ] 2020-05-12 23:59:32.228 [[main]<udp] udp - Starting UDP listener {:address=>"0.0.0.0:6343"}

syellayagari commented 4 years ago

I also enabled flow export on a Fortinet device and I get different errors, the ones which you already mentioned in the known issues. However, I don't see any improvement even after 1 hour.

The below is for NetFlow on the Fortinet:

[2020-05-12T23:27:28,790][WARN ][logstash.codecs.netflow ][elastiflow] Can't (yet) decode flowset id 262 from source id 1, because no template to decode it with has been received. This message will usually go away after 1 minute.
[2020-05-12T23:27:28,790][WARN ][logstash.codecs.netflow ][elastiflow] Can't (yet) decode flowset id 262 from source id 1, because no template to decode it with has been received. This message will usually go away after 1 minute.

robcowart commented 4 years ago

Did you first stop the instance of Logstash running the ElastiFlow pipeline as a daemon? It says something else already has the port open.
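
A minimal way to do that, sketched here assuming the daemonized instance is the logstash.service shown earlier, is:

# stop the daemonized Logstash that is already holding udp/6343
sudo systemctl stop logstash

# confirm nothing is bound to the sFlow port before re-running the test pipeline
ss -ulnp | grep 6343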

syellayagari commented 4 years ago

Thank you for the tip. I just removed some IPs and other info from the output. I ran the file with the netflow codec and I see the raw data, but Kibana is not able to show it to me.

{ "host" => "", "netflow" => { "xlate_dst_addr_ipv4" => "", "in_bytes" => 57657, "out_pkts" => 54, "protocol" => 6, "out_bytes" => 57657, "last_switched" => "2020-05-12T23:35:48.940Z", "version" => 9, "postIpDiffServCodePoint" => 255, "xlate_dst_port" =>, "l4_src_port" =>, "flow_end_reason" => 3, "flow_seq_num" => 1715127, "forwarding_status" => { "status" => 1, "reason" => 0 }, "l4_dst_port" =>, "xlate_src_addr_ipv4" => "", "application_id" => "20..12356..0", "ipv4_dst_addr" => "", "ipv4_src_addr" => "", "xlate_src_port" => 0, "first_switched" => "2020-05-12T23:35:42.630Z", "output_snmp" => 17, "flowset_id" => 262, "input_snmp" => 20, "in_pkts" => 54 }, "@version" => "1", "@timestamp" => 2020-05-12T23:35:49.000Z }

By the way, this is a Fortinet sample.

I will post the sFlow output now.

syellayagari commented 4 years ago

sFlow also shows raw output:

[INFO ] 2020-05-13 00:26:39.453 [[main]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"main"}
[INFO ] 2020-05-13 00:26:39.495 [[main]<udp] udp - Starting UDP listener {:address=>"0.0.0.0:6343"}
[INFO ] 2020-05-13 00:26:39.508 [Agent thread] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[INFO ] 2020-05-13 00:26:39.531 [[main]<udp] udp - UDP listener started {:address=>"0.0.0.0:6343", :receive_buffer_bytes=>"106496", :queue_size=>"2000"}
[INFO ] 2020-05-13 00:26:39.665 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/awesome_print-1.7.0/lib/awesome_print/formatters/base_formatter.rb:31: warning: constant ::Fixnum is deprecated
{
    "sub_agent_id" => "16",
    "eth_type" => "2048",
    "dst_port" => "",
    "src_ip" => "",
    "frame_length_times_sampling_rate" => 24288000,
    "eth_src" => "",
    "sampling_rate" => "16000",
    "eth_dst" => "",
    "src_vlan" => "",
    "dst_vlan" => "",
    "@timestamp" => 2020-05-12T23:26:39.945Z,
    "frame_length" => "1518",
    "ip_protocol" => "6",
    "ip_version" => "4",
    "sample_pool" => "1852567344",
    "input_interface" => "531",
    "host" => "",
    "uptime_in_ms" => "2520637924",
    "src_port" => "",
    "protocol" => "1",
    "source_id_type" => "0",
    "output_interface" => "519",
    "dst_ip" => "",
    "drops" => "0",
    "dst_priority" => "0",
    "sflow_type" => "flow_sample",
    "agent_ip" => "",
    "@version" => "1",
    "stripped" => "4",
    "src_priority" => "0",
    "source_id_index" => "531"
}

I have stripped the IPs from the output. This is sFlow from the Juniper.

syellayagari commented 4 years ago

Hi Rob

I have restarted Kibana and Elasticsearch and I can see data, but only for the Netflow source, not the sFlow source.

Thank you very much for your help. I will play around and provide feedback via the official company site.

Can you suggest how I can integrate this solution with port-mirrored traffic rather than flows?

syellayagari commented 4 years ago

Hi Rob, is there anything I can look into, as the sFlow source is not picked up by Kibana? I see Logstash is getting the flows as per the output I posted above. How do I check that it is actually creating an index in the Elasticsearch DB?
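
One quick way to check, sketched here assuming Elasticsearch is reachable on localhost:9200 and the default elastiflow-* index naming, is to ask Elasticsearch directly:

# list ElastiFlow indices with their document counts and sizes
curl -s 'http://localhost:9200/_cat/indices/elastiflow-*?v'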

Thank you in advance.

syellayagari commented 4 years ago

sFlow is working after rebooting the server.

Thank you very much Rob.