path-network / logstash-codec-sflow

Logstash codec plugin to decode sFlow

Logstash not receiving sflow as input #22

Closed sgarcesv closed 4 years ago

sgarcesv commented 4 years ago

Hi,

Hope you can help me with the issue I'm having.

Current scenario:

We are currently integrating both NetFlow and sFlow into Elastic with Logstash version 7.3.2 on a CentOS 7 server. We've managed to visualize NetFlow correctly, but when trying to ingest sFlow we are not seeing anything.

For testing purposes, we are basically executing Logstash from the command line:

On port 6344, we are receiving sflow from a NEXUS 3000 with the following configuration:

feature sflow
sflow sampling-rate 4096
sflow max-sampled-size 128 -- 64
sflow counter-poll-interval 1
sflow max-datagram-size 1400
sflow collector-ip X.X.X.X vrf default
sflow collector-port 6344
sflow agent-ip Y.Y.Y.Y
no sflow extended switch
...
sflow data-source interface ...

Command line:


/usr/share/logstash/bin/logstash -e 'input { udp { port => 6344 codec => sflow }}' --debug
...
[DEBUG] 2019-10-17 13:35:57.248 [[main]<udp] sflow - config LogStash::Codecs::Sflow/@snmp_interface = false
[DEBUG] 2019-10-17 13:35:57.249 [[main]<udp] sflow - config LogStash::Codecs::Sflow/@snmp_community = "public"
[DEBUG] 2019-10-17 13:35:57.249 [[main]<udp] sflow - config LogStash::Codecs::Sflow/@interface_cache_size = 1000
[DEBUG] 2019-10-17 13:35:57.249 [[main]<udp] sflow - config LogStash::Codecs::Sflow/@interface_cache_ttl = 3600
[INFO ] 2019-10-17 13:35:57.296 [[main]<udp] udp - Starting UDP listener {:address=>"0.0.0.0:6344"}
[DEBUG] 2019-10-17 13:35:57.420 [Api Webserver] agent - Starting puma
[DEBUG] 2019-10-17 13:35:57.557 [Api Webserver] agent - Trying to start WebServer {:port=>9600}
[INFO ] 2019-10-17 13:35:57.584 [[main]<udp] udp - UDP listener started {:address=>"0.0.0.0:6344", :receive_buffer_bytes=>"106496", :queue_size=>"2000"}
[DEBUG] 2019-10-17 13:35:57.785 [pool-3-thread-2] jvm - collector name {:name=>"ParNew"}
[DEBUG] 2019-10-17 13:35:57.812 [pool-3-thread-2] jvm - collector name {:name=>"ConcurrentMarkSweep"}
[DEBUG] 2019-10-17 13:35:58.116 [Api Webserver] service - [api-service] start
[INFO ] 2019-10-17 13:35:58.365 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
[DEBUG] 2019-10-17 13:35:56.866 [[main]>worker3] CompiledPipeline - Compiled output
...
[DEBUG] 2019-10-17 11:08:58.195 [logstash-pipeline-flush] PeriodicFlush - Pushing flush onto pipeline.
[DEBUG] 2019-10-17 11:08:59.329 [pool-3-thread-3] jvm - collector name {:name=>"ParNew"}
[DEBUG] 2019-10-17 11:08:59.329 [pool-3-thread-3] jvm - collector name {:name=>"ConcurrentMarkSweep"}
[DEBUG] 2019-10-17 11:09:03.195 [logstash-pipeline-flush] PeriodicFlush - Pushing flush onto pipeline.
[DEBUG] 2019-10-17 11:09:04.336 [pool-3-thread-3] jvm - collector name {:name=>"ParNew"}
[DEBUG] 2019-10-17 11:09:04.336 [pool-3-thread-3] jvm - collector name {:name=>"ConcurrentMarkSweep"}
[DEBUG] 2019-10-17 11:09:08.195 [logstash-pipeline-flush] PeriodicFlush - Pushing flush onto pipeline.
[DEBUG] 2019-10-17 11:09:09.347 [pool-3-thread-3] jvm - collector name {:name=>"ParNew"}
[DEBUG] 2019-10-17 11:09:09.347 [pool-3-thread-3] jvm - collector name {:name=>"ConcurrentMarkSweep"}
...

As you can see, NO data is coming through Logstash and no error or warning is shown, but if we capture packets, we can see sFlow arriving on port 6344:

[root@elastic-netflow ~]# tcpdump -vvv -i any port 6344
tcpdump: listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
13:05:30.791669 IP (tos 0x0, ttl 63, id 8334, offset 0, flags [none], proto UDP (17), length 1128)
    10.5.0.74.57829 > elastic-netflow.6344: [udp sum ok] UDP, length 1100
13:05:30.803948 IP (tos 0x0, ttl 63, id 8335, offset 0, flags [none], proto UDP (17), length 1376)
    10.5.0.74.57829 > elastic-netflow.6344: [udp sum ok] UDP, length 1348
13:05:30.815049 IP (tos 0x0, ttl 63, id 8336, offset 0, flags [none], proto UDP (17), length 1316)
    10.5.0.74.57829 > elastic-netflow.6344: [udp sum ok] UDP, length 1288
13:05:30.830503 IP (tos 0x0, ttl 63, id 8337, offset 0, flags [none], proto UDP (17), length 1316)
    10.5.0.74.57829 > elastic-netflow.6344: [udp sum ok] UDP, length 1288
13:05:31.811592 IP (tos 0x0, ttl 63, id 8338, offset 0, flags [none], proto UDP (17), length 464)
    10.5.0.74.57829 > elastic-netflow.6344: [udp sum ok] UDP, length 436
13:05:31.824073 IP (tos 0x0, ttl 63, id 8339, offset 0, flags [none], proto UDP (17), length 1376)
    10.5.0.74.57829 > elastic-netflow.6344: [udp sum ok] UDP, length 1348
13:05:31.836283 IP (tos 0x0, ttl 63, id 8340, offset 0, flags [none], proto UDP (17), length 1316)
    10.5.0.74.57829 > elastic-netflow.6344: [udp sum ok] UDP, length 1288
13:05:31.852217 IP (tos 0x0, ttl 63, id 8341, offset 0, flags [none], proto UDP (17), length 1316)
    10.5.0.74.57829 > elastic-netflow.6344: [udp sum ok] UDP, length 1288
13:05:32.833621 IP (tos 0x0, ttl 63, id 8342, offset 0, flags [none], proto UDP (17), length 464)
    10.5.0.74.57829 > elastic-netflow.6344: [udp sum ok] UDP, length 436
...

Same test, but with netflow:

/usr/share/logstash/bin/logstash -e 'input { udp { port => 2055 codec => netflow }}' 
...
[INFO ] 2019-10-17 13:32:24.406 [[main]<udp] udp - Starting UDP listener {:address=>"0.0.0.0:2055"}
[INFO ] 2019-10-17 13:32:24.502 [Ruby-0-Thread-1: /usr/share/logstash/lib/bootstrap/environment.rb:6] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[INFO ] 2019-10-17 13:32:24.606 [[main]<udp] udp - UDP listener started {:address=>"0.0.0.0:2055", :receive_buffer_bytes=>"106496", :queue_size=>"2000"}
[INFO ] 2019-10-17 13:32:25.231 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/awesome_print-1.7.0/lib/awesome_print/formatters/base_formatter.rb:31: warning: constant ::Fixnum is deprecated
{
          "host" => "L.L.L.L",
       "netflow" => {
               "protocol" => 17,
          "ipv4_dst_addr" => "W.W.W.W",
              "tcp_flags" => 0,
               "in_bytes" => 80,
              "icmp_type" => 0,
             "input_snmp" => 0,
           "flow_seq_num" => 30583,
                "src_tos" => 0,
             "flowset_id" => 257,
            "l4_src_port" => 123,
          "mul_igmp_type" => 0,
                "in_pkts" => 1,
            "l4_dst_port" => 123,
          "ipv4_src_addr" => "Z.Z.Z.Z",
            "output_snmp" => 156,
              "direction" => 1,
          "flow_end_msec" => 1571311938000,
                "version" => 9,
        "flow_start_msec" => 1571311938000
    },
      "@version" => "1",
    "@timestamp" => 2019-10-17T11:32:25.000Z
}
...

Any idea?

Thanks in advance!!!

sgarcesv commented 4 years ago

Hi, just one comment: we changed the default port (6343) to 6344 just to rule out an issue with the port itself.

robcowart commented 4 years ago

Can you provide a PCAP of a few of your sFlow records? That way it can be verified that the sFlow messages are not the problem.
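For reference, a short capture like the one below would be enough; the interface and packet count here are just placeholders:

tcpdump -i any -s 0 -c 20 -w sflow-sample.pcap udp port 6344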

robcowart commented 4 years ago

Testing quickly on my Mac, I had no issue decoding the sFlow packets...

{
                      "source_id_type" => "0",
                            "protocol" => "1",
                            "stripped" => "4",
             "output_interface_format" => "0",
                            "agent_ip" => "10.5.0.74",
               "input_interface_value" => "369098771",
                            "eth_type" => "33024",
                         "sample_pool" => "3113984000",
              "input_interface_format" => "0",
                             "eth_src" => "a0:a0:a6:a5:ad:a9",
                       "vlan_priority" => "0",
                            "src_port" => "1433",
                            "dst_port" => "9537",
                       "sampling_rate" => "4096",
                        "uptime_in_ms" => "4161712704",
                             "vlan_id" => "5",
                              "src_ip" => "192.2.0.1",
                          "@timestamp" => 2019-10-18T08:48:07.678Z,
                             "eth_dst" => "a0:a0:a6:a5:a4:a4",
              "output_interface_value" => "436318208",
    "frame_length_times_sampling_rate" => 3121152,
                              "dst_ip" => "192.2.0.2",
                                "host" => "127.0.0.1",
                         "ip_protocol" => "6",
                                "type" => "sflow",
                            "@version" => "1",
                        "frame_length" => "762",
                               "drops" => "455824",
                          "sflow_type" => "expanded_flow_sample",
                          "ip_version" => "4",
                     "source_id_index" => "436318208",
                        "sub_agent_id" => "100"
}

Initially I had 2.1.1 of the codec installed. I also updated to 2.1.2 and retested, which also worked.
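If needed, the installed version of the codec can be checked and upgraded with the Logstash plugin tool (paths assume the default package install location):

/usr/share/logstash/bin/logstash-plugin list --verbose logstash-codec-sflow
/usr/share/logstash/bin/logstash-plugin update logstash-codec-sflow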

This leads me to believe that the codec is working fine for your sFlow records. Since you can receive Netflow packets without any issues, I would also assume the UDP input itself is fine. What is left is the OS.

Have you tried to temporarily disable things like firewalld or selinux to ensure that they are not preventing packets from getting through on the sFlow ports?
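For example, something along these lines would rule both out temporarily (run as root):

systemctl stop firewalld    # stop the firewall for the duration of the test
setenforce 0                # switch SELinux to permissive mode
getenforce                  # should now report "Permissive"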

sgarcesv commented 4 years ago

Hi Rob,

First of all, thanks for the super quick response! :)

Regarding your comments, I've checked firewalld and it is disabled, and iptables as well:

[root@xxx ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
   Active: inactive (dead) since mar 2019-10-01 16:47:20 CEST; 2 weeks 2 days ago
     Docs: man:firewalld(1)
 Main PID: 9236 (code=exited, status=0/SUCCESS)

sep 16 16:49:10 elastic-netflow systemd[1]: Starting firewalld - dynamic firewall daemon...
sep 16 16:49:10 elastic-netflow systemd[1]: Started firewalld - dynamic firewall daemon.
oct 01 16:47:19 elastic-netflow systemd[1]: Stopping firewalld - dynamic firewall daemon...
oct 01 16:47:20 elastic-netflow systemd[1]: Stopped firewalld - dynamic firewall daemon.
[root@xxx ~]# 

[root@xxx ~]# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination         

Chain vl (0 references)
target     prot opt source               destination 

I've temporarily set SELinux to permissive:

[root@xxx ~]# getenforce
Permissive
[root@xxx ~]# sestatus
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   permissive
Mode from config file:          enforcing
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Max kernel policy version:      31

But the result is the same.

However, since you already tested the codec against the NEXUS sFlow and it decoded correctly, I tried without setting any codec (plain). But I'm still not able to see the sFlow through Logstash, even though the sFlow packets are arriving on port 6344:

[root@xxx ~]# tcpdump -vvv -i any -s 0 port 6344
tcpdump: listening on any, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
16:10:14.138379 IP (tos 0x0, ttl 63, id 21069, offset 0, flags [none], proto UDP (17), length 1280)
    10.5.0.74.57829 > elastic-netflow.6344: [udp sum ok] UDP, length 1252
16:10:14.423672 IP (tos 0x0, ttl 63, id 21070, offset 0, flags [none], proto UDP (17), length 1420)
    10.5.0.74.57829 > elastic-netflow.6344: [udp sum ok] UDP, length 1392
16:10:14.818973 IP (tos 0x0, ttl 63, id 21071, offset 0, flags [none], proto UDP (17), length 872)
    10.5.0.74.57829 > elastic-netflow.6344: [udp sum ok] UDP, length 844
16:10:14.831220 IP (tos 0x0, ttl 63, id 21072, offset 0, flags [none], proto UDP (17), length 1376)
    10.5.0.74.57829 > elastic-netflow.6344: [udp sum ok] UDP, length 1348
16:10:14.841444 IP (tos 0x0, ttl 63, id 21073, offset 0, flags [none], proto UDP (17), length 1316)
    10.5.0.74.57829 > elastic-netflow.6344: [udp sum ok] UDP, length 1288
16:10:15.139471 IP (tos 0x0, ttl 63, id 21074, offset 0, flags [none], proto UDP (17), length 1316)
    10.5.0.74.57829 > elastic-netflow.6344: [udp sum ok] UDP, length 1288
16:10:15.837950 IP (tos 0x0, ttl 63, id 21075, offset 0, flags [none], proto UDP (17), length 1252)
    10.5.0.74.57829 > elastic-netflow.6344: [udp sum ok] UDP, length 1224
...

Logstash execution with no codec:

[root@elastic-netflow ~]# /usr/share/logstash/bin/logstash -e 'input { udp { port => 6344 }}' --debug

Thread.exclusive is deprecated, use Thread::Mutex
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
[DEBUG] 2019-10-18 16:09:48.194 [main] scaffold - Found module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
[DEBUG] 2019-10-18 16:09:48.201 [main] registry - Adding plugin to the registry {:name=>"fb_apache", :type=>:modules, :class=>#<LogStash::Modules::Scaffold:0x509cf73a @directory="/usr/share/logstash/modules/fb_apache/configuration", @module_name="fb_apache", @kibana_version_parts=["6", "0", "0"]>}
[DEBUG] 2019-10-18 16:09:48.202 [main] scaffold - Found module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
[DEBUG] 2019-10-18 16:09:48.202 [main] registry - Adding plugin to the registry {:name=>"netflow", :type=>:modules, :class=>#<LogStash::Modules::Scaffold:0x77422a9f @directory="/usr/share/logstash/modules/netflow/configuration", @module_name="netflow", @kibana_version_parts=["6", "0", "0"]>}
[DEBUG] 2019-10-18 16:09:48.571 [LogStash::Runner] runner - -------- Logstash Settings (* means modified) ---------
[DEBUG] 2019-10-18 16:09:48.571 [LogStash::Runner] runner - node.name: "elastic-netflow"
[DEBUG] 2019-10-18 16:09:48.572 [LogStash::Runner] runner - path.data: "/usr/share/logstash/data"
[DEBUG] 2019-10-18 16:09:48.572 [LogStash::Runner] runner - *config.string: "input { udp { port => 6344 }}"
[DEBUG] 2019-10-18 16:09:48.572 [LogStash::Runner] runner - modules.cli: []
[DEBUG] 2019-10-18 16:09:48.572 [LogStash::Runner] runner - modules: []
[DEBUG] 2019-10-18 16:09:48.572 [LogStash::Runner] runner - modules_list: []
[DEBUG] 2019-10-18 16:09:48.572 [LogStash::Runner] runner - modules_variable_list: []
[DEBUG] 2019-10-18 16:09:48.572 [LogStash::Runner] runner - modules_setup: false
[DEBUG] 2019-10-18 16:09:48.572 [LogStash::Runner] runner - config.test_and_exit: false
[DEBUG] 2019-10-18 16:09:48.572 [LogStash::Runner] runner - config.reload.automatic: false
[DEBUG] 2019-10-18 16:09:48.572 [LogStash::Runner] runner - config.reload.interval: 3000000000
[DEBUG] 2019-10-18 16:09:48.572 [LogStash::Runner] runner - config.support_escapes: false
[DEBUG] 2019-10-18 16:09:48.572 [LogStash::Runner] runner - config.field_reference.parser: "STRICT"
[DEBUG] 2019-10-18 16:09:48.572 [LogStash::Runner] runner - metric.collect: true
[DEBUG] 2019-10-18 16:09:48.572 [LogStash::Runner] runner - pipeline.id: "main"
[DEBUG] 2019-10-18 16:09:48.572 [LogStash::Runner] runner - pipeline.system: false
[DEBUG] 2019-10-18 16:09:48.572 [LogStash::Runner] runner - pipeline.workers: 4
[DEBUG] 2019-10-18 16:09:48.573 [LogStash::Runner] runner - pipeline.batch.size: 125
[DEBUG] 2019-10-18 16:09:48.573 [LogStash::Runner] runner - pipeline.batch.delay: 50
[DEBUG] 2019-10-18 16:09:48.573 [LogStash::Runner] runner - pipeline.unsafe_shutdown: false
[DEBUG] 2019-10-18 16:09:48.573 [LogStash::Runner] runner - pipeline.java_execution: true
[DEBUG] 2019-10-18 16:09:48.573 [LogStash::Runner] runner - pipeline.reloadable: true
[DEBUG] 2019-10-18 16:09:48.573 [LogStash::Runner] runner - pipeline.plugin_classloaders: false
[DEBUG] 2019-10-18 16:09:48.573 [LogStash::Runner] runner - path.plugins: []
[DEBUG] 2019-10-18 16:09:48.573 [LogStash::Runner] runner - config.debug: false
[DEBUG] 2019-10-18 16:09:48.573 [LogStash::Runner] runner - *log.level: "debug" (default: "info")
[DEBUG] 2019-10-18 16:09:48.573 [LogStash::Runner] runner - version: false
[DEBUG] 2019-10-18 16:09:48.573 [LogStash::Runner] runner - help: false
[DEBUG] 2019-10-18 16:09:48.573 [LogStash::Runner] runner - log.format: "plain"
[DEBUG] 2019-10-18 16:09:48.573 [LogStash::Runner] runner - http.host: "127.0.0.1"
[DEBUG] 2019-10-18 16:09:48.573 [LogStash::Runner] runner - http.port: 9600..9700
[DEBUG] 2019-10-18 16:09:48.573 [LogStash::Runner] runner - http.environment: "production"
[DEBUG] 2019-10-18 16:09:48.573 [LogStash::Runner] runner - queue.type: "memory"
[DEBUG] 2019-10-18 16:09:48.573 [LogStash::Runner] runner - queue.drain: false
[DEBUG] 2019-10-18 16:09:48.574 [LogStash::Runner] runner - queue.page_capacity: 67108864
[DEBUG] 2019-10-18 16:09:48.574 [LogStash::Runner] runner - queue.max_bytes: 1073741824
[DEBUG] 2019-10-18 16:09:48.574 [LogStash::Runner] runner - queue.max_events: 0
[DEBUG] 2019-10-18 16:09:48.574 [LogStash::Runner] runner - queue.checkpoint.acks: 1024
[DEBUG] 2019-10-18 16:09:48.574 [LogStash::Runner] runner - queue.checkpoint.writes: 1024
[DEBUG] 2019-10-18 16:09:48.574 [LogStash::Runner] runner - queue.checkpoint.interval: 1000
[DEBUG] 2019-10-18 16:09:48.574 [LogStash::Runner] runner - queue.checkpoint.retry: false
[DEBUG] 2019-10-18 16:09:48.574 [LogStash::Runner] runner - dead_letter_queue.enable: false
[DEBUG] 2019-10-18 16:09:48.574 [LogStash::Runner] runner - dead_letter_queue.max_bytes: 1073741824
[DEBUG] 2019-10-18 16:09:48.574 [LogStash::Runner] runner - slowlog.threshold.warn: -1
[DEBUG] 2019-10-18 16:09:48.574 [LogStash::Runner] runner - slowlog.threshold.info: -1
[DEBUG] 2019-10-18 16:09:48.574 [LogStash::Runner] runner - slowlog.threshold.debug: -1
[DEBUG] 2019-10-18 16:09:48.574 [LogStash::Runner] runner - slowlog.threshold.trace: -1
[DEBUG] 2019-10-18 16:09:48.574 [LogStash::Runner] runner - keystore.classname: "org.logstash.secret.store.backend.JavaKeyStore"
[DEBUG] 2019-10-18 16:09:48.574 [LogStash::Runner] runner - keystore.file: "/usr/share/logstash/config/logstash.keystore"
[DEBUG] 2019-10-18 16:09:48.574 [LogStash::Runner] runner - path.queue: "/usr/share/logstash/data/queue"
[DEBUG] 2019-10-18 16:09:48.574 [LogStash::Runner] runner - path.dead_letter_queue: "/usr/share/logstash/data/dead_letter_queue"
[DEBUG] 2019-10-18 16:09:48.574 [LogStash::Runner] runner - path.settings: "/usr/share/logstash/config"
[DEBUG] 2019-10-18 16:09:48.575 [LogStash::Runner] runner - path.logs: "/usr/share/logstash/logs"
[DEBUG] 2019-10-18 16:09:48.575 [LogStash::Runner] runner - xpack.management.enabled: false
[DEBUG] 2019-10-18 16:09:48.575 [LogStash::Runner] runner - xpack.management.logstash.poll_interval: 5000000000
[DEBUG] 2019-10-18 16:09:48.575 [LogStash::Runner] runner - xpack.management.pipeline.id: ["main"]
[DEBUG] 2019-10-18 16:09:48.575 [LogStash::Runner] runner - xpack.management.elasticsearch.username: "logstash_system"
[DEBUG] 2019-10-18 16:09:48.575 [LogStash::Runner] runner - xpack.management.elasticsearch.hosts: ["https://localhost:9200"]
[DEBUG] 2019-10-18 16:09:48.575 [LogStash::Runner] runner - xpack.management.elasticsearch.ssl.verification_mode: "certificate"
[DEBUG] 2019-10-18 16:09:48.575 [LogStash::Runner] runner - xpack.management.elasticsearch.sniffing: false
[DEBUG] 2019-10-18 16:09:48.575 [LogStash::Runner] runner - xpack.monitoring.enabled: false
[DEBUG] 2019-10-18 16:09:48.575 [LogStash::Runner] runner - xpack.monitoring.elasticsearch.hosts: ["http://localhost:9200"]
[DEBUG] 2019-10-18 16:09:48.575 [LogStash::Runner] runner - xpack.monitoring.collection.interval: 10000000000
[DEBUG] 2019-10-18 16:09:48.575 [LogStash::Runner] runner - xpack.monitoring.collection.timeout_interval: 600000000000
[DEBUG] 2019-10-18 16:09:48.575 [LogStash::Runner] runner - xpack.monitoring.elasticsearch.username: "logstash_system"
[DEBUG] 2019-10-18 16:09:48.575 [LogStash::Runner] runner - xpack.monitoring.elasticsearch.ssl.verification_mode: "certificate"
[DEBUG] 2019-10-18 16:09:48.575 [LogStash::Runner] runner - xpack.monitoring.elasticsearch.sniffing: false
[DEBUG] 2019-10-18 16:09:48.575 [LogStash::Runner] runner - xpack.monitoring.collection.pipeline.details.enabled: true
[DEBUG] 2019-10-18 16:09:48.575 [LogStash::Runner] runner - xpack.monitoring.collection.config.enabled: true
[DEBUG] 2019-10-18 16:09:48.576 [LogStash::Runner] runner - node.uuid: ""
[DEBUG] 2019-10-18 16:09:48.576 [LogStash::Runner] runner - --------------- Logstash Settings -------------------
[WARN ] 2019-10-18 16:09:48.617 [LogStash::Runner] multilocal - Ignoring the 'pipelines.yml' file because modules or command line options are specified
[INFO ] 2019-10-18 16:09:48.626 [LogStash::Runner] runner - Starting Logstash {"logstash.version"=>"7.3.2"}
[DEBUG] 2019-10-18 16:09:48.666 [LogStash::Runner] agent - Setting up metric collection
[DEBUG] 2019-10-18 16:09:48.731 [LogStash::Runner] os - Starting {:polling_interval=>5, :polling_timeout=>120}
[DEBUG] 2019-10-18 16:09:48.990 [LogStash::Runner] jvm - Starting {:polling_interval=>5, :polling_timeout=>120}
[DEBUG] 2019-10-18 16:09:49.086 [LogStash::Runner] jvm - collector name {:name=>"ParNew"}
[DEBUG] 2019-10-18 16:09:49.090 [LogStash::Runner] jvm - collector name {:name=>"ConcurrentMarkSweep"}
[DEBUG] 2019-10-18 16:09:49.107 [LogStash::Runner] persistentqueue - Starting {:polling_interval=>5, :polling_timeout=>120}
[DEBUG] 2019-10-18 16:09:49.117 [LogStash::Runner] deadletterqueue - Starting {:polling_interval=>5, :polling_timeout=>120}
[DEBUG] 2019-10-18 16:09:49.173 [Ruby-0-Thread-1: /usr/share/logstash/lib/bootstrap/environment.rb:6] agent - Starting agent
[DEBUG] 2019-10-18 16:09:49.442 [Ruby-0-Thread-1: /usr/share/logstash/lib/bootstrap/environment.rb:6] agent - Converging pipelines state {:actions_count=>1}
[DEBUG] 2019-10-18 16:09:49.478 [Converge PipelineAction::Create<main>] agent - Executing action {:action=>LogStash::PipelineAction::Create/pipeline_id:main}
[DEBUG] 2019-10-18 16:09:50.284 [Converge PipelineAction::Create<main>] Reflections - going to scan these urls:
jar:file:/usr/share/logstash/logstash-core/lib/jars/logstash-core.jar!/
[INFO ] 2019-10-18 16:09:50.341 [Converge PipelineAction::Create<main>] Reflections - Reflections took 55 ms to scan 1 urls, producing 19 keys and 39 values 
[DEBUG] 2019-10-18 16:09:50.367 [Converge PipelineAction::Create<main>] Reflections - expanded subtype co.elastic.logstash.api.Plugin -> co.elastic.logstash.api.Codec
[DEBUG] 2019-10-18 16:09:50.367 [Converge PipelineAction::Create<main>] Reflections - expanded subtype co.elastic.logstash.api.Plugin -> co.elastic.logstash.api.Input
[DEBUG] 2019-10-18 16:09:50.367 [Converge PipelineAction::Create<main>] Reflections - expanded subtype org.jruby.RubyBasicObject -> org.jruby.RubyObject
[DEBUG] 2019-10-18 16:09:50.367 [Converge PipelineAction::Create<main>] Reflections - expanded subtype java.lang.Cloneable -> org.jruby.RubyBasicObject
[DEBUG] 2019-10-18 16:09:50.367 [Converge PipelineAction::Create<main>] Reflections - expanded subtype org.jruby.runtime.builtin.IRubyObject -> org.jruby.RubyBasicObject
[DEBUG] 2019-10-18 16:09:50.367 [Converge PipelineAction::Create<main>] Reflections - expanded subtype java.io.Serializable -> org.jruby.RubyBasicObject
[DEBUG] 2019-10-18 16:09:50.367 [Converge PipelineAction::Create<main>] Reflections - expanded subtype java.lang.Comparable -> org.jruby.RubyBasicObject
[DEBUG] 2019-10-18 16:09:50.367 [Converge PipelineAction::Create<main>] Reflections - expanded subtype org.jruby.runtime.marshal.CoreObjectType -> org.jruby.RubyBasicObject
[DEBUG] 2019-10-18 16:09:50.368 [Converge PipelineAction::Create<main>] Reflections - expanded subtype org.jruby.runtime.builtin.InstanceVariables -> org.jruby.RubyBasicObject
[DEBUG] 2019-10-18 16:09:50.368 [Converge PipelineAction::Create<main>] Reflections - expanded subtype org.jruby.runtime.builtin.InternalVariables -> org.jruby.RubyBasicObject
[DEBUG] 2019-10-18 16:09:50.368 [Converge PipelineAction::Create<main>] Reflections - expanded subtype co.elastic.logstash.api.Plugin -> co.elastic.logstash.api.Output
[DEBUG] 2019-10-18 16:09:50.368 [Converge PipelineAction::Create<main>] Reflections - expanded subtype co.elastic.logstash.api.Metric -> co.elastic.logstash.api.NamespacedMetric
[DEBUG] 2019-10-18 16:09:50.368 [Converge PipelineAction::Create<main>] Reflections - expanded subtype java.security.SecureClassLoader -> java.net.URLClassLoader
[DEBUG] 2019-10-18 16:09:50.368 [Converge PipelineAction::Create<main>] Reflections - expanded subtype java.lang.ClassLoader -> java.security.SecureClassLoader
[DEBUG] 2019-10-18 16:09:50.368 [Converge PipelineAction::Create<main>] Reflections - expanded subtype java.io.Closeable -> java.net.URLClassLoader
[DEBUG] 2019-10-18 16:09:50.368 [Converge PipelineAction::Create<main>] Reflections - expanded subtype java.lang.AutoCloseable -> java.io.Closeable
[DEBUG] 2019-10-18 16:09:50.368 [Converge PipelineAction::Create<main>] Reflections - expanded subtype java.lang.Comparable -> java.lang.Enum
[DEBUG] 2019-10-18 16:09:50.368 [Converge PipelineAction::Create<main>] Reflections - expanded subtype java.io.Serializable -> java.lang.Enum
[DEBUG] 2019-10-18 16:09:50.369 [Converge PipelineAction::Create<main>] Reflections - expanded subtype co.elastic.logstash.api.Plugin -> co.elastic.logstash.api.Filter
[DEBUG] 2019-10-18 16:09:50.420 [Converge PipelineAction::Create<main>] registry - On demand adding plugin to the registry {:name=>"udp", :type=>"input", :class=>LogStash::Inputs::Udp}
[DEBUG] 2019-10-18 16:09:50.572 [Converge PipelineAction::Create<main>] registry - On demand adding plugin to the registry {:name=>"plain", :type=>"codec", :class=>LogStash::Codecs::Plain}
[DEBUG] 2019-10-18 16:09:50.600 [Converge PipelineAction::Create<main>] plain - config LogStash::Codecs::Plain/@id = "plain_6b41a477-ee7b-4cd7-b359-f63ee74b0d57"
[DEBUG] 2019-10-18 16:09:50.601 [Converge PipelineAction::Create<main>] plain - config LogStash::Codecs::Plain/@enable_metric = true
[DEBUG] 2019-10-18 16:09:50.601 [Converge PipelineAction::Create<main>] plain - config LogStash::Codecs::Plain/@charset = "UTF-8"
[DEBUG] 2019-10-18 16:09:50.631 [Converge PipelineAction::Create<main>] udp - config LogStash::Inputs::Udp/@port = 6344
[DEBUG] 2019-10-18 16:09:50.631 [Converge PipelineAction::Create<main>] udp - config LogStash::Inputs::Udp/@id = "04d720756f6a45b729121f62ae4b0b811062f5cccd05b3de4e14cd13754cb128"
[DEBUG] 2019-10-18 16:09:50.631 [Converge PipelineAction::Create<main>] udp - config LogStash::Inputs::Udp/@enable_metric = true
[DEBUG] 2019-10-18 16:09:50.643 [Converge PipelineAction::Create<main>] udp - config LogStash::Inputs::Udp/@codec = <LogStash::Codecs::Plain id=>"plain_6b41a477-ee7b-4cd7-b359-f63ee74b0d57", enable_metric=>true, charset=>"UTF-8">
[DEBUG] 2019-10-18 16:09:50.644 [Converge PipelineAction::Create<main>] udp - config LogStash::Inputs::Udp/@add_field = {}
[DEBUG] 2019-10-18 16:09:50.644 [Converge PipelineAction::Create<main>] udp - config LogStash::Inputs::Udp/@host = "0.0.0.0"
[DEBUG] 2019-10-18 16:09:50.644 [Converge PipelineAction::Create<main>] udp - config LogStash::Inputs::Udp/@buffer_size = 65536
[DEBUG] 2019-10-18 16:09:50.644 [Converge PipelineAction::Create<main>] udp - config LogStash::Inputs::Udp/@workers = 2
[DEBUG] 2019-10-18 16:09:50.644 [Converge PipelineAction::Create<main>] udp - config LogStash::Inputs::Udp/@queue_size = 2000
[DEBUG] 2019-10-18 16:09:50.644 [Converge PipelineAction::Create<main>] udp - config LogStash::Inputs::Udp/@source_ip_fieldname = "host"
[DEBUG] 2019-10-18 16:09:50.686 [Converge PipelineAction::Create<main>] registry - On demand adding plugin to the registry {:name=>"stdout", :type=>"output", :class=>LogStash::Outputs::Stdout}
[DEBUG] 2019-10-18 16:09:50.700 [Converge PipelineAction::Create<main>] registry - On demand adding plugin to the registry {:name=>"rubydebug", :type=>"codec", :class=>LogStash::Codecs::RubyDebug}
[DEBUG] 2019-10-18 16:09:50.706 [Converge PipelineAction::Create<main>] rubydebug - config LogStash::Codecs::RubyDebug/@id = "rubydebug_6e27ed82-be8a-40fc-8ab5-dd4892abc95a"
[DEBUG] 2019-10-18 16:09:50.706 [Converge PipelineAction::Create<main>] rubydebug - config LogStash::Codecs::RubyDebug/@enable_metric = true
[DEBUG] 2019-10-18 16:09:50.706 [Converge PipelineAction::Create<main>] rubydebug - config LogStash::Codecs::RubyDebug/@metadata = false
[DEBUG] 2019-10-18 16:09:51.384 [Converge PipelineAction::Create<main>] stdout - config LogStash::Outputs::Stdout/@codec = <LogStash::Codecs::RubyDebug id=>"rubydebug_6e27ed82-be8a-40fc-8ab5-dd4892abc95a", enable_metric=>true, metadata=>false>
[DEBUG] 2019-10-18 16:09:51.384 [Converge PipelineAction::Create<main>] stdout - config LogStash::Outputs::Stdout/@id = "716b2c9a9cd4e1ecb949d18808c09bbfcf46945e5621c68584887d7739024c8a"
[DEBUG] 2019-10-18 16:09:51.384 [Converge PipelineAction::Create<main>] stdout - config LogStash::Outputs::Stdout/@enable_metric = true
[DEBUG] 2019-10-18 16:09:51.385 [Converge PipelineAction::Create<main>] stdout - config LogStash::Outputs::Stdout/@workers = 1
[DEBUG] 2019-10-18 16:09:51.450 [Converge PipelineAction::Create<main>] javapipeline - Starting pipeline {:pipeline_id=>"main"}
[WARN ] 2019-10-18 16:09:51.664 [[main]-pipeline-manager] LazyDelegatingGauge - A gauge metric of an unknown type (org.jruby.RubyArray) has been create for key: cluster_uuids. This may result in invalid serialization.  It is recommended to log an issue to the responsible developer/development team.
[INFO ] 2019-10-18 16:09:51.670 [[main]-pipeline-manager] javapipeline - Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, :thread=>"#<Thread:0x204defec run>"}
[INFO ] 2019-10-18 16:09:51.731 [[main]-pipeline-manager] javapipeline - Pipeline started {"pipeline.id"=>"main"}
[DEBUG] 2019-10-18 16:09:51.757 [Converge PipelineAction::Create<main>] javapipeline - Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x204defec run>"}
[DEBUG] 2019-10-18 16:09:51.831 [logstash-pipeline-flush] PeriodicFlush - Pushing flush onto pipeline.
[DEBUG] 2019-10-18 16:09:51.997 [[main]<udp] udp - Starting UDP worker thread {:worker=>1}
[INFO ] 2019-10-18 16:09:52.058 [Ruby-0-Thread-1: /usr/share/logstash/lib/bootstrap/environment.rb:6] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[DEBUG] 2019-10-18 16:09:52.188 [Api Webserver] agent - Starting puma
[DEBUG] 2019-10-18 16:09:52.246 [Api Webserver] agent - Trying to start WebServer {:port=>9600}
[DEBUG] 2019-10-18 16:09:52.309 [[main]<udp] plain - config LogStash::Codecs::Plain/@id = "plain_6b41a477-ee7b-4cd7-b359-f63ee74b0d57"
[DEBUG] 2019-10-18 16:09:52.310 [[main]<udp] plain - config LogStash::Codecs::Plain/@enable_metric = true
[DEBUG] 2019-10-18 16:09:52.310 [[main]<udp] plain - config LogStash::Codecs::Plain/@charset = "UTF-8"
[DEBUG] 2019-10-18 16:09:52.313 [[main]<udp] udp - Starting UDP worker thread {:worker=>2}
[DEBUG] 2019-10-18 16:09:52.322 [[main]<udp] plain - config LogStash::Codecs::Plain/@id = "plain_6b41a477-ee7b-4cd7-b359-f63ee74b0d57"
[DEBUG] 2019-10-18 16:09:52.323 [[main]<udp] plain - config LogStash::Codecs::Plain/@enable_metric = true
[DEBUG] 2019-10-18 16:09:52.323 [[main]<udp] plain - config LogStash::Codecs::Plain/@charset = "UTF-8"
[INFO ] 2019-10-18 16:09:52.384 [[main]<udp] udp - Starting UDP listener {:address=>"0.0.0.0:6344"}
[DEBUG] 2019-10-18 16:09:52.518 [Api Webserver] service - [api-service] start
[INFO ] 2019-10-18 16:09:52.600 [[main]<udp] udp - UDP listener started {:address=>"0.0.0.0:6344", :receive_buffer_bytes=>"106496", :queue_size=>"2000"}
[INFO ] 2019-10-18 16:09:53.166 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
[DEBUG] 2019-10-18 16:09:51.811 [[main]>worker1] CompiledPipeline - Compiled output
 P[output-stdout{"codec"=>"rubydebug"}|[str]pipeline:1:10:```
stdout { codec => rubydebug }
```] 
 into 
 org.logstash.config.ir.compiler.ComputeStepSyntaxElement@c96a6eaa
[DEBUG] 2019-10-18 16:09:51.812 [[main]>worker0] CompiledPipeline - Compiled output
 P[output-stdout{"codec"=>"rubydebug"}|[str]pipeline:1:10:```
stdout { codec => rubydebug }
```] 
 into 
 org.logstash.config.ir.compiler.ComputeStepSyntaxElement@c96a6eaa
[DEBUG] 2019-10-18 16:09:51.811 [[main]>worker3] CompiledPipeline - Compiled output
 P[output-stdout{"codec"=>"rubydebug"}|[str]pipeline:1:10:```
stdout { codec => rubydebug }
```] 
 into 
 org.logstash.config.ir.compiler.ComputeStepSyntaxElement@c96a6eaa
[DEBUG] 2019-10-18 16:09:51.812 [[main]>worker2] CompiledPipeline - Compiled output
 P[output-stdout{"codec"=>"rubydebug"}|[str]pipeline:1:10:```
stdout { codec => rubydebug }
```] 
 into 
 org.logstash.config.ir.compiler.ComputeStepSyntaxElement@c96a6eaa
[DEBUG] 2019-10-18 16:09:54.150 [pool-3-thread-1] jvm - collector name {:name=>"ParNew"}
[DEBUG] 2019-10-18 16:09:54.153 [pool-3-thread-1] jvm - collector name {:name=>"ConcurrentMarkSweep"}
[DEBUG] 2019-10-18 16:09:56.827 [logstash-pipeline-flush] PeriodicFlush - Pushing flush onto pipeline.
[DEBUG] 2019-10-18 16:09:59.167 [pool-3-thread-1] jvm - collector name {:name=>"ParNew"}
[DEBUG] 2019-10-18 16:09:59.168 [pool-3-thread-1] jvm - collector name {:name=>"ConcurrentMarkSweep"}
[DEBUG] 2019-10-18 16:10:01.826 [logstash-pipeline-flush] PeriodicFlush - Pushing flush onto pipeline.
[DEBUG] 2019-10-18 16:10:04.177 [pool-3-thread-1] jvm - collector name {:name=>"ParNew"}
[DEBUG] 2019-10-18 16:10:04.177 [pool-3-thread-1] jvm - collector name {:name=>"ConcurrentMarkSweep"}
[DEBUG] 2019-10-18 16:10:06.826 [logstash-pipeline-flush] PeriodicFlush - Pushing flush onto pipeline.
[DEBUG] 2019-10-18 16:10:09.191 [pool-3-thread-1] jvm - collector name {:name=>"ParNew"}
[DEBUG] 2019-10-18 16:10:09.192 [pool-3-thread-1] jvm - collector name {:name=>"ConcurrentMarkSweep"}
[DEBUG] 2019-10-18 16:10:11.826 [logstash-pipeline-flush] PeriodicFlush - Pushing flush onto pipeline.
[DEBUG] 2019-10-18 16:10:14.201 [pool-3-thread-1] jvm - collector name {:name=>"ParNew"}
[DEBUG] 2019-10-18 16:10:14.202 [pool-3-thread-1] jvm - collector name {:name=>"ConcurrentMarkSweep"}
[DEBUG] 2019-10-18 16:10:16.826 [logstash-pipeline-flush] PeriodicFlush - Pushing flush onto pipeline.
[DEBUG] 2019-10-18 16:10:19.211 [pool-3-thread-1] jvm - collector name {:name=>"ParNew"}
[DEBUG] 2019-10-18 16:10:19.212 [pool-3-thread-1] jvm - collector name {:name=>"ConcurrentMarkSweep"}
[DEBUG] 2019-10-18 16:10:21.826 [logstash-pipeline-flush] PeriodicFlush - Pushing flush onto pipeline.
[DEBUG] 2019-10-18 16:10:24.221 [pool-3-thread-1] jvm - collector name {:name=>"ParNew"}
[DEBUG] 2019-10-18 16:10:24.222 [pool-3-thread-1] jvm - collector name {:name=>"ConcurrentMarkSweep"}
[DEBUG] 2019-10-18 16:10:26.826 [logstash-pipeline-flush] PeriodicFlush - Pushing flush onto pipeline.
[DEBUG] 2019-10-18 16:10:29.232 [pool-3-thread-1] jvm - collector name {:name=>"ParNew"}
[DEBUG] 2019-10-18 16:10:29.233 [pool-3-thread-1] jvm - collector name {:name=>"ConcurrentMarkSweep"}
[DEBUG] 2019-10-18 16:10:31.826 [logstash-pipeline-flush] PeriodicFlush - Pushing flush onto pipeline.
[DEBUG] 2019-10-18 16:10:34.241 [pool-3-thread-1] jvm - collector name {:name=>"ParNew"}
[DEBUG] 2019-10-18 16:10:34.242 [pool-3-thread-1] jvm - collector name {:name=>"ConcurrentMarkSweep"}
[DEBUG] 2019-10-18 16:10:36.826 [logstash-pipeline-flush] PeriodicFlush - Pushing flush onto pipeline.
[DEBUG] 2019-10-18 16:10:39.250 [pool-3-thread-1] jvm - collector name {:name=>"ParNew"}
[DEBUG] 2019-10-18 16:10:39.250 [pool-3-thread-1] jvm - collector name {:name=>"ConcurrentMarkSweep"}
[DEBUG] 2019-10-18 16:10:41.826 [logstash-pipeline-flush] PeriodicFlush - Pushing flush onto pipeline.
...

I also tried installing the hsflowd agent on the server, which is now generating sFlow traffic. In this case, the test is only to check whether we see sFlow on port 6343 through Logstash, since this agent is not currently supported by logstash-codec-sflow.
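For context, the hsflowd test only needs the agent pointed at the local collector; a minimal /etc/hsflowd.conf would look roughly like this (a sketch, not the exact configuration used here):

sflow {
  polling = 20
  sampling = 400
  collector { ip = 127.0.0.1 udpport = 6343 }
}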

The result confuses me even more, since I'm able to see the packets through logstash:

[root@xxx ~]# /usr/share/logstash/bin/logstash -e 'input { udp { port => 6343 }}' --debug 
...
[INFO ] 2019-10-18 16:15:36.509 [Ruby-0-Thread-1: /usr/share/logstash/lib/bootstrap/environment.rb:6] agent - Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[INFO ] 2019-10-18 16:15:36.562 [[main]<udp] udp - Starting UDP listener {:address=>"0.0.0.0:6343"}
[DEBUG] 2019-10-18 16:15:36.662 [Api Webserver] agent - Starting puma
[INFO ] 2019-10-18 16:15:36.701 [[main]<udp] udp - UDP listener started {:address=>"0.0.0.0:6343", :receive_buffer_bytes=>"106496", :queue_size=>"2000"}
[DEBUG] 2019-10-18 16:15:36.725 [Api Webserver] agent - Trying to start WebServer {:port=>9600}
[DEBUG] 2019-10-18 16:15:36.831 [Api Webserver] service - [api-service] start
[INFO ] 2019-10-18 16:15:37.244 [Api Webserver] agent - Successfully started Logstash API endpoint {:port=>9600}
[DEBUG] 2019-10-18 16:15:36.318 [[main]>worker1] CompiledPipeline - Compiled output
...
[DEBUG] 2019-10-18 16:15:38.826 [pool-3-thread-2] jvm - collector name {:name=>"ParNew"}
[DEBUG] 2019-10-18 16:15:38.829 [pool-3-thread-2] jvm - collector name {:name=>"ConcurrentMarkSweep"}
[DEBUG] 2019-10-18 16:15:41.376 [logstash-pipeline-flush] PeriodicFlush - Pushing flush onto pipeline.
[WARN ] 2019-10-18 16:15:43.737 [<udp.1] plain - Received an event that has a different character encoding than you configured. {:text=>"\\u0000\\u0000\\u0000\\u0005\\u0000\\u0000\\u0000\\u0001\\n\\u0001P:\\u0000\\u0001\\x86\\xA0\\u0000\\u0000\\u00021\\u00013\\xAB.\\u0000\\u0000\\u0000\\u
...
[DEBUG] 2019-10-18 16:15:43.858 [pool-3-thread-2] jvm - collector name {:name=>"ParNew"}
[DEBUG] 2019-10-18 16:15:43.859 [pool-3-thread-2] jvm - collector name {:name=>"ConcurrentMarkSweep"}
/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/awesome_print-1.7.0/lib/awesome_print/formatters/base_formatter.rb:31: warning: constant ::Fixnum is deprecated
{
      "@version" => "1",
    "@timestamp" => 2019-10-18T14:15:43.761Z,
          "host" => "10.1.85.21",
       "message" => "\\u0000\\u0000\\u0000\\u0005\\u0000\\u0000\\u0000\\u0001\\n\\u0001P:\\u0000\\u0001\\x86\\xA0\\u0000\\u0000\\u00021\\u00013\\xAB.\\u0000\\u0000\\u0000\\u0
...
}
[DEBUG] 2019-10-18 16:15:46.376 [logstash-pipeline-flush] PeriodicFlush - Pushing flush onto pipeline.

This time we are able to see packets received on port 6343.

[root@xxx ~]# tcpdump -vvv -i any -s 0 port 6343
...
16:15:44.105817 IP (tos 0x0, ttl 64, id 38404, offset 0, flags [DF], proto UDP (17), length 772)
    elastic-netflow.56817 > elastic-netflow.sflow: [bad udp cksum 0xc12d -> 0x0c08!] sFlowv5, IPv4 agent elastic-netflow, agent-id 100000, seqnum 516, uptime 18543802, samples 1, length 744
        counter sample (2), length 708, seqnum 516, type 2, idx 1, records 10
            enterprise 0, Unknown (2001) length 36
            enterprise 0, Unknown (2010) length 28
            ...

I really don't understand what is happening...

From your tests, and from testing without the codec, I can see that this is not an issue with the codec... but what is the problem then? Why am I able to see NetFlow and hsflowd traffic, but not the NEXUS sFlow?

Thanks in advance!

sgarcesv commented 4 years ago

Hi Rob, I found out what was happening!

It turned out to be a network issue... I looked at the server's IP routing table and found a static route for the NetFlow device. Bingo!

I've added another static route for the sFlow device, and now I can see the messages coming through Logstash and also in Kibana!
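For anyone hitting the same symptom, the relevant check and fix look roughly like this; the subnet and gateway below are placeholders, not the actual values:

ip route get 10.5.0.74                  # show which route and interface handle traffic to/from the sFlow exporter
ip route add 10.5.0.0/24 via 192.0.2.1  # add a static route for the exporter's subnet (placeholder gateway)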

Thanks for the nudge toward solving this, and also thanks for providing the ElastiFlow solution.

BR

robcowart commented 4 years ago

Interesting. It probably would have taken a while for me to find that. Glad you got it working.