path-network / logstash-codec-sflow

Logstash codec plugin to decode sflow

Sflow Codec Throwing an error #2

Closed absmith82 closed 7 years ago

absmith82 commented 8 years ago

When I try to start Logstash with the sflow codec, I get an error. I installed the codec with the logstash-plugin install command, which seems to pull from the rubygems repo; it looked like that is linked to this GitHub repo, so let me know if I am wrong.

This is the error I am getting:

{:timestamp=>"2016-07-27T12:41:17.315000-0600", :message=>"fetched an invalid config", :config=>"input { \n\tudp {\n\t\tport => 6343 \n\t\tcodec => sflow \n\t}\n}\n\noutput {\n\n\telasticsearch {\n\n\t\tindex => \"sflow-%{+YYYY.MM.dd}\"\n\n\t}\n}\n\n\n", :reason=>"field 'type' is a reserved name in EthernetFrameData", :level=>:error}

This is my config:

input { udp { port => 6343 codec => sflow type => sflow}}

output { file { path => "/tmp/logstashsflow.out"}}

ashangit commented 8 years ago

Can you please provide the Logstash version you are using and a list of all installed plugins with their versions? That will help me reproduce the issue.

absmith82 commented 8 years ago

logstash 2.3.4

logstash-codec-collectd (2.0.4) logstash-codec-dots (2.0.4) logstash-codec-edn (2.0.4) logstash-codec-edn_lines (2.0.4) logstash-codec-es_bulk (2.0.4) logstash-codec-fluent (2.0.4) logstash-codec-graphite (2.0.4) logstash-codec-json (2.1.4) logstash-codec-json_lines (2.1.3) logstash-codec-line (2.1.2) logstash-codec-msgpack (2.0.4) logstash-codec-multiline (2.0.11) logstash-codec-netflow (2.1.1) logstash-codec-oldlogstashjson (2.0.4) logstash-codec-plain (2.0.4) logstash-codec-rubydebug (2.0.7) logstash-codec-sflow (1.0.0) logstash-filter-anonymize (2.0.4) logstash-filter-checksum (2.0.4) logstash-filter-clone (2.0.6) logstash-filter-csv (2.1.3) logstash-filter-date (2.1.6) logstash-filter-dns (2.1.3) logstash-filter-drop (2.0.4) logstash-filter-fingerprint (2.0.5) logstash-filter-geoip (2.0.7) logstash-filter-grok (2.0.5) logstash-filter-json (2.0.6) logstash-filter-kv (2.1.0) logstash-filter-metrics (3.0.2) logstash-filter-multiline (2.0.5) logstash-filter-mutate (2.0.6) logstash-filter-ruby (2.0.5) logstash-filter-sleep (2.0.4) logstash-filter-split (2.0.5) logstash-filter-syslog_pri (2.0.4) logstash-filter-throttle (2.0.4) logstash-filter-urldecode (2.0.4) logstash-filter-useragent (2.0.8) logstash-filter-uuid (2.0.5) logstash-filter-xml (2.2.0) logstash-input-beats (2.2.9) logstash-input-couchdb_changes (2.0.4) logstash-input-elasticsearch (2.0.5) logstash-input-eventlog (3.0.3) logstash-input-exec (2.0.6) logstash-input-file (2.2.5) logstash-input-ganglia (2.0.6) logstash-input-gelf (2.0.7) logstash-input-generator (2.0.4) logstash-input-graphite (2.0.7) logstash-input-heartbeat (2.0.4) logstash-input-http (2.2.3) logstash-input-http_poller (2.0.6) logstash-input-imap (2.0.5) logstash-input-irc (2.0.5) logstash-input-jdbc (3.0.2) logstash-input-kafka (2.0.8) logstash-input-log4j (2.0.7) logstash-input-lumberjack (2.0.7) logstash-input-pipe (2.0.4) logstash-input-rabbitmq (4.1.0) logstash-input-redis (2.0.6) logstash-input-s3 (2.0.6) logstash-input-snmptrap 
(2.0.4) logstash-input-sqs (2.0.5) logstash-input-stdin (2.0.4) logstash-input-syslog (2.0.5) logstash-input-tcp (3.0.6) logstash-input-twitter (2.2.2) logstash-input-udp (2.0.5) logstash-input-unix (2.0.6) logstash-input-xmpp (2.0.5) logstash-input-zeromq (2.0.4) logstash-output-cloudwatch (2.0.4) logstash-output-csv (2.0.5) logstash-output-elasticsearch (2.7.1) logstash-output-email (3.0.5) logstash-output-exec (2.0.5) logstash-output-file (2.2.5) logstash-output-ganglia (2.0.4) logstash-output-gelf (2.0.5) logstash-output-graphite (2.0.5) logstash-output-hipchat (3.0.4) logstash-output-http (2.1.3) logstash-output-irc (2.0.4) logstash-output-juggernaut (2.0.4) logstash-output-kafka (2.0.5) logstash-output-lumberjack (2.0.6) logstash-output-nagios (2.0.4) logstash-output-nagios_nsca (2.0.5) logstash-output-null (2.0.4) logstash-output-opentsdb (2.0.4) logstash-output-pagerduty (2.0.4) logstash-output-pipe (2.0.4) logstash-output-rabbitmq (3.1.0) logstash-output-redis (2.0.5) logstash-output-s3 (2.0.7) logstash-output-sns (3.0.4) logstash-output-sqs (2.0.5) logstash-output-statsd (2.0.7) logstash-output-stdout (2.0.6) logstash-output-tcp (2.0.4) logstash-output-udp (2.0.4) logstash-output-xmpp (2.0.4) logstash-output-zeromq (2.1.0) logstash-patterns-core (2.0.5)

absmith82 commented 8 years ago

I think I fixed it. In flow_record.rb, both the EthernetFrameData class and the IPV4 class have a field named type,

which seems to be reserved in newer versions of Logstash.

I have replaced them with ethernet_type and ipv4_type in that file and it is running. I'm going to keep running the tests to make sure it is working.
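The workaround amounts to renaming the colliding fields. As a rough illustration (not the actual flow_record.rb, which uses BinData record classes; the class name IPV4Data and the helper below are hypothetical), the rename can be thought of as remapping reserved keys in the decoded record:

```ruby
# Hypothetical sketch: rename keys that collide with names reserved by
# newer Logstash versions ('type' here), mirroring the manual edit made
# in flow_record.rb.
RESERVED_RENAMES = {
  'EthernetFrameData' => { 'type' => 'ethernet_type' },
  'IPV4Data'          => { 'type' => 'ipv4_type' },
}.freeze

def rename_reserved_fields(record_class, decoded)
  renames = RESERVED_RENAMES.fetch(record_class, {})
  decoded.each_with_object({}) do |(key, value), out|
    out[renames.fetch(key, key)] = value
  end
end

frame = { 'type' => 2048, 'src_mac' => '00:11:22:33:44:55' }
rename_reserved_fields('EthernetFrameData', frame)
# => {"ethernet_type"=>2048, "src_mac"=>"00:11:22:33:44:55"}
```

Any record class without an entry in the map passes through unchanged, so only the two colliding fields are affected.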

absmith82 commented 8 years ago

I am exporting sflow data from Nexus 9300 series switches, and now my Logstash log is filling up with warnings like this (repeated every few seconds):

{:timestamp=>"2016-07-29T12:12:17.303000-0600", :message=>"Unknown sample entreprise 0 - format 4", :level=>:warn}
{:timestamp=>"2016-07-29T12:12:26.158000-0600", :message=>"Unknown sample entreprise 0 - format 4", :level=>:warn}

Nothing is making it into Elasticsearch.

ashangit commented 8 years ago

Thanks for the feedback on the type field now being reserved. As for the log you see: the current release of this plugin only manages flow samples and counter samples. Your equipment is sending an expanded counter sample sflow format, which is not managed. I could take a look at adding it, but I will need at least a Wireshark capture of that sflow data. Let me know if that is possible for you, as the data can be sensitive.

absmith82 commented 8 years ago

I'll have to look and see what I can do as far as setting sflow to only export samples and counters; I basically just enabled sflow with the defaults on the switch, so I'll look into it more.


ashangit commented 8 years ago

Could you please try with this configuration: input { udp { port => 6343 codec => sflow { snmp_interface => false } type => sflow } }

It won't manage expanded counter samples, but other sflow data can be blocked due to the snmp_interface option being true by default.
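Spelled out as a pipeline file, the suggested input block would look like this (outputs omitted; use your own):

```
input {
  udp {
    port  => 6343
    codec => sflow {
      snmp_interface => false
    }
    type => sflow
  }
}
```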

absmith82 commented 8 years ago

Using sflowtool, this is what my packets look like. This is my first time using sflow, so I'm not sure if this is normal or a Cisco implementation detail.

I removed the sample packets because it sounds like you already have that information. If this is normal and you need more info on the packet, let me know and I'll send you an entire span of the samples to see if there is anything different in there.

startDatagram =================================
datagramSourceIP
datagramSize 556
unixSecondsUTC 1469823819
datagramVersion 5
agentSubId 100
agent
packetSequenceNo 5773
sysUpTime 7273000
samplesInPacket (samples go here)
endDatagram =================================

absmith82 commented 8 years ago

I am still getting this kind of error:

{:timestamp=>"2016-07-29T16:50:08.229000-0600", :message=>"Unknown sample entreprise 0 - format 3", :level=>:warn}

Only from my Cisco Nexuses, though.
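For background, each sFlow sample carries an enterprise/format pair in its header, and the warnings above mean the decoder has no handler registered for formats 3 and 4 (the expanded samples). A hypothetical sketch of that dispatch, with invented handler names (the real codec uses BinData choice records keyed the same way):

```ruby
# Hypothetical sketch of sample dispatch by (enterprise, format).
SAMPLE_HANDLERS = {
  [0, 1] => :flow_sample,             # handled by release 1.0.0
  [0, 2] => :counter_sample,          # handled by release 1.0.0
  [0, 3] => :expanded_flow_sample,    # added after this issue
  [0, 4] => :expanded_counter_sample, # added after this issue
}.freeze

def decode_sample(enterprise, format)
  SAMPLE_HANDLERS.fetch([enterprise, format]) do
    # Mirrors the warning seen in the Logstash log, then skips the sample.
    warn "Unknown sample entreprise #{enterprise} - format #{format}"
    nil
  end
end
```

With only the [0, 1] and [0, 2] entries present, every expanded sample from the Nexus falls into the warning branch, which matches the behavior reported above.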

ashangit commented 8 years ago

OK, so your equipment is sending expanded sflow samples (format 3 and 4). On my side I only work with sflow samples from F5 load balancer equipment, which sends "basic" sflow samples (format 1 and 2). Please provide me a Wireshark or tcpdump network trace so I can validate an implementation of those two formats.

absmith82 commented 8 years ago

I can get you those samples; however, since we could be exposing internal network information, I would need somewhere to send them that is not a public forum.


ashangit commented 8 years ago

A new release of this codec is available. It should now also work with expanded samples. Setting the snmp_interface parameter to false is no longer mandatory (it is already false by default). Thanks for your help providing samples.

absmith82 commented 8 years ago

Thanks for looking into this. I upgraded the sflow plugin with logstash-plugin and it is working great so far. You're welcome for the samples; I appreciate your work on this project. There doesn't seem to be any other project that is this mature for sflow.
