logstash-plugins / logstash-codec-netflow

Apache License 2.0
79 stars · 88 forks

Exception in inputworker - Value for year must be in range #157

Open pgastinger opened 6 years ago

pgastinger commented 6 years ago

Hi,

I recently upgraded my IPFIX-ELK installation to version 6.3.2 on "Ubuntu 16.04.5 LTS" and I get the following Logstash error message:

[2018-08-23T06:40:52,762][ERROR][logstash.inputs.udp      ] Exception in inputworker {"exception"=>#<RangeError: Value -292275055 for year must be in the range [-292275054,292278993]>, "backtrace"=>["org/jruby/RubyTime.java:1166:in `at'", "org/logstash/ext/JrubyTimestampExtLibrary.java:188:in `at'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-codec-netflow-3.14.1/lib/logstash/codecs/netflow.rb:379:in `block in decode_ipfix'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/bindata-2.4.3/lib/bindata/struct.rb:169:in `block in each_pair'", "org/jruby/RubyArray.java:1734:in `each'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/bindata-2.4.3/lib/bindata/struct.rb:168:in `each_pair'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-codec-netflow-3.14.1/lib/logstash/codecs/netflow.rb:372:in `block in decode_ipfix'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/bindata-2.4.3/lib/bindata/array.rb:208:in `block in each'", "org/jruby/RubyArray.java:1734:in `each'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/bindata-2.4.3/lib/bindata/array.rb:208:in `each'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-codec-netflow-3.14.1/lib/logstash/codecs/netflow.rb:358:in `decode_ipfix'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-codec-netflow-3.14.1/lib/logstash/codecs/netflow.rb:131:in `block in decode'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/bindata-2.4.3/lib/bindata/array.rb:208:in `block in each'", "org/jruby/RubyArray.java:1734:in `each'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/bindata-2.4.3/lib/bindata/array.rb:208:in `each'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-codec-netflow-3.14.1/lib/logstash/codecs/netflow.rb:130:in `decode'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-udp-3.3.3/lib/logstash/inputs/udp.rb:142:in `inputworker'", 
"/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-udp-3.3.3/lib/logstash/inputs/udp.rb:111:in `block in udp_listener'"]}
[2018-08-23T06:40:52,781][ERROR][logstash.inputs.udp      ] Exception in inputworker {"exception"=>#<RangeError: Value -292275055 for year must be in the range [-292275054,292278993]>, "backtrace"=>["org/jruby/RubyTime.java:1166:in `at'", "org/logstash/ext/JrubyTimestampExtLibrary.java:188:in `at'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-codec-netflow-3.14.1/lib/logstash/codecs/netflow.rb:379:in `block in decode_ipfix'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/bindata-2.4.3/lib/bindata/struct.rb:169:in `block in each_pair'", "org/jruby/RubyArray.java:1734:in `each'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/bindata-2.4.3/lib/bindata/struct.rb:168:in `each_pair'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-codec-netflow-3.14.1/lib/logstash/codecs/netflow.rb:372:in `block in decode_ipfix'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/bindata-2.4.3/lib/bindata/array.rb:208:in `block in each'", "org/jruby/RubyArray.java:1734:in `each'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/bindata-2.4.3/lib/bindata/array.rb:208:in `each'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-codec-netflow-3.14.1/lib/logstash/codecs/netflow.rb:358:in `decode_ipfix'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-codec-netflow-3.14.1/lib/logstash/codecs/netflow.rb:131:in `block in decode'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/bindata-2.4.3/lib/bindata/array.rb:208:in `block in each'", "org/jruby/RubyArray.java:1734:in `each'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/bindata-2.4.3/lib/bindata/array.rb:208:in `each'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-codec-netflow-3.14.1/lib/logstash/codecs/netflow.rb:130:in `decode'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-udp-3.3.3/lib/logstash/inputs/udp.rb:142:in `inputworker'", 
"/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-udp-3.3.3/lib/logstash/inputs/udp.rb:111:in `block in udp_listener'"]}

After this message, Logstash stops inserting any new data into Elasticsearch until I restart the process. I can't even stop it the standard way; I have to kill it:

[2018-08-23T07:26:57,482][ERROR][logstash.shutdownwatcher ] The shutdown process appears to be stalled due to busy or blocked plugins. Check the logs for more information.

Does anybody have any ideas how to fix this? Thanks!

regards, Peter
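The RangeError above is consistent with a garbage 64-bit value being interpreted as an epoch-milliseconds timestamp: a value with the top bit set corresponds to a year of roughly ±292 million, which is exactly the boundary Joda-Time (backing JRuby's `Time`) can represent. A sketch of the arithmetic (the variable names are illustrative, not from the plugin):

```ruby
# Illustrative only: a bogus dateTimeMilliseconds value with the high bit set,
# as might result from decoding a record with the wrong template.
raw_ms  = 2**63                    # 9223372036854775808 ms
seconds = raw_ms / 1_000.0         # the codec calls Timestamp.at(snapshot / 1_000)
approx_year = 1970 + (seconds / (365.2425 * 86_400)).round
puts approx_year                   # on the order of 292 million, outside Joda-Time's range
```

So any 8-byte field whose high bit is set, once misread as a millisecond timestamp, lands outside the range quoted in the error.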

jorritfolmer commented 6 years ago

Can you provide a pcap file with the IPFIX traffic that causes this crash?

pgastinger commented 6 years ago

Unfortunately, the pcap trace is rather large and confidential. If it would help, I can share it privately (approx. 100 MB).

I added a couple of debug outputs to this file: /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-codec-netflow-3.14.1/lib/logstash/codecs/netflow.rb


              begin
                event[@target][k.to_s] = LogStash::Timestamp.at(v.snapshot.to_f / 1_000).to_iso8601
              rescue RangeError
                @logger.error("ERROR - START")
                @logger.error(event.to_s)   # log the partially decoded event
                @logger.error("ERROR - END")
                event[@target][k.to_s] = ""
              end
This dirty hack catches the exception, so Logstash no longer crashes. However, the LogStash::Event appears to be empty for some packets:

[2018-08-27T11:51:02,093][ERROR][logstash.codecs.netflow  ] ERROR - START
[2018-08-27T11:51:02,093][ERROR][logstash.codecs.netflow  ] {}
[2018-08-27T11:51:02,093][ERROR][logstash.codecs.netflow  ] ERROR - END
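A slightly more defensive variant of the hack would validate the millisecond value before handing it to `Time.at`, instead of rescuing afterwards. The helper below is hypothetical (`safe_iso8601` and `MAX_PLAUSIBLE_MS` are not part of the plugin); it just illustrates the bounds check:

```ruby
require 'time'

# Hypothetical helper, not plugin code: reject epoch-millisecond values outside
# a plausible window before converting, so garbage fields can't raise RangeError.
MAX_PLAUSIBLE_MS = Time.utc(2100).to_i * 1_000

def safe_iso8601(ms)
  return nil unless ms.is_a?(Integer) && ms.between?(0, MAX_PLAUSIBLE_MS)
  Time.at(ms / 1_000.0).utc.iso8601(3)
end

puts safe_iso8601(1_535_004_052_762)   # a sane 2018-08-23 timestamp
puts safe_iso8601(2**63).inspect       # top-bit-set garbage -> nil
```

The cutoff year is arbitrary; the point is that a sanity check turns a worker-killing exception into a skipped field.
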

jorritfolmer commented 6 years ago

Sure, share it privately; my email is in my profile.

jorritfolmer commented 5 years ago

I haven't been able to reproduce the crash by replaying your pcap. Replaying templates plus data sets 256 and 512 from one exporter and one observation domain also decodes fine, and all dates come out as 2018-08, which looks correct too.

Looking at your pcap, it seems you point multiple exporters at your Logstash instance. That can work, but the codec cannot differentiate templates based on the source IP of the exporter, because it is a codec, not an input plugin.

This means that most likely a template sent by exporter A is used to decode a data packet from exporter B. That only works if exporters A and B run the exact same firmware with the exact same exporter settings.
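The collision can be pictured with a toy template cache (these hashes are illustrative, not the codec's actual data structures): caching by template ID alone lets exporter B's template silently overwrite exporter A's, whereas keying on exporter address and observation domain, as an input plugin could, keeps them apart.

```ruby
# Toy model of the collision: both exporters legitimately use template ID 256,
# but with different field layouts.
cache = {}
cache[256] = [:ipv4_src_addr, :flow_start_milliseconds]   # exporter A's template 256
cache[256] = [:flow_start_milliseconds, :ipv4_src_addr]   # exporter B's overwrites it

# A data record from exporter A is now decoded with B's layout: fields get read
# at the wrong offsets and sizes, and stray bytes end up interpreted as a
# millisecond timestamp.

# Keying on [exporter_ip, observation_domain, template_id] keeps both templates:
safe_cache = {}
safe_cache[["10.0.0.1", 0, 256]] = [:ipv4_src_addr, :flow_start_milliseconds]
safe_cache[["10.0.0.2", 0, 256]] = [:flow_start_milliseconds, :ipv4_src_addr]
```
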

We've seen this lead to similar issues with Cisco devices, so most likely the same applies to Juniper gear. See for example #9 and #21.