Thanks for reporting! I've reviewed the snmptrap input and discovered that it doesn't actually use the codec system, so the codec and charset settings are not working for this plugin. We should fix that, and once we do, it should be pretty easy to resolve this bug.
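For reference, a minimal sketch of what honoring the charset setting would involve (plain Ruby; the names are illustrative and this is not the plugin's actual code):

    # Sketch only: how a charset-aware input could normalize incoming bytes
    # before building the event. Names are illustrative, not the plugin's.
    def apply_charset(data, charset)
      data = data.dup.force_encoding(charset)   # trust the configured charset first
      return data if data.valid_encoding?
      # Re-tag as binary and transcode, replacing bytes UTF-8 cannot represent,
      # so a later to_json call cannot raise Encoding::UndefinedConversionError.
      data.force_encoding("BINARY").encode("UTF-8", invalid: :replace, undef: :replace)
    end

    apply_charset("\xF8", "UTF-8")   # => "\uFFFD" rather than an exception at to_json time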
Makes sense. Ideally it should default to BINARY or something sane, considering incoming SNMP traps can and often do contain binary data (like MAC addresses, as in my case).
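For example, a trap carries a MAC address as six raw bytes, and making it printable is up to the consumer. A quick plain-Ruby illustration, using the address bytes from my report:

    mac = "\xFC\xF8\xAE\x3C\x2E\x18"                          # six raw bytes off the wire
    mac.unpack("C*").map { |b| format("%02X", b) }.join(":")
    # => "FC:F8:AE:3C:2E:18" -- note the 0xF8 byte, which is not valid UTF-8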
We also have a similar problem but we output to Elasticsearch and to a file.
{:timestamp=>"2015-02-13T14:11:04.430000+0100", :message=>"Opening file", :path=>"/tmp/logstash.log", :level=>:info}
Encoding::UndefinedConversionError: "\xF8" from ASCII-8BIT to UTF-8
encode at org/jruby/RubyString.java:7575
to_json at json/ext/GeneratorMethods.java:71
to_json at /local/logstash/lib/logstash/event.rb:148
receive at /local/logstash/lib/logstash/outputs/file.rb:71
handle at /local/logstash/lib/logstash/outputs/base.rb:86
initialize at (eval):124499
call at org/jruby/RubyProc.java:271
output at /local/logstash/lib/logstash/pipeline.rb:273
outputworker at /local/logstash/lib/logstash/pipeline.rb:228
start_outputs at /local/logstash/lib/logstash/pipeline.rb:156
{:timestamp=>"2015-02-13T14:11:04.493000+0100", :message=>"Plugin is finished", :plugin=><LogStash::Inputs::Snmptrap community=>"public", charset=>"ASCII-8BIT", type=>"snmp_trap", host=>"0.0.0.0">, :level=>:info}
Is this the same issue, @jordansissel?
Can you attach your logstash config?
I can reproduce it with as little as:
input {
  snmptrap {
    community => "public"
    port => 162
    type => "snmp_trap"
  }
}

output {
  if [type] == "snmp_trap" {
    file {
      codec => "rubydebug"
      flush_interval => 1
      path => "/tmp/logstash.log"
    }
  }
}
If I use the stdout output instead, this is what the event looks like:
{
    "message" => "#<SNMP::SNMPv1_Trap:0x4633cfc7 @enterprise=[1.3.6.1.4.1.9.9.215.2], @timestamp=#<SNMP::TimeTicks:0x3315965 @value=210994513>, @varbind_list=[#<SNMP::VarBind:0x32f89c0c @name=[1.3.6.1.4.1.9.9.215.1.1.8.1.2.1],
NMP::VarBind:0x6aed2063 @name=[1.3.6.1.4.1.9.9.215.1.1.8.1.3.1], @value=#<SNMP::Integer:0x631ea62a @value=210994513>>], @specific_trap=1, @source_ip=\"192.168.122.155\", @agent_addr=#<SNMP::IpAddress:0x4ea421a @value=\"\\xC0\\xA8z\\x9B\">, @generic_trap=6>",
    "host" => "192.168.122.155",
    "@version" => "1",
    "@timestamp" => "2015-02-16T14:32:54.945Z",
    "type" => "snmp_trap",
    "SNMPv2-SMI::enterprises.9.9.215.1.1.8.1.2.1" => "\x02\x00_dQ\x06\r\xF8X\x00+\x00",
    "SNMPv2-SMI::enterprises.9.9.215.1.1.8.1.3.1" => "210994513",
    "tags" => [
        [0] "_grokparsefailure"
    ]
}
And this is the error from elasticsearch_http when it tries to write it to ES:
{
    :timestamp=>"2015-02-16T15:32:55.815000+0100",
    :message=>"Failed to flush outgoing items",
    :outgoing_count=>2624,
    :exception=>#<Encoding::UndefinedConversionError: "\xF8" from ASCII-8BIT to UTF-8>,
    :backtrace=>["org/jruby/RubyString.java:7575:in `encode'",
        "json/ext/GeneratorMethods.java:71:in `to_json'",
        "/local/logstash/lib/logstash/event.rb:148:in `to_json'",
        "/local/logstash/lib/logstash/outputs/elasticsearch_http.rb:207:in `flush'",
        "org/jruby/RubyArray.java:2404:in `collect'",
        "/local/logstash/lib/logstash/outputs/elasticsearch_http.rb:195:in `flush'",
        "/local/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:219:in `buffer_flush'",
        "org/jruby/RubyHash.java:1339:in `each'",
        "/local/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:216:in `buffer_flush'",
        "/local/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:193:in `buffer_flush'",
        "/local/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:112:in `buffer_initialize'",
        "org/jruby/RubyKernel.java:1521:in `loop'",
        "/local/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:110:in `buffer_initialize'"],
    :level=>:warn
}
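The failure is easy to demonstrate outside Logstash; the varbind value from the event above is enough (a plain Ruby/JRuby sketch):

    require "json"

    value = "\x02\x00_dQ\x06\r\xF8X\x00+\x00"   # the raw varbind bytes, tagged ASCII-8BIT
    value.to_json
    # JRuby's json extension raises Encoding::UndefinedConversionError
    # ("\xF8" from ASCII-8BIT to UTF-8); MRI's C extension raises JSON::GeneratorError
    # for the same input.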
I know it's hard to reproduce, but I could produce a pcap for you to replay with tcpreplay, if that would help?
For Logstash 1.5.0, we've moved all plugins to individual repositories, so I have moved this issue to https://github.com/logstash-plugins/logstash-input-snmptrap/issues/8. Let's continue the discussion there! :)
I have a simple Logstash configuration that reads SNMP traps and outputs them to redis as JSON. Some messages fail to make it to redis, with the Logstash log reporting "Failed to convert event to JSON. Invalid UTF-8, maybe?". Looking at the code, this error appears to originate from within the redis output.
I have tried without specifying a codec, as well as explicitly setting the charset to BINARY. The SNMP traps do contain some non-ASCII characters (binary representations of MAC addresses and IP addresses), but they appear to be properly escaped with \xHH-style notation in the output log. The only difference I can spot between messages that make it to redis and ones that fail is the MAC address field. An example of a failing message has this field in the message part (note it appears to be doubly escaped, as this is from the error log, which itself appears to be JSON encoded as well):
And the same value again as parsed by the MIBs:
Other MAC addresses that start with FC:F8:AE work, so I can only assume it is the latter half (3C:2E:18) that is breaking the encoding.
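In the meantime, one possible workaround (a sketch only, assuming the Logstash 1.x event API and the stock ruby filter) is to hex-encode any string field that is not valid UTF-8 before it reaches a JSON-encoding output such as redis or elasticsearch:

filter {
  ruby {
    code => "
      event.to_hash.each do |k, v|
        next unless v.is_a?(String)
        # Hex-encode values the JSON serializer would choke on.
        event[k] = v.unpack('H*').first unless v.dup.force_encoding('UTF-8').valid_encoding?
      end
    "
  }
}

This trades readability for safety: a binary MAC field would come through as something like "fcf8ae3c2e18" instead of crashing the output worker.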