theangryangel / logstash-output-jdbc

JDBC output for Logstash
MIT License

Issue using JSON created field in statement. #151

Open john-michaelburke opened 5 years ago

john-michaelburke commented 5 years ago

Having this issue on 7.1.1. I have looked at the definition of the event_as_json_keyword option, and it looks like it attempts to do what I have already done with the json filter in Logstash.

My issue: I decode JSON with Logstash into a bunch of fields and then want to send them off to a PostgreSQL database. It seems as though the JDBC output is not finding these newly generated fields. I have verified the JSON is valid and is properly ingested into Elasticsearch. As you can see, I am also able to use these generated fields to create a new index_name field, which the jdbc output accepts: mutate { add_field => [ "index_name", "%{[MTE][test_station]}" ] }

JDBC Output error message:

JDBC - Exception. Not retrying {:exception=>#<RuntimeError: Invalid FieldReference: `%{[MTE][serial_number]}`>, :statement=>"IN

My logstash config:

input {
    beats {
        client_inactivity_timeout => 12000000
        port => 5044
    }
}
filter {
    mutate {
        gsub => [
            "message", "\n", "",
            "message", " ", ""
        ]
    }

    json {
        source => "message"
    }
    mutate {
        copy => {"[MTE][timestamp]" => "timestamp"}
    }
    mutate {
        gsub => [
            "timestamp", "T", ""
        ]
    }
    date {
        match => ["timestamp", "YYYYMMddHHmmssZ"]
        timezone => "UTC"
        target => "@timestamp"
    }
    mutate {
        remove_field => "timestamp"
    }
    mutate {
        add_field => [ "index_name", "%{[MTE][test_station]}" ]
    }
    mutate {
        lowercase => "index_name"
    }
}   

output {
    elasticsearch {
        hosts => "elasticsearch:9200"
        manage_template => true
        index => "%{[index_name]}"
        template => "/usr/share/logstash/logstash-template.json"
        template_name => "mte_dynamic_template"
        template_overwrite => true
    }
    jdbc {
        enable_event_as_json_keyword => true
        connection_string => 'jdbc:postgresql://postgres:5432/postgres'
        username => 'asdf'
        password => 'asdf'
        #driver_jar_path => '/usr/share/logstash/postgres.jar'
        statement => [ "INSERT INTO mfg_db (timestamp, test_name, serial_number, test_pass) VALUES(CAST (? as timestamp), ?, ?, ?)",
                       "%{@timestamp}", "%{index_name}", "%{[MTE][serial_number]}", "%{[MTE][test_pass]}" ]
    }
}
theangryangel commented 5 years ago

enable_event_as_json_keyword doesn’t do what you think it does. You don’t seem to be using it for its intended purpose, so start off by removing that setting.

Second, double-check that the fields actually exist in your event. Use the stdout output to confirm you have the serial_number field.
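
Something along these lines (off the top of my head, untested; it’s just the stock stdout plugin with the rubydebug codec) will dump the whole event so you can see whether [MTE][serial_number] is actually there:

output {
    # Temporary debugging output: prints every event in full, including
    # nested fields such as [MTE][serial_number].
    stdout {
        codec => rubydebug
    }
}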

After that I’d also double-check the nested syntax. When you use the %{} notation, the string is passed directly to the sprintf function that Logstash provides on the event. I’m not able to double-check from where I am tonight that it’s definitely correct.
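
If you want to check the reference itself, a throwaway ruby filter along these lines (a sketch, untested from here; event.sprintf and event.get are the standard event API calls) would show what each lookup resolves to:

filter {
    ruby {
        # Temporary check: record what sprintf interpolation and a direct
        # field-reference lookup each return, so both values show up in
        # the stdout/rubydebug output.
        code => '
            event.set("debug_sprintf", event.sprintf("%{[MTE][serial_number]}"))
            event.set("debug_get", event.get("[MTE][serial_number]"))
        '
    }
}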


theangryangel commented 5 years ago

If the above doesn’t help, shout and I’ll try to reproduce the issue when I have a moment.

Please do make sure that the serial_number field is definitely in your event first, though (you have no idea how many times I’ve gone digging into something for someone only for the field not to be there). I’m not seeing it set in the rest of your config, which means it must be coming from your original event.

john-michaelburke commented 5 years ago

Thank you for reaching out!! In stdout it does appear as a field, albeit nested. I should say that as long as I copy the nested fields into new top-level fields, everything works fine. I’ve added back one of the nested fields alongside the new serial_number field it is copied into, and you can see where it fails: the new field works, but the statement fails when I insert the nested field directly:

JDBC - Exception. Not retrying {:exception=>#<RuntimeError: Invalid FieldReference: %{[MTE][serial_number]}>, :statement=>"INSERT INTO mfg_db (timestamp, test_name, serial_number, test_pass, error_code) VALUES(CAST (? as timestamp), ?, ?, ?, ?)",

input {
    beats {
        client_inactivity_timeout => 12000000
        port => 5044
    }
}
filter {
    mutate {
        gsub => [
            "message", "\n", "",
            "message", " ", ""
        ]
    }
    json {
        source => "message"
    }
    mutate {
        copy => {"[MTE][timestamp]" => "timestamp"}
    }
    mutate {
        gsub => [
            "timestamp", "T", ""
        ]
    }
    date {
        match => ["timestamp", "YYYYMMddHHmmssZ"]
        timezone => "UTC"
        target => "@timestamp"
    }
    mutate {
        remove_field => "timestamp"
    }
    mutate {
        add_field => [ "index_name", "%{[MTE][test_station]}" ]
    }
    mutate {
        lowercase => "index_name"
    }
    mutate {
        add_field => [ "serial_number", "%{[MTE][serial_number]}" ]
    }
    mutate {
        add_field => [ "test_pass", "%{[MTE][test_pass]}" ]
    }
    mutate {
        add_field => [ "error_code", "%{[MTE][error_code]}" ]
    }
}   
output {
    elasticsearch {
        hosts => "elasticsearch:9200"
        manage_template => true
        index => "%{[index_name]}"
        template => "/usr/share/logstash/logstash-template.json"
        template_name => "mte_dynamic_template"
        template_overwrite => true
    }
    jdbc {
        connection_string => 'jdbc:postgresql://postgres:5432/postgres'
        username => 'asdf'
        password => 'asdf'
        statement => [ "INSERT INTO mfg_db (timestamp, test_name, serial_number, test_pass, error_code) VALUES(CAST (? as timestamp), ?, ?, ?, ?)",
                       "%{@timestamp}", "%{index_name}", "%{serial_number}", "%{[MTE][serial_number]}", "%{error_code}" ]
    }
}

I had the thought that maybe it was an issue with Logstash itself, so I tried putting the nested field into the elasticsearch output to see if I’d get a similar error. I did get an error, but only because the resolved value is not lowercase:

logstash | [2019-05-31T20:58:37,154][ERROR][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"1020038501S1918000172", :_type=>"_doc", :routing=>nil}, #<LogStash::Event:0x12b9ae1a>], :response=>{"index"=>{"_index"=>"1020038501S1918000172", "_type"=>"_doc", "_id"=>nil, "status"=>400, "error"=>{"type"=>"invalid_index_name_exception", "reason"=>"Invalid index name [1020038501S1918000172], must be lowercase", "index_uuid"=>"_na_", "index"=>"1020038501S1918000172"}}}}
logstash | /usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-output-jdbc-5.4.0/lib/logstash/outputs/jdbc.rb:283: warning: constant ::Fixnum is deprecated

Here are the edits I made to my logstash conf:
input {
    beats {
        client_inactivity_timeout => 12000000
        port => 5044
    }
}
filter {
    mutate {
        gsub => [
            "message", "\n", "",
            "message", " ", ""
        ]
    }
    json {
        source => "message"
    }
    mutate {
        copy => {"[MTE][timestamp]" => "timestamp"}
    }
    mutate {
        gsub => [
            "timestamp", "T", ""
        ]
    }
    date {
        match => ["timestamp", "YYYYMMddHHmmssZ"]
        timezone => "UTC"
        target => "@timestamp"
    }
    mutate {
        remove_field => "timestamp"
    }
    mutate {
        add_field => [ "index_name", "%{[MTE][test_station]}" ]
    }
    mutate {
        lowercase => "index_name"
    }
    mutate {
        add_field => [ "serial_number", "%{[MTE][serial_number]}" ]
    }
    mutate {
        add_field => [ "test_pass", "%{[MTE][test_pass]}" ]
    }
    mutate {
        add_field => [ "error_code", "%{[MTE][error_code]}" ]
    }
}   
output {
    elasticsearch {
        hosts => "elasticsearch:9200"
        manage_template => true
        index => "%{[MTE][serial_number]}"
        template => "/usr/share/logstash/logstash-template.json"
        template_name => "mte_dynamic_template"
        template_overwrite => true
    }
    jdbc {
        connection_string => 'jdbc:postgresql://postgres:5432/postgres'
        username => 'asdf'
        password => 'asdf'
        statement => [ "INSERT INTO mfg_db (timestamp, test_name, serial_number, test_pass, error_code) VALUES(CAST (? as timestamp), ?, ?, ?, ?)",
                       "%{@timestamp}", "%{index_name}", "%{serial_number}", "%{test_pass}", "%{error_code}" ]
    }
}

For now I can live with this by manually adding fields, but I feel like there might be something here. Let me know if you’d like me to try anything else.
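
In case it helps, the workaround can also be collapsed into a single mutate copy block instead of one add_field per field (just a sketch using the same MTE keys from my config above; copy keeps the original value rather than converting it to a string):

filter {
    # Flatten the nested MTE fields into top-level fields in one step so
    # they can be referenced directly in the jdbc statement parameters.
    mutate {
        copy => {
            "[MTE][serial_number]" => "serial_number"
            "[MTE][test_pass]"     => "test_pass"
            "[MTE][error_code]"    => "error_code"
        }
    }
}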

theangryangel commented 5 years ago

Can you add a redacted copy of an input event so I can have a play at some point this weekend, if I get a moment? Just to make sure I’ll definitely be using the same structure.

john-michaelburke commented 5 years ago

https://drive.google.com/file/d/14qZf96R838mLocHMkqxoBQEwChyBx1pU/view?usp=sharing