logstash-plugins / logstash-filter-jdbc_static

Loads a DB query's result set into memory and uses it as a lookup table for events.
Apache License 2.0

remote data not refreshed #24

Closed vvoody closed 6 years ago

vvoody commented 6 years ago

Hi, here is my config; I'm using Logstash 6.2.4 with jdbc_static v1.0.1.

        loaders => [
            {
                id => "lb-metadata"
                query => "select lb_ip, lb_fqdn, lb_region, lb_tier from lb_metadata order by lb_ip"
                local_table => "lbmetadata_fetched"
                loader_schedule => "*/5 * * * *"
            }
        ]

        local_db_objects => [
            {
                name => "lbmetadata_fetched"
                index_columns => ["lb_ip", "lb_fqdn", "lb_region", "lb_tier"]
                columns => [
                    ["lb_ip", "char(16)"],
                    ["lb_fqdn", "varchar(50)"],
                    ["lb_region", "char(12)"],
                    ["lb_tier", "char(16)"]
                ]
            }
        ]

        local_lookups => [
            {
                id => "local-lbmetadata"
                query => "select lb_ip, lb_fqdn, lb_region, lb_tier from lbmetadata_fetched WHERE lb_ip = :lb_ip"
                parameters => {lb_ip => "[host]"}
                target => "lbmeta"
            }
        ]

        # using add_field here to add & rename values to the event root
        add_field => { lb_ip => "%{[lbmeta][0][lb_ip]}" }
        add_field => { lb_fqdn => "%{[lbmeta][0][lb_fqdn]}" }
        add_field => { lb_region => "%{[lbmeta][0][lb_region]}" }
        add_field => { lb_tier => "%{[lbmeta][0][lb_tier]}" }

And we have a table with the following 3 rows:

| id | lb_ip      | lb_fqdn               | lb_region | lb_tier |
|----|------------|-----------------------|-----------|---------|
| 1  | 172.17.0.1 | shalb001.corp.xxx.com | SHA       | dev     |
| 2  | 172.17.0.2 | dkrlb002.qa.xxx.com   | DKR       | web     |
| 3  | 10.1.2.3   | xyzlb003.xyz.xxx.com  | XYZ       | test    |

Then I modified row 3 in the database, changing its lb_tier from test to prod. However, the lb_tier field of new incoming events was not updated to prod even after 5 minutes.

Am I doing something wrong? How does jdbc_static refresh the loader's data? Thanks.

vvoody commented 6 years ago

Oh, does v1.0.2 fix this issue?

vvoody commented 6 years ago

I can see my data loaded at bootstrap; however, after 5 minutes or longer there is no further output showing jdbc_static reloading my data from the remote database.

[2018-05-11T08:26:29,275][INFO ][logstash.filters.jdbc.readwritedatabase] loader lb-metadata, fetched 3 records in: 0.426 seconds
[2018-05-11T08:26:29,313][INFO ][logstash.filters.jdbc.readwritedatabase] loader lb-metadata, saved fetched records to import file in: 0.034 seconds
[2018-05-11T08:26:29,699][INFO ][logstash.filters.jdbc.readwritedatabase] loader lb-metadata, imported all fetched records in: 0.382 seconds
guyboertje commented 6 years ago

@vvoody Yes, 1.0.2 does fix the scheduling. Also, loader_schedule => "*/5 * * * *" is a plugin-wide setting; it should be at the same indentation level as loaders and lookups.
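
For reference, a minimal sketch of that placement, reusing the loader from the config above (local_db_objects, local_lookups, add_field, and the jdbc_* connection settings are elided):

filter {
    jdbc_static {
        loaders => [
            {
                id => "lb-metadata"
                query => "select lb_ip, lb_fqdn, lb_region, lb_tier from lb_metadata order by lb_ip"
                local_table => "lbmetadata_fetched"
            }
        ]
        # plugin-wide setting, at the same level as loaders and local_lookups:
        # refresh all loaders every 5 minutes (cron syntax)
        loader_schedule => "*/5 * * * *"
        # local_db_objects, local_lookups, add_field and jdbc_* settings as in the config above
    }
}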

Please upgrade to 1.0.2 and report back - it would be great to get feedback from the field that scheduling works as expected.

vvoody commented 6 years ago

Thanks @guyboertje. It works now 👍

Since the latest Logstash (6.2.4) I'm using ships with logstash-filter-jdbc_static 1.0.1, I upgraded to 1.0.2 via docker exec into the Logstash container.

0. prepare config files.

Config files under ./config/ were copied from the defaults; only logstash.yml was changed to point to my Elasticsearch server IP.

Note: loader_schedule is still under each loader here, not at the same indentation level as loaders.

$
$ ls ./config/
jvm.options
log4j2.properties
logstash-filter-jdbc_static-1.0.2.gem
logstash.yml
mysql-connector-java-5.1.46-bin.jar
mysql-connector-java-5.1.46.zip
pipelines.yml
startup.options
$
$
$ ls ./pipeline/
logstash.conf
$
$
$ cat ./pipeline/logstash.conf
input {
    syslog {
        port => 5044
    }
}

filter {
    jdbc_static {
        loaders => [
            {
                id => "lb-metadata"
                query => "select lb_ip, lb_fqdn, lb_region, lb_tier from lb_metadata order by lb_ip"
                local_table => "lbmetadata_fetched"
                loader_schedule => "*/5 * * * *"
            }
        ]
        local_db_objects => [
            {
                name => "lbmetadata_fetched"
                index_columns => ["lb_ip", "lb_fqdn", "lb_region", "lb_tier"]
                columns => [
                    ["lb_ip", "char(16)"],
                    ["lb_fqdn", "varchar(50)"],
                    ["lb_region", "char(12)"],
                    ["lb_tier", "char(16)"]
                ]
            }
        ]
        local_lookups => [
            {
                id => "local-lbmetadata"
                query => "select lb_ip, lb_fqdn, lb_region, lb_tier from lbmetadata_fetched WHERE lb_ip = :lb_ip"
                parameters => {lb_ip => "[host]"}
                target => "lbmeta"
            }
        ]
        add_field => { lb_ip => "%{[lbmeta][0][lb_ip]}" }
        add_field => { lb_fqdn => "%{[lbmeta][0][lb_fqdn]}" }
        add_field => { lb_region => "%{[lbmeta][0][lb_region]}" }
        add_field => { lb_tier => "%{[lbmeta][0][lb_tier]}" }
        remove_field => ["lbmeta"]
        jdbc_user => "xxx"
        jdbc_password => "xxx"
        jdbc_driver_class => "com.mysql.jdbc.Driver"
        jdbc_driver_library => "/usr/share/logstash/config/mysql-connector-java-5.1.46-bin.jar"
        jdbc_connection_string => "jdbc:mysql://10.2.2.2:3306/xxx"
    }
}

output {
    elasticsearch {
        hosts => [ "10.1.2.3:9200" ]
    }
}
$

1. start Logstash 6.2.4

$ docker run --name logstash -v ${PWD}/config:/usr/share/logstash/config/ -v ${PWD}/pipeline:/usr/share/logstash/pipeline/ -p 5044:5044 -p 5044:5044/udp -p 9600:9600  -d docker.elastic.co/logstash/logstash:6.2.4
$
$ docker logs -f logstash

Sending Logstash's logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2018-05-15T06:02:56,838][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
[2018-05-15T06:02:56,857][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
[2018-05-15T06:02:58,022][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"arcsight", :directory=>"/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/x-pack-6.2.4-java/modules/arcsight/configuration"}
[2018-05-15T06:02:58,190][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[2018-05-15T06:02:58,195][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[2018-05-15T06:02:58,792][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"9af6973d-2b63-4ddc-add3-6bab18fb4173", :path=>"/usr/share/logstash/data/uuid"}
[2018-05-15T06:02:59,953][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.2.4"}
[2018-05-15T06:03:00,335][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2018-05-15T06:03:02,507][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch hosts=>[http://10.1.2.3:9200], bulk_path=>"/_xpack/monitoring/_bulk?system_id=logstash&system_api_version=2&interval=1s", manage_template=>false, document_type=>"%{[@metadata][document_type]}", sniffing=>false, id=>"ce13584adf4ad12eec2d82602f93961d5312edddb1dbfc6d19fd577732c6361e", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_21d010d5-7a4c-4c91-a9df-b15581c74bff", enable_metric=>true, charset=>"UTF-8">, workers=>1, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, action=>"index", ssl_certificate_verification=>true, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[2018-05-15T06:03:02,620][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50}
[2018-05-15T06:03:03,260][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://10.1.2.3:9200/]}}
[2018-05-15T06:03:03,272][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://10.1.2.3:9200/, :path=>"/"}
[2018-05-15T06:03:03,542][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://10.1.2.3:9200/"}
[2018-05-15T06:03:03,655][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-05-15T06:03:03,661][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-05-15T06:03:03,695][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://10.1.2.3:9200"]}
[2018-05-15T06:03:03,868][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://10.1.2.3:9200/]}}
[2018-05-15T06:03:03,869][INFO ][logstash.licensechecker.licensereader] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://10.1.2.3:9200/, :path=>"/"}
[2018-05-15T06:03:03,875][WARN ][logstash.licensechecker.licensereader] Restored connection to ES instance {:url=>"http://10.1.2.3:9200/"}
[2018-05-15T06:03:03,881][INFO ][logstash.licensechecker.licensereader] ES Output version determined {:es_version=>6}
[2018-05-15T06:03:03,881][WARN ][logstash.licensechecker.licensereader] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-05-15T06:03:04,047][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>".monitoring-logstash", :thread=>"#<Thread:0xeefbe9d run>"}
[2018-05-15T06:03:06,418][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>24, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-05-15T06:03:06,439][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://10.1.2.3:9200/]}}
[2018-05-15T06:03:06,440][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://10.1.2.3:9200/, :path=>"/"}
[2018-05-15T06:03:06,444][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://10.1.2.3:9200/"}
[2018-05-15T06:03:06,448][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-05-15T06:03:06,449][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-05-15T06:03:06,452][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-05-15T06:03:06,476][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-05-15T06:03:06,495][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//10.1.2.3:9200"]}
[2018-05-15T06:03:10,257][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x75662cf8 sleep>"}
[2018-05-15T06:03:10,275][INFO ][logstash.inputs.syslog   ] Starting syslog tcp listener {:address=>"0.0.0.0:5044"}
[2018-05-15T06:03:10,285][INFO ][logstash.inputs.syslog   ] Starting syslog udp listener {:address=>"0.0.0.0:5044"}
[2018-05-15T06:03:10,335][INFO ][logstash.agent           ] Pipelines running {:count=>2, :pipelines=>[".monitoring-logstash", "main"]}
[2018-05-15T06:03:10,355][INFO ][logstash.inputs.metrics  ] Monitoring License OK

NOTE: the database data was not loaded (no readwritedatabase log lines appear above).

2. upgrade logstash-filter-jdbc_static to 1.0.2

$ docker exec -it logstash bash
bash-4.2$ logstash -V
logstash 6.2.4
bash-4.2$ logstash-plugin list --verbose logstash-filter-jdbc_static
logstash-filter-jdbc_static (1.0.1)
bash-4.2$
bash-4.2$ logstash-plugin install config/logstash-filter-jdbc_static-1.0.2.gem
Validating config/logstash-filter-jdbc_static-1.0.2.gem
Installing logstash-filter-jdbc_static
Installation successful
bash-4.2$
bash-4.2$ logstash-plugin list --verbose logstash-filter-jdbc_static
logstash-filter-jdbc_static (1.0.2)

3. restart Logstash for the upgrade to take effect

$ docker restart logstash
$ docker logs -f logstash
[2018-05-15T06:25:38,756][WARN ][logstash.runner          ] SIGTERM received. Shutting down.
[2018-05-15T06:25:40,142][INFO ][logstash.pipeline        ] Pipeline has terminated {:pipeline_id=>".monitoring-logstash", :thread=>"#<Thread:0xeefbe9d run>"}
[2018-05-15T06:25:40,746][INFO ][logstash.pipeline        ] Pipeline has terminated {:pipeline_id=>"main", :thread=>"#<Thread:0x75662cf8 run>"}
Sending Logstash's logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2018-05-15T06:26:04,569][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
[2018-05-15T06:26:04,592][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
[2018-05-15T06:26:05,648][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"arcsight", :directory=>"/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/x-pack-6.2.4-java/modules/arcsight/configuration"}
[2018-05-15T06:26:07,624][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.2.4"}
[2018-05-15T06:26:08,037][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2018-05-15T06:26:10,108][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch hosts=>[http://10.1.2.3:9200], bulk_path=>"/_xpack/monitoring/_bulk?system_id=logstash&system_api_version=2&interval=1s", manage_template=>false, document_type=>"%{[@metadata][document_type]}", sniffing=>false, id=>"ce13584adf4ad12eec2d82602f93961d5312edddb1dbfc6d19fd577732c6361e", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_a6c18c76-6525-4dc4-abb0-8592921754fa", enable_metric=>true, charset=>"UTF-8">, workers=>1, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, action=>"index", ssl_certificate_verification=>true, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[2018-05-15T06:26:10,209][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50}
[2018-05-15T06:26:10,803][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://10.1.2.3:9200/]}}
[2018-05-15T06:26:10,814][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://10.1.2.3:9200/, :path=>"/"}
[2018-05-15T06:26:11,046][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://10.1.2.3:9200/"}
[2018-05-15T06:26:11,135][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-05-15T06:26:11,139][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-05-15T06:26:11,171][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://10.1.2.3:9200"]}
[2018-05-15T06:26:11,320][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://10.1.2.3:9200/]}}
[2018-05-15T06:26:11,321][INFO ][logstash.licensechecker.licensereader] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://10.1.2.3:9200/, :path=>"/"}
[2018-05-15T06:26:11,326][WARN ][logstash.licensechecker.licensereader] Restored connection to ES instance {:url=>"http://10.1.2.3:9200/"}
[2018-05-15T06:26:11,332][INFO ][logstash.licensechecker.licensereader] ES Output version determined {:es_version=>6}
[2018-05-15T06:26:11,332][WARN ][logstash.licensechecker.licensereader] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-05-15T06:26:11,497][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>".monitoring-logstash", :thread=>"#<Thread:0x31da987e run>"}
[2018-05-15T06:26:13,775][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>24, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-05-15T06:26:13,808][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://10.1.2.3:9200/]}}
[2018-05-15T06:26:13,809][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://10.1.2.3:9200/, :path=>"/"}
[2018-05-15T06:26:13,815][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://10.1.2.3:9200/"}
[2018-05-15T06:26:13,822][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-05-15T06:26:13,822][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-05-15T06:26:13,827][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-05-15T06:26:13,855][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-05-15T06:26:13,874][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//10.1.2.3:9200"]}
[2018-05-15T06:26:13,889][INFO ][logstash.filters.jdbcstatic] derby.system.home is: /usr/share/logstash
[2018-05-15T06:26:16,625][INFO ][logstash.filters.jdbc.readwritedatabase] loader lb-metadata, fetched 4 records in: 0.356 seconds
[2018-05-15T06:26:16,664][INFO ][logstash.filters.jdbc.readwritedatabase] loader lb-metadata, saved fetched records to import file in: 0.035 seconds
[2018-05-15T06:26:17,028][INFO ][logstash.filters.jdbc.readwritedatabase] loader lb-metadata, imported all fetched records in: 0.361 seconds
[2018-05-15T06:26:17,370][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x26b3744b sleep>"}
[2018-05-15T06:26:17,383][INFO ][logstash.inputs.syslog   ] Starting syslog tcp listener {:address=>"0.0.0.0:5044"}
[2018-05-15T06:26:17,391][INFO ][logstash.inputs.syslog   ] Starting syslog udp listener {:address=>"0.0.0.0:5044"}
[2018-05-15T06:26:17,444][INFO ][logstash.agent           ] Pipelines running {:count=>2, :pipelines=>[".monitoring-logstash", "main"]}
[2018-05-15T06:26:17,460][INFO ][logstash.inputs.metrics  ] Monitoring License OK

Note: check the 8th through 6th lines from the end (the readwritedatabase entries); the data was loaded.

4. move loader_schedule to the same indentation level as loaders and lookups & restart Logstash

$ grep -C 5 loader_schedule pipeline/logstash.conf
        loaders => [
            {
                id => "lb-metadata"
                query => "select lb_ip, lb_fqdn, lb_region, lb_tier from lb_metadata order by lb_ip"
                local_table => "lbmetadata_fetched"
                #loader_schedule => "*/5 * * * *"
            }
        ]

        local_db_objects => [
            {
--
        add_field => { lb_ip => "%{[lbmeta][0][lb_ip]}" }
        add_field => { lb_fqdn => "%{[lbmeta][0][lb_fqdn]}" }
        add_field => { lb_region => "%{[lbmeta][0][lb_region]}" }
        add_field => { lb_tier => "%{[lbmeta][0][lb_tier]}" }
        remove_field => ["lbmeta"]
        loader_schedule => "*/5 * * * *"
        jdbc_user => "xxxx"
        jdbc_password => "xxxx"
        jdbc_driver_class => "com.mysql.jdbc.Driver"
        jdbc_driver_library => "/usr/share/logstash/config/mysql-connector-java-5.1.46-bin.jar"
        jdbc_connection_string => "jdbc:mysql://10.2.2.2:3306/xxxx"
$
$ docker restart logstash
$ docker logs -f logstash
[2018-05-15T06:34:58,423][INFO ][logstash.pipeline        ] Pipeline has terminated {:pipeline_id=>".monitoring-logstash", :thread=>"#<Thread:0x40ed9b7f run>"}

Sending Logstash's logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2018-05-15T06:37:09,160][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"/usr/share/logstash/modules/fb_apache/configuration"}
[2018-05-15T06:37:09,181][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"/usr/share/logstash/modules/netflow/configuration"}
[2018-05-15T06:37:10,316][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"arcsight", :directory=>"/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/x-pack-6.2.4-java/modules/arcsight/configuration"}
[2018-05-15T06:37:12,192][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.2.4"}
[2018-05-15T06:37:12,586][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2018-05-15T06:37:14,684][WARN ][logstash.outputs.elasticsearch] You are using a deprecated config setting "document_type" set in elasticsearch. Deprecated settings will continue to work, but are scheduled for removal from logstash in the future. Document types are being deprecated in Elasticsearch 6.0, and removed entirely in 7.0. You should avoid this feature If you have any questions about this, please visit the #logstash channel on freenode irc. {:name=>"document_type", :plugin=><LogStash::Outputs::ElasticSearch hosts=>[http://10.1.2.3:9200], bulk_path=>"/_xpack/monitoring/_bulk?system_id=logstash&system_api_version=2&interval=1s", manage_template=>false, document_type=>"%{[@metadata][document_type]}", sniffing=>false, id=>"ce13584adf4ad12eec2d82602f93961d5312edddb1dbfc6d19fd577732c6361e", enable_metric=>true, codec=><LogStash::Codecs::Plain id=>"plain_979ddc75-437c-441d-b640-799673b1ff41", enable_metric=>true, charset=>"UTF-8">, workers=>1, template_name=>"logstash", template_overwrite=>false, doc_as_upsert=>false, script_type=>"inline", script_lang=>"painless", script_var_name=>"event", scripted_upsert=>false, retry_initial_interval=>2, retry_max_interval=>64, retry_on_conflict=>1, action=>"index", ssl_certificate_verification=>true, sniffing_delay=>5, timeout=>60, pool_max=>1000, pool_max_per_route=>100, resurrect_delay=>5, validate_after_inactivity=>10000, http_compression=>false>}
[2018-05-15T06:37:14,784][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>".monitoring-logstash", "pipeline.workers"=>1, "pipeline.batch.size"=>2, "pipeline.batch.delay"=>50}
[2018-05-15T06:37:15,390][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://10.1.2.3:9200/]}}
[2018-05-15T06:37:15,402][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://10.1.2.3:9200/, :path=>"/"}
[2018-05-15T06:37:15,693][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://10.1.2.3:9200/"}
[2018-05-15T06:37:15,764][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-05-15T06:37:15,768][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-05-15T06:37:15,802][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://10.1.2.3:9200"]}
[2018-05-15T06:37:15,972][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://10.1.2.3:9200/]}}
[2018-05-15T06:37:15,973][INFO ][logstash.licensechecker.licensereader] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://10.1.2.3:9200/, :path=>"/"}
[2018-05-15T06:37:15,979][WARN ][logstash.licensechecker.licensereader] Restored connection to ES instance {:url=>"http://10.1.2.3:9200/"}
[2018-05-15T06:37:15,985][INFO ][logstash.licensechecker.licensereader] ES Output version determined {:es_version=>6}
[2018-05-15T06:37:15,985][WARN ][logstash.licensechecker.licensereader] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-05-15T06:37:16,162][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>".monitoring-logstash", :thread=>"#<Thread:0x51df8e58 run>"}
[2018-05-15T06:37:18,744][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>24, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-05-15T06:37:18,768][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://10.1.2.3:9200/]}}
[2018-05-15T06:37:18,769][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://10.1.2.3:9200/, :path=>"/"}
[2018-05-15T06:37:18,774][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://10.1.2.3:9200/"}
[2018-05-15T06:37:18,779][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-05-15T06:37:18,779][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-05-15T06:37:18,783][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-05-15T06:37:18,807][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-05-15T06:37:18,830][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//10.1.2.3:9200"]}
[2018-05-15T06:37:18,844][INFO ][logstash.filters.jdbcstatic] derby.system.home is: /usr/share/logstash
[2018-05-15T06:37:21,842][INFO ][logstash.filters.jdbc.readwritedatabase] loader lb-metadata, fetched 4 records in: 0.479 seconds
[2018-05-15T06:37:21,892][INFO ][logstash.filters.jdbc.readwritedatabase] loader lb-metadata, saved fetched records to import file in: 0.044 seconds
[2018-05-15T06:37:22,345][INFO ][logstash.filters.jdbc.readwritedatabase] loader lb-metadata, imported all fetched records in: 0.449 seconds
[2018-05-15T06:37:22,370][INFO ][logstash.filters.jdbcstatic] Scheduler operations: Scheduled for: these minutes in the hour [0, 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 55];these seconds in the minute [0]
[2018-05-15T06:37:22,373][INFO ][logstash.filters.jdbcstatic] Scheduler scan for work frequency is: 2.5
[2018-05-15T06:37:25,214][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x4b6def64 sleep>"}
[2018-05-15T06:37:25,244][INFO ][logstash.inputs.syslog   ] Starting syslog tcp listener {:address=>"0.0.0.0:5044"}
[2018-05-15T06:37:25,248][INFO ][logstash.inputs.syslog   ] Starting syslog udp listener {:address=>"0.0.0.0:5044"}
[2018-05-15T06:37:25,304][INFO ][logstash.agent           ] Pipelines running {:count=>2, :pipelines=>[".monitoring-logstash", "main"]}
[2018-05-15T06:37:25,326][INFO ][logstash.inputs.metrics  ] Monitoring License OK
[2018-05-15T06:40:00,287][INFO ][logstash.filters.jdbc.readwritedatabase] loader lb-metadata, fetched 4 records in: 0.069 seconds
[2018-05-15T06:40:00,297][INFO ][logstash.filters.jdbc.readwritedatabase] loader lb-metadata, saved fetched records to import file in: 0.009 seconds
[2018-05-15T06:40:00,360][INFO ][logstash.filters.jdbc.readwritedatabase] loader lb-metadata, imported all fetched records in: 0.062 seconds
$

**Works!**

I changed it to reload every 2 minutes and it works like a charm.
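
The only change needed is the cron expression on the plugin-wide setting (a minimal sketch of the relevant line):

        loader_schedule => "*/2 * * * *"   # reload the local lookup tables every 2 minutes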

[2018-05-15T06:41:12,275][INFO ][logstash.filters.jdbcstatic] derby.system.home is: /usr/share/logstash
[2018-05-15T06:41:14,973][INFO ][logstash.filters.jdbc.readwritedatabase] loader lb-metadata, fetched 4 records in: 0.414 seconds
[2018-05-15T06:41:15,026][INFO ][logstash.filters.jdbc.readwritedatabase] loader lb-metadata, saved fetched records to import file in: 0.049 seconds
[2018-05-15T06:41:15,409][INFO ][logstash.filters.jdbc.readwritedatabase] loader lb-metadata, imported all fetched records in: 0.379 seconds
[2018-05-15T06:41:15,432][INFO ][logstash.filters.jdbcstatic] Scheduler operations: Scheduled for: these minutes in the hour [0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32, 34, 36, 38, 40, 42, 44, 46, 48, 50, 52, 54, 56, 58];these seconds in the minute [0]
[2018-05-15T06:41:15,434][INFO ][logstash.filters.jdbcstatic] Scheduler scan for work frequency is: 2.5
[2018-05-15T06:41:19,857][INFO ][logstash.inputs.metrics  ] Monitoring License OK
[2018-05-15T06:41:20,005][ERROR][logstash.inputs.metrics  ] Failed to create monitoring event {:message=>"undefined method `ephemeral_id' for nil:NilClass", :error=>"NoMethodError"}
[2018-05-15T06:41:20,773][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x39e6cba5 sleep>"}
[2018-05-15T06:41:20,789][INFO ][logstash.inputs.syslog   ] Starting syslog tcp listener {:address=>"0.0.0.0:5044"}
[2018-05-15T06:41:20,799][INFO ][logstash.inputs.syslog   ] Starting syslog udp listener {:address=>"0.0.0.0:5044"}
[2018-05-15T06:41:20,859][INFO ][logstash.agent           ] Pipelines running {:count=>2, :pipelines=>[".monitoring-logstash", "main"]}
[2018-05-15T06:42:00,750][INFO ][logstash.filters.jdbc.readwritedatabase] loader lb-metadata, fetched 4 records in: 0.059 seconds
[2018-05-15T06:42:00,755][INFO ][logstash.filters.jdbc.readwritedatabase] loader lb-metadata, saved fetched records to import file in: 0.004 seconds
[2018-05-15T06:42:00,806][INFO ][logstash.filters.jdbc.readwritedatabase] loader lb-metadata, imported all fetched records in: 0.05 seconds
[2018-05-15T06:44:00,773][INFO ][logstash.filters.jdbc.readwritedatabase] loader lb-metadata, fetched 4 records in: 0.063 seconds
[2018-05-15T06:44:00,778][INFO ][logstash.filters.jdbc.readwritedatabase] loader lb-metadata, saved fetched records to import file in: 0.004 seconds
[2018-05-15T06:44:00,815][INFO ][logstash.filters.jdbc.readwritedatabase] loader lb-metadata, imported all fetched records in: 0.036 seconds
[2018-05-15T06:46:00,806][INFO ][logstash.filters.jdbc.readwritedatabase] loader lb-metadata, fetched 4 records in: 0.066 seconds
[2018-05-15T06:46:00,811][INFO ][logstash.filters.jdbc.readwritedatabase] loader lb-metadata, saved fetched records to import file in: 0.005 seconds
[2018-05-15T06:46:00,844][INFO ][logstash.filters.jdbc.readwritedatabase] loader lb-metadata, imported all fetched records in: 0.032 seconds
[2018-05-15T06:48:00,854][INFO ][logstash.filters.jdbc.readwritedatabase] loader lb-metadata, fetched 4 records in: 0.08 seconds
[2018-05-15T06:48:00,864][INFO ][logstash.filters.jdbc.readwritedatabase] loader lb-metadata, saved fetched records to import file in: 0.009 seconds
[2018-05-15T06:48:00,896][INFO ][logstash.filters.jdbc.readwritedatabase] loader lb-metadata, imported all fetched records in: 0.032 seconds
[2018-05-15T06:50:00,867][INFO ][logstash.filters.jdbc.readwritedatabase] loader lb-metadata, fetched 4 records in: 0.068 seconds
[2018-05-15T06:50:00,872][INFO ][logstash.filters.jdbc.readwritedatabase] loader lb-metadata, saved fetched records to import file in: 0.005 seconds
[2018-05-15T06:50:00,900][INFO ][logstash.filters.jdbc.readwritedatabase] loader lb-metadata, imported all fetched records in: 0.027 seconds
[2018-05-15T06:52:00,899][INFO ][logstash.filters.jdbc.readwritedatabase] loader lb-metadata, fetched 4 records in: 0.067 seconds
[2018-05-15T06:52:00,905][INFO ][logstash.filters.jdbc.readwritedatabase] loader lb-metadata, saved fetched records to import file in: 0.006 seconds
[2018-05-15T06:52:00,932][INFO ][logstash.filters.jdbc.readwritedatabase] loader lb-metadata, imported all fetched records in: 0.027 seconds
[2018-05-15T06:54:00,900][INFO ][logstash.filters.jdbc.readwritedatabase] loader lb-metadata, fetched 4 records in: 0.038 seconds
[2018-05-15T06:54:00,907][INFO ][logstash.filters.jdbc.readwritedatabase] loader lb-metadata, saved fetched records to import file in: 0.006 seconds
[2018-05-15T06:54:00,941][INFO ][logstash.filters.jdbc.readwritedatabase] loader lb-metadata, imported all fetched records in: 0.034 seconds
[2018-05-15T06:56:00,920][INFO ][logstash.filters.jdbc.readwritedatabase] loader lb-metadata, fetched 4 records in: 0.029 seconds
[2018-05-15T06:56:00,925][INFO ][logstash.filters.jdbc.readwritedatabase] loader lb-metadata, saved fetched records to import file in: 0.004 seconds
[2018-05-15T06:56:00,953][INFO ][logstash.filters.jdbc.readwritedatabase] loader lb-metadata, imported all fetched records in: 0.028 seconds
[2018-05-15T06:58:00,948][INFO ][logstash.filters.jdbc.readwritedatabase] loader lb-metadata, fetched 4 records in: 0.029 seconds
[2018-05-15T06:58:00,954][INFO ][logstash.filters.jdbc.readwritedatabase] loader lb-metadata, saved fetched records to import file in: 0.006 seconds
[2018-05-15T06:58:00,982][INFO ][logstash.filters.jdbc.readwritedatabase] loader lb-metadata, imported all fetched records in: 0.028 seconds
[2018-05-15T07:00:00,970][INFO ][logstash.filters.jdbc.readwritedatabase] loader lb-metadata, fetched 4 records in: 0.023 seconds
[2018-05-15T07:00:00,978][INFO ][logstash.filters.jdbc.readwritedatabase] loader lb-metadata, saved fetched records to import file in: 0.007 seconds
[2018-05-15T07:00:01,016][INFO ][logstash.filters.jdbc.readwritedatabase] loader lb-metadata, imported all fetched records in: 0.038 seconds
[2018-05-15T07:02:00,996][INFO ][logstash.filters.jdbc.readwritedatabase] loader lb-metadata, fetched 4 records in: 0.022 seconds
[2018-05-15T07:02:01,000][INFO ][logstash.filters.jdbc.readwritedatabase] loader lb-metadata, saved fetched records to import file in: 0.004 seconds
[2018-05-15T07:02:01,030][INFO ][logstash.filters.jdbc.readwritedatabase] loader lb-metadata, imported all fetched records in: 0.03 seconds
[2018-05-15T07:04:01,058][INFO ][logstash.filters.jdbc.readwritedatabase] loader lb-metadata, fetched 4 records in: 0.056 seconds
[2018-05-15T07:04:01,062][INFO ][logstash.filters.jdbc.readwritedatabase] loader lb-metadata, saved fetched records to import file in: 0.003 seconds
[2018-05-15T07:04:01,088][INFO ][logstash.filters.jdbc.readwritedatabase] loader lb-metadata, imported all fetched records in: 0.027 seconds
[2018-05-15T07:06:01,062][INFO ][logstash.filters.jdbc.readwritedatabase] loader lb-metadata, fetched 4 records in: 0.023 seconds
[2018-05-15T07:06:01,067][INFO ][logstash.filters.jdbc.readwritedatabase] loader lb-metadata, saved fetched records to import file in: 0.004 seconds
[2018-05-15T07:06:01,091][INFO ][logstash.filters.jdbc.readwritedatabase] loader lb-metadata, imported all fetched records in: 0.024 seconds
guyboertje commented 6 years ago

Great. I'll close this.