elastic / logstash

Logstash - transport and process your logs, events, or other data
https://www.elastic.co/products/logstash

Logstash 2.4.1/5.0.x: Environment Variables parsed as Lists/Arrays for Logstash Configuration #6366

Closed berglh closed 4 years ago

berglh commented 7 years ago

I've been replacing the use of logstash-filter-environment plugin with the new feature: Using Environment Variables in the Configuration.

I initially misdiagnosed and reported the issue here: ES hosts array: Support for parsing Environment Variables. @jordansissel advised that I file an issue in the correct project for this particular feature.

One of the things I am attempting to do is supply an array for a configuration item, such as the hosts array in the logstash-output-elasticsearch plugin. This is useful when the environment's hostnames and IPs are ephemeral, and service discovery (querying etcd for the IPs and ports of online Elasticsearch nodes during Logstash startup) determines the host list.

I currently achieve this by using sed to replace a placeholder string in the config at Docker container startup. It's not a major issue, but considering environment variable parsing is now built into Logstash, it would be great to leverage it instead of hacking the config prior to launch.
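
The workaround described above can be sketched as a minimal entrypoint script. Everything here is illustrative (placeholder token, file paths, and the hard-coded host list, which in practice would come from an etcd query):

```shell
#!/bin/sh
# Hypothetical container entrypoint: substitute a placeholder in a config
# template with a host list discovered at startup, before Logstash runs.
ES_HOSTS='"host2:9200", "host3:9200"'   # would come from service discovery

cat > /tmp/pipeline.conf.template <<'EOF'
output { elasticsearch { hosts => [ ES_HOSTS_PLACEHOLDER ] } }
EOF

# Replace the placeholder; Logstash only ever sees the rendered file.
sed "s/ES_HOSTS_PLACEHOLDER/${ES_HOSTS}/" /tmp/pipeline.conf.template > /tmp/pipeline.conf
cat /tmp/pipeline.conf
```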

Sending Logstash's logs to /home/somebody/logstash-5.0.2/logs which is now configured via log4j2.properties
The stdin plugin is now waiting for input:
[2016-12-06T16:07:20,738][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>["http://localhost:9200"]}}
[2016-12-06T16:07:20,740][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2016-12-06T16:07:20,781][WARN ][logstash.outputs.elasticsearch] Marking url as dead. {:reason=>"Elasticsearch Unreachable: [http://localhost:9200][Manticore::SocketException] Connection refused (Connection refused)", :url=>#<URI::HTTP:0x41764dd8 URL:http://localhost:9200>, :error_message=>"Elasticsearch Unreachable: [http://localhost:9200][Manticore::SocketException] Connection refused (Connection refused)", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}

- Sample Data
  - With a host array:

user@computer:~/logstash-5.0.2$ export ES_HOSTS="\"localhost:9200\", \"host2:9200\", \"host3:9200\""

user@computer:~/logstash-5.0.2$ echo $ES_HOSTS
"localhost:9200", "host2:9200", "host3:9200"

user@computer:~/logstash-5.0.2$ bin/logstash -e 'output { elasticsearch { hosts => [ "${ES_HOSTS}" ] } }'
Sending Logstash's logs to ~/logstash-5.0.2/logs which is now configured via log4j2.properties
The stdin plugin is now waiting for input:
[2016-12-06T16:17:02,070][ERROR][logstash.agent ] Pipeline aborted due to error {:exception=>#<LogStash::ConfigurationError: Host '"localhost:9200", "host2:9200", "host3:9200"' was specified, but is not valid! Use either a full URL or a hostname:port string!>, :backtrace=>["~/logstash-5.0.2/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-5.2.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:183:in `host_to_url'", "~/logstash-5.0.2/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-5.2.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:93:in `build_pool'", "org/jruby/RubyArray.java:2414:in `map'", "~/logstash-5.0.2/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-5.2.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:93:in `build_pool'", "~/logstash-5.0.2/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-5.2.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:20:in `initialize'", "~/logstash-5.0.2/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-5.2.1-java/lib/logstash/outputs/elasticsearch/http_client_builder.rb:53:in `build'", "~/logstash-5.0.2/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-5.2.1-java/lib/logstash/outputs/elasticsearch.rb:188:in `build_client'", "~/logstash-5.0.2/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-5.2.1-java/lib/logstash/outputs/elasticsearch/common.rb:13:in `register'", "~/logstash-5.0.2/logstash-core/lib/logstash/output_delegator_strategies/shared.rb:8:in `register'", "~/logstash-5.0.2/logstash-core/lib/logstash/output_delegator.rb:37:in `register'", "~/logstash-5.0.2/logstash-core/lib/logstash/pipeline.rb:196:in `start_workers'", "org/jruby/RubyArray.java:1613:in `each'", "~/logstash-5.0.2/logstash-core/lib/logstash/pipeline.rb:196:in `start_workers'", "~/logstash-5.0.2/logstash-core/lib/logstash/pipeline.rb:153:in `run'", "~/logstash-5.0.2/logstash-core/lib/logstash/agent.rb:250:in `start_pipeline'"]}
[2016-12-06T16:17:02,086][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2016-12-06T16:17:05,074][WARN ][logstash.agent ] stopping pipeline {:id=>"main"}

user@computer:~/logstash-5.0.2$ bin/logstash -e 'output { elasticsearch { hosts => [ "localhost:9200", "host2:9200", "host3:9200" ] } }'
Sending Logstash's logs to ~/logstash-5.0.2/logs which is now configured via log4j2.properties
The stdin plugin is now waiting for input:
[2016-12-06T16:17:21,387][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>["http://localhost:9200", "http://host2:9200", "http://host3:9200"]}}
[2016-12-06T16:17:21,389][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2016-12-06T16:17:21,444][WARN ][logstash.outputs.elasticsearch] Marking url as dead. {:reason=>"Elasticsearch Unreachable: [http://localhost:9200][Manticore::SocketException] Connection refused (Connection refused)", :url=>#<URI::HTTP:0x51dd94f4 URL:http://localhost:9200>, :error_message=>"Elasticsearch Unreachable: [http://localhost:9200][Manticore::SocketException] Connection refused (Connection refused)", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
[2016-12-06T16:17:21,445][ERROR][logstash.outputs.elasticsearch] Failed to install template. {:message=>"Elasticsearch Unreachable: [http://localhost:9200][Manticore::SocketException] Connection refused (Connection refused)", :class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
[2016-12-06T16:17:21,446][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["localhost:9200", "host2:9200", "host3:9200"]}
[2016-12-06T16:17:21,447][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>1000}
[2016-12-06T16:17:21,449][INFO ][logstash.pipeline ] Pipeline main started
[2016-12-06T16:17:21,467][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2016-12-06T16:17:26,390][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:url=>#<URI::HTTP:0x588852e4 URL:http://localhost:9200>, :healthcheck_path=>"/"}
^C[2016-12-06T16:17:29,147][WARN ][logstash.runner ] SIGINT received. Shutting down the agent.
[2016-12-06T16:17:29,151][WARN ][logstash.agent ] stopping pipeline {:id=>"main"}
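
The root cause of the failed run can be checked directly in a shell: the escaped double quotes in the export are literal characters inside a single value, so Logstash substitutes one string (not three) into the hosts array, and the plugin rejects it as one invalid host:

```shell
# What the export in the sample data actually produces: one shell value
# containing literal double quotes and commas.
export ES_HOSTS="\"localhost:9200\", \"host2:9200\", \"host3:9200\""

# Expand it the way Logstash receives it (a single substitution):
set -- "$ES_HOSTS"
echo "$#"   # prints 1 -- one value, not a three-element list
```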

untergeek commented 7 years ago

Just for the sake of hare-brained ideas...

What happens if instead of

$ export ES_HOSTS="\"localhost:9200\", \"host2:9200\", \"host3:9200\""

you instead did

$ export ES_HOSTS='["localhost:9200", "host2:9200", "host3:9200"]'

(of course, you could just escape the brackets instead of using single quotes)

Then you see (and I just verified this in a terminal):

$ echo $ES_HOSTS
["localhost:9200", "host2:9200", "host3:9200"]

In theory, this may let you do:

output {
  elasticsearch {
    hosts => ${ES_HOSTS}
  }
}

I know that this works in Curator. It may or may not work in Logstash. I figure it's worth a try, though. I think that the ENV variables are parsed first, into strings, which would potentially make the square brackets work when the config is parsed (after env var expansion).

berglh commented 7 years ago

@untergeek That's a great idea; I tried it as per your suggestion, but still no dice. I also tried escaping the square brackets, but then the config included the literal string: Host '\["localhost:9200", "host2:9200", "host3:9200"\]' was specified,

logstash-5.0.2$ ES_HOSTS='["localhost:9200", "host2:9200", "host3:9200"]' ./bin/logstash -e 'output { elasticsearch { hosts => "${ES_HOSTS}" } }'
Sending Logstash's logs to /home/uqblloy2/log-shipment/logstash-5.0.2/logs which is now configured via log4j2.properties
The stdin plugin is now waiting for input:
[2016-12-07T10:27:38,043][ERROR][logstash.agent           ] Pipeline aborted due to error {:exception=>#<LogStash::ConfigurationError: Host '["localhost:9200", "host2:9200", "host3:9200"]' was specified, but is not valid! Use either a full URL or a hostname:port string!>, :backtrace=>["~/logstash-5.0.2/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-5.2.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:183:in `host_to_url'", "~/logstash-5.0.2/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-5.2.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:93:in `build_pool'", "org/jruby/RubyArray.java:2414:in `map'", "~/logstash-5.0.2/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-5.2.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:93:in `build_pool'", "~/logstash-5.0.2/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-5.2.1-java/lib/logstash/outputs/elasticsearch/http_client.rb:20:in `initialize'", "~/logstash-5.0.2/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-5.2.1-java/lib/logstash/outputs/elasticsearch/http_client_builder.rb:53:in `build'", "~/logstash-5.0.2/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-5.2.1-java/lib/logstash/outputs/elasticsearch.rb:188:in `build_client'", "~/logstash-5.0.2/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-5.2.1-java/lib/logstash/outputs/elasticsearch/common.rb:13:in `register'", "~/logstash-5.0.2/logstash-core/lib/logstash/output_delegator_strategies/shared.rb:8:in `register'", "~/logstash-5.0.2/logstash-core/lib/logstash/output_delegator.rb:37:in `register'", "~/logstash-5.0.2/logstash-core/lib/logstash/pipeline.rb:196:in `start_workers'", "org/jruby/RubyArray.java:1613:in `each'", "~/logstash-5.0.2/logstash-core/lib/logstash/pipeline.rb:196:in `start_workers'", "~/logstash-5.0.2/logstash-core/lib/logstash/pipeline.rb:153:in `run'", "~/logstash-5.0.2/logstash-core/lib/logstash/agent.rb:250:in `start_pipeline'"]}
[2016-12-07T10:27:38,067][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2016-12-07T10:27:41,051][WARN ][logstash.agent           ] stopping pipeline {:id=>"main"}

untergeek commented 7 years ago

What if you omit the quotes around "${ES_HOSTS}" and make it just hosts => ${ES_HOSTS}?

untergeek commented 7 years ago

Maybe you did, and it made it a string anyway

jordansissel commented 7 years ago

I am thinking of how to do this.

In terms of implementation, validation is possibly where we could support different data types from the environment.

For example, if a plugin declares a setting to be a list of strings, but the config provides some kind of env reference instead, we could define this to take the environment variable string (env is a map of string:string) and split it on commas.

If I try to imagine the code, I kind of want to solve it by adding a new value type in the config grammar that accepts this env ${blah} syntax, and the validation/setup would do the right thing to accept and validate that.
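
A rough sketch of that idea in shell terms (purely illustrative, not Logstash internals): when a setting is declared as a list but the environment supplies a single string, split the string on commas:

```shell
# Illustrative only: split a comma-delimited env value into list elements,
# the coercion being proposed for list-typed settings.
MY_HOSTS="foo:9200,bar:9200,baz:9200"

old_ifs=$IFS
IFS=','
set -- $MY_HOSTS     # word-split the value on commas only
IFS=$old_ifs

for host in "$@"; do
  printf 'host: %s\n' "$host"
done
```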


berglh commented 7 years ago

@untergeek

What if you omit the quotes around "${ES_HOSTS}" and make it just hosts => ${ES_HOSTS}?

The configuration parser complains because it expects any value to be wrapped in double quotes.

untergeek commented 7 years ago

Bummer. That would do it.


alesnav commented 7 years ago

Hello,

I am facing this problem, too. In my case, I'm trying to pass all kafka topics using an array as env variable.

I tried all the options described above, as well as "${KAFKA_TOPICS[@]}", but without success.

Thanks!

patrickwallaws commented 7 years ago

Any updates on this? This is really harshing my time... I'm trying to pass the list as an environment variable to my ECS cluster of Logstash servers.

If there are no updates on the feature, has anyone found a good workaround (other than using a single Elasticsearch host...)?

fbaligand commented 7 years ago

In issue #6665, 6 months ago, I proposed a way to do it that is simple to implement, and which I'm ready to implement (I wrote the original Logstash env var injection):

What do you think about it?

nick-george commented 6 years ago

To whoever ends up implementing this: it would be fantastic if you could also make it possible to pass in a nil value (as opposed to an empty string).

It's a long story, but being able to pass in a nil would allow me to get around this issue https://github.com/logstash-plugins/logstash-input-beats/issues/196.

Thanks! Nick

jordansissel commented 6 years ago

Please do not send "+1" comments with no other content. This generates a ton of email for everyone.

I have deleted all prior +1 comments. At the time of deletion, there were 4.

If you feel compelled to "+1" something, please use GitHub issue reactions instead.

huyqut commented 6 years ago

Any updates on this issue yet?

jordanhenderson commented 6 years ago

@fbaligand any updates on this? We really need this feature to keep our pipeline config clean. Also, unsure from the title whether this applies to logstash 6.x too? I'm hoping so 👍

fbaligand commented 6 years ago

Hi @jordanhenderson

Well, 1 year ago, I was ready to make a PR to implement the feature as explained in my previous comment. But @jordansissel (Logstash's creator) explained in the two comments below how the feature should be implemented. Since that requires changing the Logstash grammar, I don't know how to implement it, which is why I never made the PR.

https://github.com/elastic/logstash/issues/6665#issuecomment-281496930 https://github.com/elastic/logstash/issues/6665#issuecomment-282147424

jordansissel commented 6 years ago

If we focus on just lists, we could probably make the proposed syntax possible (thing => ${SOME_ENV_VAR}), accepting comma-delimited string values as a list, but this assumes that a single value (of that list) won't need to include a comma.

For setting multiple hosts, for example, something like:

export MY_HOSTS=foo,bar,baz

and in Logstash:

# ...
hosts => ${MY_HOSTS}

But what if the value itself contains a comma, such as when passing a list to the date filter where one date pattern uses a European-style decimal separator (a comma), e.g. "YYYY-MM-DD HH:mm:ss,SSS" -- should users expect to need to escape the comma? What feedback can we provide?

In the above date example, what's the expected interpretation:

# Match both US and EU-style decimal separators
export FORMATS="YYYY-MM-DD HH:mm:ss.SSS,YYYY-MM-DD HH:mm:ss,SSS"
filter {
  date {
    match => [ "time", $FORMATS ]
  }
}
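
Running a naive comma split over that FORMATS value shows the ambiguity concretely: the EU-style pattern loses its ",SSS" tail. (Shell is used here only to illustrate the splitting; this is not Logstash behavior.)

```shell
# The two-format value from the example above.
FORMATS="YYYY-MM-DD HH:mm:ss.SSS,YYYY-MM-DD HH:mm:ss,SSS"

old_ifs=$IFS
IFS=','
set -- $FORMATS      # naive comma split
IFS=$old_ifs

# Prints THREE elements, not two: the second pattern was cut at its comma.
printf 'element: %s\n' "$@"
```
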
jordansissel commented 6 years ago

@jsvd What do you think? I struggle with finding a syntax that I think will satisfy what I think most users are wanting without creating a bunch of frustrating edge cases or alien syntax.

nick-george commented 6 years ago

IMHO, using environment variables is a little clunky because we can't use:

Could there be an alternative to using environment variables? Such as:

I'm guessing any such mechanism would require a change to the Logstash DSL. However, I feel this would give LS config authors the best flexibility in the long term.

Cheers, Nick

jordanhenderson commented 6 years ago

I am running multiple instances of logstash in a docker swarm. Environment variables are an easy way for us to configure each different instance of logstash via our CI process, rather than requiring a persistent service. These variables are dynamically switched between at deploy time, so they can be hardcoded in separate files in VCS, and then switched between depending on branch.

If we could put per-environment configuration within logstash.yml, this might work... however, I think adding a separate memcached/elastic instance for this would be overly complex. Perhaps something like 'logstash.dev.yml', 'logstash.prod.yml', etc. would do the job here, as we would be able to define per-environment config overrides in any format, as long as Logstash will look for/consume these files (preferably by merging the base logstash.yml and logstash.env.yml).

jordanhenderson commented 6 years ago

I.e. the only environment variable needed would be a simple string (LOGSTASH_ENV=dev, LOGSTASH_ENV=prod, etc.) rather than complex formats. That way you get the benefit of versioning everything in a clean way while avoiding the formatting issues introduced by environment substitution.

fbaligand commented 6 years ago

Hi @nick-george ,

First, in the future, sources other than environment variables will probably become available for interpolation using the ${SOME_KEY} syntax. But the purpose of this particular issue is to support env var injection for arrays. As @jordanhenderson says, when Logstash runs inside a Docker container, environment variables are the preferred way to inject environment configuration. As for types: "boolean" values are already handled; "nil" and "array" values could be specifically processed (they are not for now).

fbaligand commented 6 years ago

Hi @jordansissel ,

To answer your question about values that contain a comma:

That's just my humble opinion :)

jordansissel commented 6 years ago

@fbaligand if we scope this to solve providing a list of hosts from the environment instead of a generic list syntax, then I think it becomes easier to set user expectations.

Do you want to scope this to just a list of hosts? or even a specific setting on a specific plugin?

fbaligand commented 6 years ago

I think the scope is every plugin option that is tied to the environment. The examples that come to mind are: lists of host:port (typically to reach Elasticsearch), lists of CA files (to configure HTTPS on the beats input), and lists of path patterns (to configure the file input).

I also think it is important to handle an empty array: if the environment variable equals "" (the empty string), it should be converted to an empty array. This is particularly useful for the CA files list (in a dev environment).

And finally, as requested by @nick-george, it would be nice to handle the special value "nil" by converting it to a nil value. This is not specific to arrays, by the way. It can be useful for advanced options that are not defined in a dev environment but are defined in production.

If the implementation solves all these cases, I think we cover 99% of the needs.

marfedd commented 4 years ago

Hi!

Any chance this will be implemented? It's been 3 years already, and there is a pull request that has been open for almost a year.

fbaligand commented 4 years ago

@yaauie Thanks for this new feature, which we waited years for. But I can't hide that I'm a bit sad: the implementation only processes options with the "uri_list" type. A lot of plugins declare hosts with the "array" type, so all the other environment arrays, such as file path arrays, are not processed. This PR only partially fulfills this issue.

yaauie commented 4 years ago

@yaauie Thanks for this new feature, which we waited years for. But I can't hide that I'm a bit sad: the implementation only processes options with the "uri_list" type. A lot of plugins declare hosts with the "array" type, so all the other environment arrays, such as file path arrays, are not processed. This PR only partially fulfills this issue.

My recent patch was intentionally limited to URI lists, because it is the only place where we can make a change that meaningfully addresses many use cases without breaking in-the-wild configurations or requiring breaking changes in plugins, for three reasons:

But URIs can be used to represent file paths, so this also gives us a path forward for plugins that wish to use one-to-many expansion of environment and keystore variables to populate file lists. Let's take a look at those plugins to see if they could meaningfully be moved over to this functionality, or address those needs in some similar way.

fbaligand commented 4 years ago

Hi @yaauie, thanks for your answer. Since you mention moving plugins that have array configuration over to this feature, I tried to generate a list of all options that are array-typed. Here's the list:

'facility_labels': https://www.elastic.co/guide/en/logstash/current/plugins-filters-syslog_pri.html#plugins-filters-syslog_pri-facility_labels
'severity_labels': https://www.elastic.co/guide/en/logstash/current/plugins-filters-syslog_pri.html#plugins-filters-syslog_pri-severity_labels
'hosts': https://www.elastic.co/guide/en/logstash/current/plugins-filters-memcached.html#plugins-filters-memcached-hosts
'event_hubs': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-azure_event_hubs.html#plugins-inputs-azure_event_hubs-event_hubs
'event_hub_connections': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-azure_event_hubs.html#plugins-inputs-azure_event_hubs-event_hub_connections
'decrement': https://www.elastic.co/guide/en/logstash/current/plugins-outputs-statsd.html#plugins-outputs-statsd-decrement
'increment': https://www.elastic.co/guide/en/logstash/current/plugins-outputs-statsd.html#plugins-outputs-statsd-increment
'fields': https://www.elastic.co/guide/en/logstash/current/plugins-filters-de_dot.html#plugins-filters-de_dot-fields
'exclude_fields': https://www.elastic.co/guide/en/logstash/current/plugins-outputs-influxdb.html#plugins-outputs-influxdb-exclude_fields
'send_as_tags': https://www.elastic.co/guide/en/logstash/current/plugins-outputs-influxdb.html#plugins-outputs-influxdb-send_as_tags
'versions': https://www.elastic.co/guide/en/logstash/current/plugins-codecs-netflow.html#plugins-codecs-netflow-versions
'require_jars': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-jms.html#plugins-inputs-jms-require_jars
'skip_headers': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-jms.html#plugins-inputs-jms-skip_headers
'skip_properties': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-jms.html#plugins-inputs-jms-skip_properties
'prepared_statement_bind_values': https://www.elastic.co/guide/en/logstash/current/plugins-filters-jdbc_streaming.html#plugins-filters-jdbc_streaming-prepared_statement_bind_values
'tag_on_default_use': https://www.elastic.co/guide/en/logstash/current/plugins-filters-jdbc_streaming.html#plugins-filters-jdbc_streaming-tag_on_default_use
'tag_on_failure': https://www.elastic.co/guide/en/logstash/current/plugins-filters-jdbc_streaming.html#plugins-filters-jdbc_streaming-tag_on_failure
'tag_on_failure': https://www.elastic.co/guide/en/logstash/current/plugins-filters-jdbc_static.html#plugins-filters-jdbc_static-tag_on_failure
'tag_on_default_use': https://www.elastic.co/guide/en/logstash/current/plugins-filters-jdbc_static.html#plugins-filters-jdbc_static-tag_on_default_use
'loaders': https://www.elastic.co/guide/en/logstash/current/plugins-filters-jdbc_static.html#plugins-filters-jdbc_static-loaders
'local_db_objects': https://www.elastic.co/guide/en/logstash/current/plugins-filters-jdbc_static.html#plugins-filters-jdbc_static-local_db_objects
'local_lookups': https://www.elastic.co/guide/en/logstash/current/plugins-filters-jdbc_static.html#plugins-filters-jdbc_static-local_lookups
'metrics': https://www.elastic.co/guide/en/logstash/current/plugins-filters-metricize.html#plugins-filters-metricize-metrics
'prepared_statement_bind_values': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-jdbc.html#plugins-inputs-jdbc-prepared_statement_bind_values
'sfdc_fields': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-salesforce.html#plugins-inputs-salesforce-sfdc_fields
'tag_on_failure': https://www.elastic.co/guide/en/logstash/current/plugins-filters-dissect.html#plugins-filters-dissect-tag_on_failure
'lines': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-generator.html#plugins-inputs-generator-lines
'bucket': https://www.elastic.co/guide/en/logstash/current/plugins-outputs-riak.html#plugins-outputs-riak-bucket
'indices': https://www.elastic.co/guide/en/logstash/current/plugins-outputs-riak.html#plugins-outputs-riak-indices
'btags': https://www.elastic.co/guide/en/logstash/current/plugins-outputs-boundary.html#plugins-outputs-boundary-btags
'failure_type_logging_whitelist': https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html#plugins-outputs-elasticsearch-failure_type_logging_whitelist
'lines': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-java_generator.html#plugins-inputs-java_generator-lines
'channels': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-irc.html#plugins-inputs-irc-channels
'ssl_extra_chain_certs': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-graphite.html#plugins-inputs-graphite-ssl_extra_chain_certs
'dd_tags': https://www.elastic.co/guide/en/logstash/current/plugins-outputs-datadog_metrics.html#plugins-outputs-datadog_metrics-dd_tags
'match': https://www.elastic.co/guide/en/logstash/current/plugins-filters-date.html#plugins-filters-date-match
'tag_on_failure': https://www.elastic.co/guide/en/logstash/current/plugins-filters-date.html#plugins-filters-date-tag_on_failure
'exclude_tables': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-sqlite.html#plugins-inputs-sqlite-exclude_tables
'fields': https://www.elastic.co/guide/en/logstash/current/plugins-outputs-csv.html#plugins-outputs-csv-fields
'exclude_keys': https://www.elastic.co/guide/en/logstash/current/plugins-filters-kv.html#plugins-filters-kv-exclude_keys
'include_keys': https://www.elastic.co/guide/en/logstash/current/plugins-filters-kv.html#plugins-filters-kv-include_keys
'hosts': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-snmp.html#plugins-inputs-snmp-hosts
'fields': https://www.elastic.co/guide/en/logstash/current/plugins-filters-geoip.html#plugins-filters-geoip-fields
'tag_on_failure': https://www.elastic.co/guide/en/logstash/current/plugins-filters-geoip.html#plugins-filters-geoip-tag_on_failure
'rooms': https://www.elastic.co/guide/en/logstash/current/plugins-outputs-xmpp.html#plugins-outputs-xmpp-rooms
'users': https://www.elastic.co/guide/en/logstash/current/plugins-outputs-xmpp.html#plugins-outputs-xmpp-users
'follows': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-twitter.html#plugins-inputs-twitter-follows
'keywords': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-twitter.html#plugins-inputs-twitter-keywords
'languages': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-twitter.html#plugins-inputs-twitter-languages
'meter': https://www.elastic.co/guide/en/logstash/current/plugins-filters-metrics.html#plugins-filters-metrics-meter
'percentiles': https://www.elastic.co/guide/en/logstash/current/plugins-filters-metrics.html#plugins-filters-metrics-percentiles
'rates': https://www.elastic.co/guide/en/logstash/current/plugins-filters-metrics.html#plugins-filters-metrics-rates
'tags': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-azure_event_hubs.html#plugins-inputs-azure_event_hubs-tags
'clones': https://www.elastic.co/guide/en/logstash/current/plugins-filters-clone.html#plugins-filters-clone-clones
'multi_value': https://www.elastic.co/guide/en/logstash/current/plugins-outputs-zabbix.html#plugins-outputs-zabbix-multi_value
'overwrite': https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html#plugins-filters-grok-overwrite
'tag_on_failure': https://www.elastic.co/guide/en/logstash/current/plugins-filters-grok.html#plugins-filters-grok-tag_on_failure
'ignore_metadata': https://www.elastic.co/guide/en/logstash/current/plugins-outputs-gelf.html#plugins-outputs-gelf-ignore_metadata
'level': https://www.elastic.co/guide/en/logstash/current/plugins-outputs-gelf.html#plugins-outputs-gelf-level
'hosts': https://www.elastic.co/guide/en/logstash/current/plugins-outputs-lumberjack.html#plugins-outputs-lumberjack-hosts
'include_path': https://www.elastic.co/guide/en/logstash/current/plugins-codecs-protobuf.html#plugins-codecs-protobuf-include_path
'transliterate': https://www.elastic.co/guide/en/logstash/current/plugins-filters-i18n.html#plugins-filters-i18n-transliterate
'exclude_metrics': https://www.elastic.co/guide/en/logstash/current/plugins-outputs-graphite.html#plugins-outputs-graphite-exclude_metrics
'include_metrics': https://www.elastic.co/guide/en/logstash/current/plugins-outputs-graphite.html#plugins-outputs-graphite-include_metrics
'hosts': https://www.elastic.co/guide/en/logstash/current/plugins-filters-elasticsearch.html#plugins-filters-elasticsearch-hosts
'tag_on_failure': https://www.elastic.co/guide/en/logstash/current/plugins-filters-elasticsearch.html#plugins-filters-elasticsearch-tag_on_failure
'arguments': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-rabbitmq.html#plugins-inputs-rabbitmq-arguments
'channels': https://www.elastic.co/guide/en/logstash/current/plugins-outputs-irc.html#plugins-outputs-irc-channels
'exclude_metrics': https://www.elastic.co/guide/en/logstash/current/plugins-codecs-graphite.html#plugins-codecs-graphite-exclude_metrics
'include_metrics': https://www.elastic.co/guide/en/logstash/current/plugins-codecs-graphite.html#plugins-codecs-graphite-include_metrics
'topics': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-kafka.html#plugins-inputs-kafka-topics
'docinfo_fields': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-elasticsearch.html#plugins-inputs-elasticsearch-docinfo_fields
'hosts': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-elasticsearch.html#plugins-inputs-elasticsearch-hosts
'filters': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-cloudwatch.html#plugins-inputs-cloudwatch-filters
'metrics': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-cloudwatch.html#plugins-inputs-cloudwatch-metrics
'statistics': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-cloudwatch.html#plugins-inputs-cloudwatch-statistics
'community': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-snmptrap.html#plugins-inputs-snmptrap-community
'ssl_certificate_authorities': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-tcp.html#plugins-inputs-tcp-ssl_certificate_authorities
'ssl_extra_chain_certs': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-tcp.html#plugins-inputs-tcp-ssl_extra_chain_certs
'gsub': https://www.elastic.co/guide/en/logstash/current/plugins-filters-mutate.html#plugins-filters-mutate-gsub
'lowercase': https://www.elastic.co/guide/en/logstash/current/plugins-filters-mutate.html#plugins-filters-mutate-lowercase
'strip': https://www.elastic.co/guide/en/logstash/current/plugins-filters-mutate.html#plugins-filters-mutate-strip
'uppercase': https://www.elastic.co/guide/en/logstash/current/plugins-filters-mutate.html#plugins-filters-mutate-uppercase
'capitalize': https://www.elastic.co/guide/en/logstash/current/plugins-filters-mutate.html#plugins-filters-mutate-capitalize
'channels': https://www.elastic.co/guide/en/logstash/current/plugins-outputs-juggernaut.html#plugins-outputs-juggernaut-channels
'tag_on_failure': https://www.elastic.co/guide/en/logstash/current/plugins-filters-json.html#plugins-filters-json-tag_on_failure
'dd_tags': https://www.elastic.co/guide/en/logstash/current/plugins-outputs-datadog.html#plugins-outputs-datadog-dd_tags
'ranges': https://www.elastic.co/guide/en/logstash/current/plugins-filters-range.html#plugins-filters-range-ranges
'facility_labels': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-syslog.html#plugins-inputs-syslog-facility_labels
'severity_labels': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-syslog.html#plugins-inputs-syslog-severity_labels
'add_tag': https://www.elastic.co/guide/en/logstash/current/plugins-filters-aggregate.html#plugins-filters-aggregate-add_tag
'remove_field': https://www.elastic.co/guide/en/logstash/current/plugins-filters-aggregate.html#plugins-filters-aggregate-remove_field
'remove_tag': https://www.elastic.co/guide/en/logstash/current/plugins-filters-aggregate.html#plugins-filters-aggregate-remove_tag
'fields': https://www.elastic.co/guide/en/logstash/current/plugins-codecs-cef.html#plugins-codecs-cef-fields
'cipher_suites': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-http.html#plugins-inputs-http-cipher_suites
'ssl_certificate_authorities': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-http.html#plugins-inputs-http-ssl_certificate_authorities
'coalesce': https://www.elastic.co/guide/en/logstash/current/plugins-filters-alter.html#plugins-filters-alter-coalesce
'condrewrite': https://www.elastic.co/guide/en/logstash/current/plugins-filters-alter.html#plugins-filters-alter-condrewrite
'condrewriteother': https://www.elastic.co/guide/en/logstash/current/plugins-filters-alter.html#plugins-filters-alter-condrewriteother
'patterns_dir': https://www.elastic.co/guide/en/logstash/current/plugins-codecs-multiline.html#plugins-codecs-multiline-patterns_dir
'rooms': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-xmpp.html#plugins-inputs-xmpp-rooms
'arguments': https://www.elastic.co/guide/en/logstash/current/plugins-outputs-rabbitmq.html#plugins-outputs-rabbitmq-arguments
'tag_on_failure': https://www.elastic.co/guide/en/logstash/current/plugins-filters-urldecode.html#plugins-filters-urldecode-tag_on_failure
'timeout_tags': https://www.elastic.co/guide/en/logstash/current/plugins-filters-aggregate.html#plugins-filters-aggregate-timeout_tags
'resolve': https://www.elastic.co/guide/en/logstash/current/plugins-filters-dns.html#plugins-filters-dns-resolve
'reverse': https://www.elastic.co/guide/en/logstash/current/plugins-filters-dns.html#plugins-filters-dns-reverse
'cipher_suites': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-beats.html#plugins-inputs-beats-cipher_suites
'ssl_certificate_authorities': https://www.elastic.co/guide/en/logstash/current/plugins-inputs-beats.html#plugins-inputs-beats-ssl_certificate_authorities

What do you think?

kyrias commented 4 years ago

We were also just hit by this when trying to pass in multiple pipeline IDs to xpack.management.pipeline.id through an environment variable when running logstash in Kubernetes.