@devopsberlin before building the gem, you will need to vendor the jar dependencies; this will ensure that they get packaged within the .gem
and are available on the classpath when Logstash attempts to load them.
rake vendor && gem build logstash-output-kafka.gemspec
@yaauie Thanks!
@yaauie, when running the command `rake vendor && gem build logstash-output-kafka.gemspec` I got the error below; can you please advise?
Thanks
rake aborted!
LoadError: cannot load such file -- logstash/devutils/rake
/home/user/.rvm/rubies/ruby-2.4.1/lib/ruby/2.4.0/rubygems/core_ext/kernel_require.rb:55:in `require'
/home/user/.rvm/rubies/ruby-2.4.1/lib/ruby/2.4.0/rubygems/core_ext/kernel_require.rb:55:in `require'
logstash-sqs-kafka-local/logstash-output-kafka/Rakefile:1:in `<top (required)>'
/home/user/.rvm/gems/ruby-2.4.1@global/gems/rake-12.0.0/lib/rake/rake_module.rb:28:in `load'
/home/user/.rvm/gems/ruby-2.4.1@global/gems/rake-12.0.0/lib/rake/rake_module.rb:28:in `load_rakefile'
/home/user/.rvm/gems/ruby-2.4.1@global/gems/rake-12.0.0/lib/rake/application.rb:687:in `raw_load_rakefile'
/home/user/.rvm/gems/ruby-2.4.1@global/gems/rake-12.0.0/lib/rake/application.rb:96:in `block in load_rakefile'
/home/user/.rvm/gems/ruby-2.4.1@global/gems/rake-12.0.0/lib/rake/application.rb:178:in `standard_exception_handling'
/home/user/.rvm/gems/ruby-2.4.1@global/gems/rake-12.0.0/lib/rake/application.rb:95:in `load_rakefile'
/home/user/.rvm/gems/ruby-2.4.1@global/gems/rake-12.0.0/lib/rake/application.rb:79:in `block in run'
/home/user/.rvm/gems/ruby-2.4.1@global/gems/rake-12.0.0/lib/rake/application.rb:178:in `standard_exception_handling'
/home/user/.rvm/gems/ruby-2.4.1@global/gems/rake-12.0.0/lib/rake/application.rb:77:in `run'
/home/user/.rvm/gems/ruby-2.4.1@global/gems/rake-12.0.0/exe/rake:27:in `<top (required)>'
/home/user/.rvm/rubies/ruby-2.4.1/bin/rake:22:in `load'
/home/user/.rvm/rubies/ruby-2.4.1/bin/rake:22:in `<main>'
@devopsberlin it appears that one of the dependencies isn't on your load path; do you have the gem's dependencies installed? From the looks of it, the `logstash-devutils` dependency is missing (and others may be, too).
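You can confirm which of them are present with a quick check (the gem name below is taken from the error above):

```sh
# list installed versions of the gem named in the LoadError, if any
gem list logstash-devutils
```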
This project, like most Ruby projects, uses bundler to manage dependencies; the following will check for the `bundle` command, and install the gem that provides it if it is missing:
command -v bundle || gem install bundler
Once `bundle` is available, you will need to invoke it from the root of the project to install the project dependencies; this will inspect the `Gemfile` in the project's root, resolve the dependency graph, and install them.
bundle install
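For reference, the `Gemfile` of a typical Logstash plugin is minimal; a sketch of what it usually contains (the actual file in this repo may differ):

```ruby
# Pull the dependency list from the gemspec, which declares
# logstash-devutils and friends as development dependencies.
source 'https://rubygems.org'
gemspec
```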
Once the bundle is installed, the following should vendor the jar dependencies:
rake vendor
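If the task succeeds, the jars should appear inside the project tree; for the kafka plugins they typically land under `vendor/jar-dependencies` (the exact path may vary):

```sh
# confirm the Kafka client jars were actually vendored
ls vendor/jar-dependencies
```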
Once the jar dependencies are vendored, the gem can be built:
gem build logstash-output-kafka.gemspec
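The built `.gem` can then be installed into a Logstash installation with the bundled plugin tool; the version in the filename below is a placeholder, so use whatever `gem build` printed:

```sh
# run from the Logstash home directory; --no-verify skips signature
# verification, which locally-built gems won't pass
bin/logstash-plugin install --no-verify /path/to/logstash-output-kafka-X.Y.Z.gem
```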
@devopsberlin since the jar dependencies are loaded using JRuby, you'll also need to be using JRuby 9.1.x, with `$JAVA_HOME` pointing to the path of a Java 8 runtime.
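A quick way to confirm the environment before retrying the build:

```sh
ruby -v           # should report jruby 9.1.x
echo "$JAVA_HOME" # should point at a Java 8 installation
java -version     # should report version 1.8.x
```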
@yaauie, thank you for your clear and helpful explanation, it works.
If you don't mind me asking one more question: I am using docker.elastic.co/logstash/logstash-oss:6.0.0 with the kafka output plugin, and the plugin stops pushing data into Kafka when one of the Kafka nodes goes down or comes back with a different broker id.
1/19/2018 10:36:47 PM[2018-01-19T20:36:47,283][WARN ][org.apache.kafka.clients.NetworkClient] Connection to node 1 could not be established. Broker may not be available.
1/19/2018 11:16:44 PM[2018-01-19T21:16:44,320][INFO ][logstash.outputs.kafka ] Sending batch to Kafka failed. Will retry after a delay. {:batch_size=>1, :failures=>1, :sleep=>0.01}
1/19/2018 11:46:44 PM[2018-01-19T21:46:44,876][INFO ][logstash.outputs.kafka ] Sending batch to Kafka failed. Will retry after a delay. {:batch_size=>1, :failures=>1, :sleep=>0.01}
The `retries` parameter might not help in this case, because the broker ids can differ from the ones the Logstash container started with (for example, the Kafka brokers can change from [1,2,3] to [1,2,4]).
# If you choose to set `retries`, a value greater than zero will cause the
# client to only retry a fixed number of times. This will result in data loss
# if a transient error outlasts your retry count.
#
https://www.elastic.co/guide/en/logstash/5.6/plugins-outputs-kafka.html
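(For illustration only, this is how `retries` would be configured; the value is arbitrary and, per the excerpt above, any finite value risks data loss:)

```
output {
  kafka {
    bootstrap_servers => "kafka:9092"
    # arbitrary illustration; a finite retry count risks data loss
    retries => 3
  }
}
```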
Is there a way to force Logstash to exit / kill the process in this case? That way a new Logstash container would be relaunched with the new broker ids and the service would start properly.
So right now I have two solutions for this: the first is to restart the service that runs the Logstash containers each time I change the Kafka nodes; the second is to change the plugin to kill the process when it catches this error message (an external variant of this is sketched below). But I wonder, maybe you have a better solution?
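For reference, rather than patching the plugin itself, an external watchdog is one way to approximate the second option. A minimal sketch, assuming Logstash runs in a Docker container named `logstash` and treating ten failures in five minutes as the restart threshold (both are assumptions):

```sh
#!/bin/sh
# Hypothetical watchdog: restart the Logstash container once the kafka
# output has logged repeated send failures. The container name and the
# threshold are assumptions, not part of the plugin.
CONTAINER=logstash
THRESHOLD=10
while true; do
  failures=$(docker logs --since 5m "$CONTAINER" 2>&1 \
    | grep -c 'Sending batch to Kafka failed')
  if [ "$failures" -ge "$THRESHOLD" ]; then
    docker restart "$CONTAINER"
  fi
  sleep 60
done
```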
Thanks again for all your help!
output {
  kafka {
    bootstrap_servers => "kafka:9092"
    topic_id => "topic"
    codec => "json"
    message_key => "key"
  }
  #stdout { codec => "rubydebug" }
}
Thanks
@devopsberlin I'm not sure how to answer your second question, but I see you've already filed it as https://github.com/elastic/logstash/issues/8996; I'll flag it with the Logstash team tomorrow and try to get an answer/fix prioritised.
Closing — original issue addressed; secondary issue filed elsewhere
I am getting the below error after building and installing my plugin, using docker.elastic.co/logstash/logstash-oss:6.0.0.
Made my changes and then built:
Removed the old plugin and installed the new one:
Error:
Could you please suggest how to fix this issue? Thanks.