Closed — kfox1111 closed this 2 months ago
It probably can be wired together by using Fluent Bit HTTP Output Plugin and Logstash HTTP Input Plugin. There is even support for MessagePack in Logstash.
Related to #352.
@kfox1111 @wkruse would you please point me to the Logstash HTTP input spec? Meaning, how does the HTTP payload need to be formatted? (I could not find the info.)
I haven't used the HTTP input plugin, but based on other plugins and what's in the docs, I think the HTTP input plugin is just a transport. It can be paired up with a codec plugin (defaults to their JSON one), which I think is documented here: https://www.elastic.co/guide/en/logstash/current/plugins-codecs-json.html Though for continuous streams, the multiline one might be better.
Based on this page though: http://brewhouse.io/blog/2014/11/04/big-data-with-elk-stack.html
It really looks like it just takes JSON data and passes it along: one document per log entry, and the content can be literally anything. It doesn't appear to do any processing of the data out of the box, so the schema is probably not important.
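To make "one document per log entry, content can be anything" concrete, here is a minimal Python sketch of what such an entry looks like on the wire (the field names and values are invented for the example; Logstash's json codec does not enforce any schema):

```python
import json

# Each log entry ships as one self-contained JSON document. The keys below
# are made up -- the receiving end simply stores whatever arrives.
entry = {
    "@timestamp": "2018-08-22T10:06:40.031Z",
    "host": "web-1",
    "message": "GET /index.html 200",
}

payload = json.dumps(entry)   # what actually travels over HTTP
decoded = json.loads(payload) # what Logstash's json codec recovers
print(decoded["message"])
```

The round trip is lossless, which is why no agreed-upon schema is needed between shipper and receiver.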
So to forward logs from Fluent Bit to a Logstash server, we need to use this?
@wkruse @edsiper @kfox1111 @viveksamaga Now I'm looking for a way to send logs with Fluent Bit and store them with Logstash. How about using the Fluent Bit Forward Output Plugin and the Logstash TCP Input Plugin rather than the HTTP Input and Output Plugins?
@sicriops did you manage to configure that?
I tried fluent-bit with the fluent plugin for Logstash, and that doesn't work. I was hoping to just use Logstash as my aggregator; now I'll have to deploy fluentd for that.
I just tested @wkruse's proposal (forward output and TCP input with the fluent codec) and it didn't work.
This worked for me:
fluentbit output configuration:

```
[OUTPUT]
    Name   http
    Match  *
    Host   logstash
    Port   12345
    Format json
```
logstash input configuration:

```
input {
  http {
    port => 12345
  }
}
```
As fluentbit sends a correct `content_type` header, Logstash interprets it and treats the information correctly.
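The content-type negotiation above is easy to see end to end. Below is a self-contained Python sketch: a stdlib HTTP server stands in for the Logstash http input (it is not Logstash, just an illustration of the same mechanism), and a client POSTs one JSON event the way fluent-bit's http output with `Format json` roughly would:

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

received = {}

# Stand-in for the Logstash http input: it inspects Content-Type and
# decodes the body accordingly (only application/json handled here).
class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        body = self.rfile.read(length)
        if self.headers["Content-Type"] == "application/json":
            received["event"] = json.loads(body)
        self.send_response(200)
        self.end_headers()

    def log_message(self, *args):  # keep output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)  # ephemeral port
threading.Thread(target=server.serve_forever, daemon=True).start()

event = {"log": "hello from fluent-bit"}
req = urllib.request.Request(
    f"http://127.0.0.1:{server.server_port}/",
    data=json.dumps(event).encode(),
    headers={"Content-Type": "application/json"},
)
urllib.request.urlopen(req).close()
server.shutdown()
print(received["event"])
```

If the header said `application/msgpack` instead, the receiver would need a msgpack decoder, which is where the error below comes from.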
I also tested with `msgpack`, but it didn't work. It raised the following error:
```
logstash_1 | [2018-08-22T10:06:40,031][ERROR][logstash.inputs.http ] unable to process event. {:request=>{"request_method"=>"POST", "request_path"=>"/", "request_uri"=>"/", "http_version"=>"HTTP/1.1", "http_host"=>"logstash:12345", "content_length"=>"168", "content_type"=>"application/msgpack"}, :message=>"undefined method `set' for nil:NilClass\nDid you mean? send", :class=>"NoMethodError", :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-codec-msgpack-3.0.7-java/lib/logstash/codecs/msgpack.rb:36:in `decode'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-http-3.0.10/lib/logstash/inputs/http.rb:157:in `block in run'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-input-http-3.0.10/lib/logstash/util/http_compressed_requests.rb:27:in `call'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/rack-1.6.6/lib/rack/builder.rb:153:in `call'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/puma-2.16.0-java/lib/puma/server.rb:557:in `handle_request'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/puma-2.16.0-java/lib/puma/server.rb:404:in `process_client'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/puma-2.16.0-java/lib/puma/server.rb:270:in `block in run'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/puma-2.16.0-java/lib/puma/thread_pool.rb:106:in `block in spawn_thread'"]}
```
Versions used:
Going back to the original requirement, maybe @kfox1111 was thinking of sending information from fluentbit to Logstash using the beats input.
I am not familiar with Logstash, but per my understanding the HTTP output in JSON format should be enough.
Yes, it could be. You can even add more security by enabling HTTP auth & SSL via:
logstash pipeline example:

```
input {
  http {
    port => 12345
    ssl => true
    user => "log"
    password => "js5CsB3MP43gnS9J"
    keystore => "/usr/share/logstash/config/keystore.jks"
    keystore_password => "123pass"
    add_field => { "[@metadata][http-input]" => "" }
  }
}

filter {
  if [@metadata][http-input] {
    mutate { remove_field => ["headers"] }
  }
}

output {
  stdout { codec => rubydebug }
}
```
* keystore file generated via: `keytool -genkey -keyalg RSA -alias mycert -keystore keystore.jks -storepass 123pass -validity 360 -keysize 2048`
* `fluentbit` configuration:

```
[OUTPUT]
    Name        http
    Match       *
    Host        logstash
    Port        12345
    Format      json
    HTTP_User   log
    HTTP_Passwd js5CsB3MP43gnS9J
    tls         On
    tls.verify  Off
```
Maybe this is enough for @kfox1111 .
Interesting. I'll have to give that a try. Thanks. :)
btw, anyone interested in writing a simple HowTo/article in Markdown?
I can do it. I'm not sure where, because this is not an output as such, but I will open a PR on https://github.com/fluent/fluent-bit-docs with this information and we can discuss it there.
@mpucholblasco thanks, for now please point me to a gist because I will publish a new tutorials section (different git repo)
Thanks, using http works. I just need to look at flattening the MESSAGE JSON into further fields.
Hello @edsiper , I left a gist page here: https://gist.github.com/mpucholblasco/b115a81af8832236b1be04d71146f2fe
Tell me if you need anything else.
@trevorndodds, records sent to the Logstash input are JSON events. This `MESSAGE` field is sent by fluent-bit, so you have to flatten it on the fluent-bit side (via parsers, I suppose).
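If flattening outside fluent-bit is acceptable, the transformation itself is simple. A hedged Python sketch of the idea (the nested `MESSAGE` payload and its keys are invented for the example; this is not fluent-bit's parser, just the equivalent operation):

```python
import json

def flatten_message(event: dict) -> dict:
    """If MESSAGE holds a JSON-encoded string, merge its keys into the event."""
    msg = event.get("MESSAGE")
    if isinstance(msg, str):
        try:
            inner = json.loads(msg)
        except ValueError:
            return event  # MESSAGE was not JSON; leave the event untouched
        if isinstance(inner, dict):
            event = {**event, **inner}  # promote nested keys to top level
            del event["MESSAGE"]
    return event

event = {"host": "web-1", "MESSAGE": '{"level": "info", "user": "alice"}'}
print(flatten_message(event))
```

In fluent-bit itself the same effect would come from a parser/filter on the `MESSAGE` field before the record reaches the output.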
@mpucholblasco thanks for the contribution! I will push the articles repo shortly.
@mpucholblasco thanks, migrating here: https://docs.fluentbit.io/tutorials/ship_to/logstash
Has anyone got this working with the msgpack format? We push logs from fluentbit to Kafka to handle sudden bursts. Our Logstash service picks logs from Kafka and stashes them in ES.
The fluentbit Kafka msgpack format is getting parsed neither with Logstash's msgpack codec nor with the fluent codec.
I'd like to understand whether there is any difference between fluentd's msgpack and fluentbit's msgpack output.
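I don't know the exact on-the-wire difference either, but as I understand it, fluent-bit serializes each record as a bare two-element array of `[timestamp, map]`, while fluentd's forward protocol wraps entries with a tag (and may use an ext-type EventTime for the timestamp), which would explain why codecs expecting one shape choke on the other. A rough pure-Python illustration of the structural difference only (plain lists standing in for msgpack; the field values are invented, and this is my reading rather than a spec):

```python
# Shapes sketched as plain Python data rather than real msgpack bytes.

# fluent-bit record (as I understand it): a bare [timestamp, record] pair.
fluentbit_record = [1700027754.562, {"log": "GET / 200"}]

# fluentd forward-mode message: [tag, [[timestamp, record], ...]].
fluentd_message = ["app.access", [[1700027754, {"log": "GET / 200"}]]]

def extract_record(obj):
    """Pull the record map out of either shape (illustrative only)."""
    if len(obj) == 2 and isinstance(obj[1], dict):
        return obj[1]          # fluent-bit style pair
    if isinstance(obj[0], str):
        return obj[1][0][1]    # fluentd style [tag, [[ts, record]]]
    raise ValueError("unrecognized shape")

print(extract_record(fluentbit_record))
print(extract_record(fluentd_message))
```

If this reading is right, a codec written for one framing would see unexpected types when fed the other, matching the parse failures described above.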
Picking up on this thread: many of the examples, both above and in various documentation snippets, have a single Logstash Host defined in the [OUTPUT] section, e.g.

```
Name  http
Match *
Host  logstash
```
Is there a recommended configuration to avoid the single point of failure introduced by pushing to just one logstash node? I'm sure I have seen an issued raised to address this, but I can't find the issue number.
I can think of a couple of approaches. If only a single host can be defined in the syntax, one option would be to use a load balancer in front of the two or more Logstash nodes.
Any other approaches that people are taking?
Thanks.
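Besides a load balancer, client-side failover is an option wherever you control the sending code (fluent-bit's http output takes a single Host, so this applies to custom shippers or a relay in between, not to fluent-bit directly). A hedged Python sketch of trying several candidate endpoints in order (the hostnames are placeholders):

```python
import urllib.request
import urllib.error

# Candidate Logstash endpoints; names are placeholders for this sketch.
ENDPOINTS = ["http://logstash-1:12345/", "http://logstash-2:12345/"]

def post_with_failover(payload: bytes, endpoints=ENDPOINTS, timeout=5):
    """Try each endpoint in order; return the first URL that accepts the payload."""
    last_error = None
    for url in endpoints:
        try:
            req = urllib.request.Request(
                url, data=payload,
                headers={"Content-Type": "application/json"},
            )
            urllib.request.urlopen(req, timeout=timeout).close()
            return url
        except urllib.error.URLError as exc:
            last_error = exc  # endpoint down or refusing; try the next one
    raise ConnectionError(f"all endpoints failed: {last_error}")
```

This removes the single point of failure at the cost of the shipper knowing every receiver; a load balancer or DNS-based approach keeps that knowledge out of the clients.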
I am following this when trying to output winevtlog messages from fluentbit to Logstash, but I am getting errors on the Logstash side. It seems like it doesn't like how the message is being sent.
```
JSON parse error, original data now in message field {:message=>"Could not set field 'ip' on object '<hostname>' to value '<ipaddress>'
```
I followed https://docs.fluentbit.io/tutorials/ship_to/logstash exactly, but I get an error. Can anyone tell me why?
```
[2023/11/15 05:56:25] [error] [http_client] broken connection to 1.23.456.189:12345 ?
[2023/11/15 05:56:25] [error] [output:http:http.0] could not flush records to 1.23.456.189:12345 (http_do=-1)
[2023/11/15 05:56:25] [ warn] [engine] failed to flush chunk '1-1700027754.562770377.flb', retry in 7 seconds: task_id=0, input=tail.0 > output=http.0 (out_id=0)
```
In 2024, do we still need to solve this? Just asking so we can close the loop or decide what to do.
I think there's a good enough workaround by just using the fluent-bit HTTP output -> logstash http input, from when I used it previously.
IIRC I saw some interesting behaviour when logstash applied backpressure, but can raise those in a separate issue, since I guess the fluent-bit HTTP output needs to handle backpressure in various ways too.
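On the backpressure point: fluent-bit's engine already retries failed flushes on its own, but for anyone wiring a shipper by hand, the usual mitigation when the receiver pushes back is bounded retry with exponential backoff. A generic Python sketch of that pattern (not fluent-bit's actual implementation; the flaky receiver below is simulated):

```python
import time

def send_with_backoff(send, payload, max_retries=5, base_delay=0.5):
    """Retry `send` with exponential backoff; `send` returns True on success."""
    for attempt in range(max_retries):
        if send(payload):
            return attempt + 1  # number of attempts used
        time.sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...
    raise RuntimeError("receiver kept applying backpressure; giving up")

# Simulated receiver that rejects the first two flushes (backpressure),
# then accepts -- a stand-in for a busy Logstash http input.
calls = {"n": 0}
def flaky_send(payload):
    calls["n"] += 1
    return calls["n"] > 2

print(send_with_backoff(flaky_send, b"{}", base_delay=0.01))  # -> 3
```

A real shipper would also bound its in-memory buffer so that a receiver that never recovers cannot exhaust memory.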
@bilbof thanks, so I will proceed to close this one.
Some sites already have a logstash daemon running and don't have direct access to elasticsearch. It would be nice if you could still use fluent-bit in this environment and push logs to logstash.