petergvizd opened this issue 2 years ago
Ubuntu OS, direct install (not Docker). Conf:

[INPUT]
    Name cpu
    Tag  cpu

[OUTPUT]
    Name  opensearch
    Match *
    Host  192.168.64.9
    Port  9200
    Index mars11_index
    Type  mymars11_type
    TLS   on
Error:

[2022/10/21 02:39:07] [error] [tls] error: unexpected EOF
[2022/10/21 02:39:07] [error] [tls] error: unexpected EOF
[2022/10/21 02:39:07] [ warn] [engine] chunk '3487-1666345138.721945093.flb' cannot be retried: task_id=>
[2022/10/21 02:39:07] [ warn] [engine] chunk '3487-1666345139.665267196.flb' cannot be retried: task_id=>
[2022/10/21 02:39:07] [error] [tls] error: unexpected EOF
[2022/10/21 02:39:07] [ warn] [engine] failed to flush chunk '3487-1666345146.687375985.flb', retry in 1>
[2022/10/21 02:39:08] [error] [tls] error: unexpected EOF
[2022/10/21 02:39:08] [ warn] [engine] chunk '3487-1666345141.656286388.flb' cannot be retried: task_id=>
[2022/10/21 02:39:08] [error] [tls] error: unexpected EOF
[2022/10/21 02:39:08] [ warn] [engine] failed to flush chunk '3487-1666345147.724985021.flb', retry in 1>
I can reproduce this bug; the output goes from Fluent Bit to Fluentd via the forward protocol.
FluentD:
2022-10-31 07:58:06 +0000 [warn]: #0 [input-forward-metric] unexpected error before accepting TLS connection by OpenSSL addr="10.35.112.143" host="HOSTNAME1" port=61136 error_class=OpenSSL::SSL::SSLError error="SSL_accept returned=1 errno=0 state=error: invalid alert"
2022-10-31 07:58:36 +0000 [warn]: #0 [input-forward-metric] unexpected error before accepting TLS connection by OpenSSL addr="10.35.112.143" host="HOSTNAME1" port=59134 error_class=OpenSSL::SSL::SSLError error="SSL_accept returned=1 errno=0 state=error: invalid alert"
2022-10-31 08:08:36 +0000 [warn]: #0 [input-forward-metric] unexpected error before accepting TLS connection by OpenSSL addr="10.35.112.143" host="HOSTNAME1" port=56636 error_class=OpenSSL::SSL::SSLError error="SSL_accept returned=1 errno=0 state=error: unexpected message"
FluentBit:
Oct 31 07:58:36 HOSTNAME1 fluent-bit[826]: [2022/10/31 07:58:36] [error] [output:forward:forward.1] no upstream connections available
Oct 31 07:58:36 HOSTNAME1 fluent-bit[826]: [2022/10/31 07:58:36] [ warn] [engine] failed to flush chunk '826-1667203111.514059781.flb', retry in 8 seconds: task_id=1, input=disk.1 > output=forward.1 (out_id=1)
Oct 31 07:58:44 HOSTNAME1 fluent-bit[826]: [2022/10/31 07:58:44] [ info] [engine] flush chunk '826-1667203111.514059781.flb' succeeded at retry 1: task_id=1, input=disk.1 > output=forward.1 (out_id=1)
Oct 31 08:00:06 HOSTNAME1 fluent-bit[826]: [2022/10/31 08:00:06] [error] [tls] connection #43 SSL_connect: error in error
Oct 31 08:00:06 HOSTNAME1 fluent-bit[826]: [2022/10/31 08:00:06] [error] [tls] error: unexpected EOF
Oct 31 08:00:06 HOSTNAME1 fluent-bit[826]: [2022/10/31 08:00:06] [error] [output:forward:forward.1] no upstream connections available
Oct 31 08:00:06 HOSTNAME1 fluent-bit[826]: [2022/10/31 08:00:06] [ warn] [engine] failed to flush chunk '826-1667203201.514215553.flb', retry in 7 seconds: task_id=1, input=disk.1 > output=forward.1 (out_id=1)
Oct 31 08:00:13 HOSTNAME1 fluent-bit[826]: [2022/10/31 08:00:13] [ info] [engine] flush chunk '826-1667203201.514215553.flb' succeeded at retry 1: task_id=1, input=disk.1 > output=forward.1 (out_id=1)
Oct 31 08:08:36 HOSTNAME1 fluent-bit[826]: [2022/10/31 08:08:36] [error] [tls] connection #43 SSL_connect: error in error
Oct 31 08:08:36 HOSTNAME1 fluent-bit[826]: [2022/10/31 08:08:36] [error] [tls] error: unexpected EOF
Oct 31 08:08:36 HOSTNAME1 fluent-bit[826]: [2022/10/31 08:08:36] [error] [output:forward:forward.1] no upstream connections available
Oct 31 08:08:36 HOSTNAME1 fluent-bit[826]: [2022/10/31 08:08:36] [ warn] [engine] failed to flush chunk '826-1667203711.514047393.flb', retry in 9 seconds: task_id=1, input=disk.1 > output=forward.1 (out_id=1)
Oct 31 08:08:45 HOSTNAME1 fluent-bit[826]: [2022/10/31 08:08:45] [ info] [engine] flush chunk '826-1667203711.514047393.flb' succeeded at retry 1: task_id=1, input=disk.1 > output=forward.1 (out_id=1)
The bug is gone or fixed for me with Fluent Bit version 2.0.3.
Is it or isn't it gone, @LeoWinterDE? I worked on that layer recently, so I could take a look if it's still a problem.
I can still see the issue, even in version 2.0.3.
Yes, things look like they're working, but I still see this error if the output is ES.
Looks like the issue is finally fixed in version 2.0.6, so I'm closing the issue.
What was the issue, @petergvizd?
I have fluentbit 2.0.9 and the issue is still present:
[2023/03/06 15:41:31] [error] [tls] error: unexpected EOF
[2023/03/06 15:41:31] [ warn] [engine] failed to flush chunk '1-1678117291.286914214.flb', retry in 11 seconds: task_id=0, input=syslog.0 > output=es.1 (out_id=1)
[2023/03/06 15:41:42] [error] [tls] error: unexpected EOF
[2023/03/06 15:41:42] [error] [engine] chunk '1-1678117291.286914214.flb' cannot be retried: task_id=0, input=syslog.0 > output=es.1
[OUTPUT]
    Name                es
    Match               be.php.monolog
    Host                ${OPENSEARCH_HOST}
    Port                9200
    Logstash_Format     On
    Logstash_Prefix     fluentbit
    Logstash_DateFormat %Y.%m.%d
    Time_Key_Format     %Y-%m-%dT%H:%M:%S.%L
    Generate_ID         On
    HTTP_User           ${OPENSEARCH_USER}
    HTTP_Passwd         ${OPENSEARCH_PASSWORD}
    tls                 On
    tls.verify          Off
Would you be able to share some information about your setup? If you prefer to do it in private, you can message me on Slack.
I'd like to know which operating system version you are running fluent-bit on, and the same for the OpenSearch server.
You can probably get enough information from the OpenSearch server by running curl -vv https://opensearch_host_domain_or_address:9200 and copying the lines that start with an asterisk, up to the line that says > GET.
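For illustration, the asterisk-prefixed portion of curl -vv output usually looks roughly like the sketch below; the addresses, hostnames, certificate subjects, and negotiated cipher are placeholders, the exact lines vary with the curl and TLS versions, and with a self-signed certificate you may need -k for curl to get past verification:

    *   Trying 192.0.2.10:9200...
    * Connected to opensearch.example.internal (192.0.2.10) port 9200
    * TLSv1.3 (OUT), TLS handshake, Client hello (1):
    * TLSv1.3 (IN), TLS handshake, Server hello (2):
    * SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
    * Server certificate:
    *  subject: CN=node-0.example.com
    *  issuer: CN=root-ca
    > GET / HTTP/1.1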
As for the fluent-bit host, I'd like to know which operating system (distribution and version if Linux) or container image you are running, so I can determine if there is an issue with the OpenSSL version.
Please don't hesitate to include as much information as possible. If this actually persists for you, I think it might be appropriate to open a new issue so it can be properly tracked.
Hi,
We are running in containers: fluent-bit is fluent/fluent-bit:2.0.9 and OpenSearch is opensearchproject/opensearch:2.5.0.
I'm using self-signed certificates which are generated inside the OpenSearch container, and I hoped that tls.verify Off would be sufficient to overcome the limitations of the certificate being self-signed.
I also tried changing the plugin from es to opensearch, but there is no difference. I might open a ticket, but I'm not sure if it's really a bug, as I'm using self-signed certs and I'm not providing any of the:
You don't need to provide any of those TLS settings when fluent-bit is acting as a client, and since you disabled tls.verify it should be fine. I think you should create a new issue, and if you do, I'd urge you to add a detailed reproduction procedure to simplify the process.
OK, it's a little embarrassing, but when I started working on repro steps and setting up minimal containers where the bug would be reproducible, I found that everything works in that "test" environment... so the bug must be somewhere in my configuration. I will let you know when I find the real root cause. Anyway, thanks for your help!
It's OK, thank you for letting us know. Any information is valuable, even a failure to reproduce the issue. Keep it up and don't hesitate to ask for help!
@salacr were you able to sort out the issue? I get a similar error when I am trying to flush records from a fluent-bit OCP container to a Logstash port. I also have v2.0.9. Below are my config and error.

Error:

[2023/03/14 14:50:54] [error] [upstream] connection #82 to tcp://.... timed out after 10 seconds (connection timeout)
[2023/03/14 14:50:54] [error] [output:http:http.1] no upstream connections available to ##
[2023/03/14 14:50:54] [ warn] [engine] failed to flush chunk '1-1678805443.598972736.flb', retry in 6 seconds: task_id=0, input=tail.0 > output=http.1 (out_id=1)
[2023/03/14 14:51:00] [error] [tls] error: unexpected EOF
Config:

[SERVICE]
    Flush        1
    Log_Level    info
    Daemon       off
    HTTP_Server  On
    HTTP_Listen  0.0.0.0
    HTTP_Port    2020

[INPUT]
    Name              tail
    Tag               example.
    Path              $logpath
    Skip_Long_Lines   On
    Refresh_Interval  10
    Inotify_Watcher   true
    read_from_head    false

[OUTPUT]
    Name              stdout
    Match             example.
    Format            json_lines
    json_date_key     time
    json_date_format  iso8601

[OUTPUT]
    Name          http
    Match         *
    Host          $logstash.host
    Port          $logstash.port
    Format        json
    tls           on
    tls.verify    off
    tls.ca_file   /usr/share/fluentbit/certs/ca.crt
    tls.crt_file  /usr/share/fluentbit/certs/crt.cer
    tls.key_file  /usr/share/fluentbit/certs/file.key
Let me know if you or someone has any thoughts. Thanks
Nope, I started configuring everything from scratch and it "just works" now. I don't know where the problem was :/ (Actually it might be a problem with OpenSearch, as it's quite unfriendly in terms of Docker provisioning, so there might have been some bugs in my previous installation.)
Ran into this earlier running Fluent Bit (statically linked) on an Alpine Docker image. It turns out I needed to install the ca-certificates package (apk add ca-certificates). It's probably similar for other distros if this (or a similar) package is not installed on the system.
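For reference, a minimal sketch of an Alpine-based image with the CA bundle installed; the base image tag, file paths, and copy steps are illustrative, not an official fluent-bit Dockerfile:

    # Dockerfile sketch: install the system CA bundle so TLS verification can succeed
    FROM alpine:3.18
    RUN apk add --no-cache ca-certificates
    # then copy in the statically linked fluent-bit binary and its configuration
    COPY fluent-bit /usr/local/bin/fluent-bit
    COPY fluent-bit.conf /etc/fluent-bit/fluent-bit.conf
    CMD ["/usr/local/bin/fluent-bit", "-c", "/etc/fluent-bit/fluent-bit.conf"]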
For anybody in this thread, just a warning that setting tls.verify Off should not be considered a solution. That is not really any more secure than not using TLS at all.
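If the server uses a self-signed or otherwise private CA, a safer alternative to disabling verification is to keep tls.verify On and point fluent-bit at that CA with tls.ca_file. A minimal sketch, assuming the root CA has been mounted into the fluent-bit container; the host, index, and file path are placeholders:

    [OUTPUT]
        Name        opensearch
        Match       *
        Host        opensearch.example.internal
        Port        9200
        Index       my_index
        tls         On
        tls.verify  On
        # self-signed root CA exported from the OpenSearch node and mounted here
        tls.ca_file /fluent-bit/certs/root-ca.pem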
I'm seeing this error as well. I was using this docker-compose example: https://github.com/opensearch-project/data-prepper/blob/main/examples/log-ingestion/fluent-bit.conf
but needed to turn 'tls on' for OpenSearch to accept the fluent-bit connection... and now I see the error this issue talks about.
In addition, OpenSearch logs this exception later; I'm not sure if the two are related:
opensearch | [2023-05-13T00:42:04,313][ERROR][o.o.s.s.h.n.SecuritySSLNettyHttpServerTransport] [23cfaf6da342] Exception during establishing a SSL connection: javax.net.ssl.SSLHandshakeException: Insufficient buffer remaining for AEAD cipher fragment (2). Needs to be more than tag size (16)
I faced the same issue recently. My fluent-bit pods were running behind a Kubernetes load balancer which was sending health probes, and these health probes were causing the "[error] [tls] error: unexpected EOF" errors. To fix this, I set externalTrafficPolicy to Local and configured healthCheckNodePort, which makes the Kubernetes LB send its health probes to a separate port. Refer to this for configuration: https://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip (a minimal sketch follows below).
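A minimal Service manifest sketch of that change; the name, selector, and port numbers are placeholders, and healthCheckNodePort only applies to LoadBalancer Services with externalTrafficPolicy: Local and must fall within the cluster's NodePort range:

    apiVersion: v1
    kind: Service
    metadata:
      name: fluent-bit-forward            # hypothetical Service name
    spec:
      type: LoadBalancer
      selector:
        app: fluent-bit
      ports:
        - name: forward
          port: 24224
          targetPort: 24224
      externalTrafficPolicy: Local        # keep traffic local to the node that received it
      healthCheckNodePort: 32000          # dedicated node port for the LB health probes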
@dss010101 I'm running into the same issue as you with both fluent-bit and OpenSearch. As you probably saw, the OpenSearch issue may be a JDK problem that doesn't seem to be resolved: https://github.com/opensearch-project/security/issues/3299
As a workaround, I tried bumping the TLS version down to v1.2, but then OpenSearch complains about an expired certificate when fluent-bit attempts the TLS handshake. What's weird is that the fluent-bit certs aren't expired, and the OpenSearch certs weren't expired either (though they were recently renewed). Also, OpenSearch Dashboards is able to communicate with the OpenSearch cluster, so there must be something specific to the handshake fluent-bit and OpenSearch are attempting.
All that said, did you ever figure this out @dss010101?
The issue is closed and all, but I'll update for future users: for me, this issue went away once we moved away from self-signed certs. For some reason, renewal with self-signed certs broke things, but after moving to Let's Encrypt certificates fluent-bit is able to talk to OpenSearch again.
Unfortunately, no; due to the lack of engagement on the issue, we decided to go with other libraries. I'm glad you figured it out, though.
Bug Report
Describe the bug
Looks like there is an issue when recycling multiple TLS connections (when there is only one open connection to the upstream, or no TLS is used, everything works fine) that causes errors in communication between fluent-bit and fluentd. According to the captured communication, it looks like from time to time fluent-bit sends encrypted alert number 21 (probably a TLS close_notify) during the TLS handshake.
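One way to capture such a handshake for inspection (a diagnostic sketch, not part of the original report; the interface name and the forward port are assumptions):

    # capture fluent-bit -> fluentd traffic on the forward/TLS port (assumed 24224)
    tcpdump -i eth0 -w tls-recycle.pcap 'tcp port 24224'
    # the capture can then be opened in Wireshark and filtered with: tls.alert_message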
To Reproduce
With Docker, it can be reproduced by:
fluent-bit.conf
fluentd.conf
Commands:
Error message on fluent-bit side
Error message on fluentd side
Expected behavior
Multiple TLS connections should be recycled correctly without any errors.
Screenshots
Screenshot from the captured communication (fluent-bit: 172.19.0.3, fluentd: 172.19.0.2)
Your Environment
Fluent-bit version: 1.9.7
Fluent-bit OpenSSL version: 1.1.1n
Fluentd version: 1.14.0
Fluentd OpenSSL version: 1.1.1q
Additional context
We would like to be able to dynamically scale the aggregator part (fluentd) in a Kubernetes environment while using TLS. Typically we send logs from multiple containers, collected by fluent-bit (opening multiple upstream connections), to the aggregator, and when the aggregator scales out we would like fluent-bit to pick up the new addresses of the fluentd pods. To achieve this we tried net.keepalive_max_recycle, but hit the issue above (see the sketch below for the settings involved).
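For context, a minimal sketch of the forward output settings involved; the host, port, and recycle count are illustrative, not the exact values from this setup:

    [OUTPUT]
        Name                      forward
        Match                     *
        Host                      fluentd.logging.svc.cluster.local
        Port                      24224
        tls                       On
        tls.verify                On
        net.keepalive             On
        # close and re-open a keepalive connection after this many uses, so DNS is
        # re-resolved and newly scaled-out fluentd pods are picked up
        net.keepalive_max_recycle 100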