ZeerBit / zeerbit-ecs-pipeline

Elastic Common Schema (ECS) ingest pipeline for Zeek network traffic analyzer

Issue when running ./fluent-bit.start #31

Open aboubamba opened 3 years ago

aboubamba commented 3 years ago

When I try to launch ./fluent-bit.start I get a command not found. Do I have to install fluentbit a certain way in order for the command to launch?

```
sharp@nuc_linux:/usr/local/etc/fluent-bit$ sudo ./fluent-bit.start
sudo: ./fluent-bit.start: command not found
```

bortok commented 3 years ago

The fluent-bit.start script assumes that fluent-bit is installed in /usr/local/bin; see the last line:

```shell
sudo -E -u fluentbit /usr/local/bin/fluent-bit -c "${FBIT_PATH}/fluent-bit.conf"
```

But I think your problem is that you are not running sudo ./fluent-bit.start from the directory with fluent-bit.start script. Could you please paste results of ls -l /usr/local/etc/fluent-bit here?
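Editor's aside, not from the thread: besides running from the wrong directory, another common cause of this symptom is a missing execute bit on the script; in my experience sudo reports a present-but-non-executable file as "command not found" too. A quick local reproduction with scratch paths:

```shell
# Scratch demo: a script without the execute bit cannot be launched by path.
# /tmp/demo.start is an illustrative stand-in for fluent-bit.start.
printf '#!/bin/bash\necho ok\n' > /tmp/demo.start
chmod -x /tmp/demo.start
/tmp/demo.start 2>/dev/null || echo "cannot execute"   # fails: no execute bit
chmod +x /tmp/demo.start
/tmp/demo.start                                        # prints: ok
```

If `ls -l` shows the script without `x` bits, `chmod +x fluent-bit.start` fixes this variant of the problem.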

aboubamba commented 3 years ago

Hi,

I ended up changing the last line of the script to:

```shell
sudo -E -u fluentbit /opt/td-agent-bit/bin/td-agent-bit -c "${FBIT_PATH}/fluent-bit.conf"
```

Now it runs, but I'm getting the error below. I use OpenSearch, so when the data is sent from Zeek to ZeerBit I see this error in the console, and the data is not passed to OpenSearch Dashboards.

```
[2021-10-20T09:11:11,961][INFO ][o.o.j.s.JobSweeper ] [node-1] Running full sweep
[2021-10-20T09:16:11,963][INFO ][o.o.j.s.JobSweeper ] [node-1] Running full sweep
[2021-10-20T09:21:11,964][INFO ][o.o.j.s.JobSweeper ] [node-1] Running full sweep
[2021-10-20T09:23:24,661][WARN ][r.suppressed ] [node-1] path: /fluent_bit/_search, params: {ignore_unavailable=true, preference=1634734942370, index=fluent_bit, timeout=30000ms, track_total_hits=true}
org.opensearch.action.search.SearchPhaseExecutionException: all shards failed
    at org.opensearch.action.search.AbstractSearchAsyncAction.onPhaseFailure(AbstractSearchAsyncAction.java:580) [opensearch-1.1.0.jar:1.1.0]
    at org.opensearch.action.search.AbstractSearchAsyncAction.executeNextPhase(AbstractSearchAsyncAction.java:336) [opensearch-1.1.0.jar:1.1.0]
    at org.opensearch.action.search.AbstractSearchAsyncAction.onPhaseDone(AbstractSearchAsyncAction.java:615) [opensearch-1.1.0.jar:1.1.0]
    at org.opensearch.action.search.AbstractSearchAsyncAction.onShardFailure(AbstractSearchAsyncAction.java:412) [opensearch-1.1.0.jar:1.1.0]
    at org.opensearch.action.search.AbstractSearchAsyncAction.access$100(AbstractSearchAsyncAction.java:82) [opensearch-1.1.0.jar:1.1.0]
    at org.opensearch.action.search.AbstractSearchAsyncAction$1.onFailure(AbstractSearchAsyncAction.java:270) [opensearch-1.1.0.jar:1.1.0]
    at org.opensearch.action.search.SearchExecutionStatsCollector.onFailure(SearchExecutionStatsCollector.java:86) [opensearch-1.1.0.jar:1.1.0]
    at org.opensearch.action.ActionListenerResponseHandler.handleException(ActionListenerResponseHandler.java:72) [opensearch-1.1.0.jar:1.1.0]
    at org.opensearch.action.search.SearchTransportService$ConnectionCountingHandler.handleException(SearchTransportService.java:422) [opensearch-1.1.0.jar:1.1.0]
    at org.opensearch.transport.TransportService$6.handleException(TransportService.java:664) [opensearch-1.1.0.jar:1.1.0]
    at org.opensearch.security.transport.SecurityInterceptor$RestoringTransportResponseHandler.handleException(SecurityInterceptor.java:308) [opensearch-security-1.1.0.0.jar:1.1.0.0]
    at org.opensearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1217) [opensearch-1.1.0.jar:1.1.0]
    at org.opensearch.transport.TransportService$DirectResponseChannel.processException(TransportService.java:1326) [opensearch-1.1.0.jar:1.1.0]
    at org.opensearch.transport.TransportService$DirectResponseChannel.sendResponse(TransportService.java:1300) [opensearch-1.1.0.jar:1.1.0]
    at org.opensearch.transport.TaskTransportChannel.sendResponse(TaskTransportChannel.java:74) [opensearch-1.1.0.jar:1.1.0]
    at org.opensearch.transport.TransportChannel.sendErrorResponse(TransportChannel.java:69) [opensearch-1.1.0.jar:1.1.0]
    at org.opensearch.action.support.ChannelActionListener.onFailure(ChannelActionListener.java:64) [opensearch-1.1.0.jar:1.1.0]
    at org.opensearch.action.ActionRunnable.onFailure(ActionRunnable.java:101) [opensearch-1.1.0.jar:1.1.0]
    at org.opensearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:52) [opensearch-1.1.0.jar:1.1.0]
    at org.opensearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:57) [opensearch-1.1.0.jar:1.1.0]
    at org.opensearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:756) [opensearch-1.1.0.jar:1.1.0]
    at org.opensearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:50) [opensearch-1.1.0.jar:1.1.0]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]
    at java.lang.Thread.run(Thread.java:832) [?:?]
Caused by: org.opensearch.tasks.TaskCancelledException: cancelled task with reason: channel closed
    at org.opensearch.search.query.QueryPhase.lambda$executeInternal$3(QueryPhase.java:298) ~[opensearch-1.1.0.jar:1.1.0]
    at org.opensearch.search.internal.ContextIndexSearcher$MutableQueryTimeout.checkCancelled(ContextIndexSearcher.java:383) ~[opensearch-1.1.0.jar:1.1.0]
    at org.opensearch.search.internal.ContextIndexSearcher.searchLeaf(ContextIndexSearcher.java:223) ~[opensearch-1.1.0.jar:1.1.0]
    at org.opensearch.search.internal.ContextIndexSearcher.search(ContextIndexSearcher.java:212) ~[opensearch-1.1.0.jar:1.1.0]
    at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:443) ~[lucene-core-8.9.0.jar:8.9.0 05c8a6f0163fe4c330e93775e8e91f3ab66a3f80 - mayyasharipova - 2021-06-10 17:50:37]
    at org.opensearch.search.query.QueryPhase.searchWithCollector(QueryPhase.java:354) ~[opensearch-1.1.0.jar:1.1.0]
    at org.opensearch.search.query.QueryPhase.executeInternal(QueryPhase.java:309) ~[opensearch-1.1.0.jar:1.1.0]
    at org.opensearch.search.query.QueryPhase.execute(QueryPhase.java:161) ~[opensearch-1.1.0.jar:1.1.0]
    at org.opensearch.search.SearchService.loadOrExecuteQueryPhase(SearchService.java:386) ~[opensearch-1.1.0.jar:1.1.0]
    at org.opensearch.search.SearchService.executeQueryPhase(SearchService.java:445) ~[opensearch-1.1.0.jar:1.1.0]
    at org.opensearch.search.SearchService.access$500(SearchService.java:155) ~[opensearch-1.1.0.jar:1.1.0]
    at org.opensearch.search.SearchService$2.lambda$onResponse$0(SearchService.java:415) ~[opensearch-1.1.0.jar:1.1.0]
    at org.opensearch.action.ActionRunnable.lambda$supply$0(ActionRunnable.java:71) ~[opensearch-1.1.0.jar:1.1.0]
    at org.opensearch.action.ActionRunnable$2.doRun(ActionRunnable.java:86) ~[opensearch-1.1.0.jar:1.1.0]
    at org.opensearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:50) ~[opensearch-1.1.0.jar:1.1.0]
    ... 6 more
[2021-10-20T09:26:11,965][INFO ][o.o.j.s.JobSweeper ] [node-1] Running full sweep
```
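Editor's aside on the workaround above (changing the script's last line): an alternative that leaves fluent-bit.start unmodified is a symlink making the packaged td-agent-bit binary visible under the name the script expects. A sketch using scratch paths; on a real host the link would be /usr/local/bin/fluent-bit pointing at /opt/td-agent-bit/bin/td-agent-bit, created with sudo:

```shell
# Stand-in for the real binary; on an actual install this is
# /opt/td-agent-bit/bin/td-agent-bit.
mkdir -p /tmp/demo/opt /tmp/demo/bin
printf '#!/bin/sh\necho td-agent-bit\n' > /tmp/demo/opt/td-agent-bit
chmod +x /tmp/demo/opt/td-agent-bit

# Link it under the name the start script expects
# (/usr/local/bin/fluent-bit on a real host).
ln -sf /tmp/demo/opt/td-agent-bit /tmp/demo/bin/fluent-bit
/tmp/demo/bin/fluent-bit   # prints: td-agent-bit
```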

bortok commented 3 years ago

I think it would be a good idea to validate ZeerBit compatibility with OpenSearch. Could you please share your fluent-bit.conf, with credentials removed? Also, what is your td-agent-bit version?

aboubamba commented 3 years ago

```
td-agent-bit/bionic,now 1.8.8 amd64 [installed]
```

The config below works; I see the index in OpenSearch Dashboards and the "cpu metrics" data comes through with no issue. But the Zeek data is not working.

```
[INPUT]
    name         cpu
    tag          cpu.local
    # Read interval (sec) Default: 1
    interval_sec 300

[OUTPUT]
    name        es
    match       *
    host        127.0.0.1
    port        9200
    index       fluent_bit
    type        cpu_metrics
    tls         On
    tls.verify  Off
    tls.ca_file /home/sharp/root-ca.pem
    http_user
    http_passwd
```


fluent-bit.start

```shell
#!/bin/bash

export ES_HOST=127.0.0.1
export ES_PORT=9200
export ES_USER=
export ES_PASSWORD=

# This removes the need for Time_Offset parameter in parsers.conf
# See https://github.com/fluent/fluent-bit/issues/326
export TZ=UTC

export FBIT_PATH="/usr/local/etc/fluent-bit/zeek"
export FBIT_LOG="/var/log/fluent-bit.log"
export LUA_PATH="${FBIT_PATH}/?.lua;"
export TLS_MODE=On

export TLS_CA_PATH="/home/sharp/opensearch-1.1.0/config"

export TLS_CA_PATH="/home/sharp/root-ca-key.pem"

sudo -E -u fluentbit /opt/td-agent-bit/bin/td-agent-bit -c "${FBIT_PATH}/fluent-bit.conf"
```

opensearch.yml

```yaml
node.name: node-1
network.host: localhost

plugins.security.ssl.transport.pemcert_filepath: esnode.pem
plugins.security.ssl.transport.pemkey_filepath: esnode-key.pem
plugins.security.ssl.transport.pemtrustedcas_filepath: root-ca.pem
plugins.security.ssl.transport.enforce_hostname_verification: false
plugins.security.ssl.http.enabled: true
plugins.security.ssl.http.pemcert_filepath: esnode.pem
plugins.security.ssl.http.pemkey_filepath: esnode-key.pem
plugins.security.ssl.http.pemtrustedcas_filepath: root-ca.pem
plugins.security.allow_unsafe_democertificates: true
plugins.security.allow_default_init_securityindex: true
plugins.security.authcz.admin_dn:

plugins.security.audit.type: internal_opensearch
plugins.security.enable_snapshot_restore_privilege: true
plugins.security.check_snapshot_restore_write_privileges: true
plugins.security.restapi.roles_enabled: ["all_access", "security_rest_api_access"]
plugins.security.system_indices.enabled: true
plugins.security.system_indices.indices: [".opendistro-alerting-config", ".opendistro-alerting-alert", ".opendistro-anomaly-results", ".opendistro-anomaly-detector", ".opendistro-anomaly-checkpoints", ".opendistro-anomaly-detection-state", ".opendistro-reports-", ".opendistro-notifications-", ".opendistro-notebooks", ".opendistro-asynchronous-search-response", ".replication-metadata-store"]
node.max_local_storage_nodes: 3
```

bortok commented 3 years ago

If you look into the fluent-bit.conf file, the index name prefix used by default is

```
Logstash_Prefix logstash-ecs-fluentbit
```

Could you check if you see such indexes created? I've noticed the error message you provided references the fluent_bit index instead:

```
[2021-10-20T09:23:24,661][WARN ][r.suppressed ] [node-1] path: /fluent_bit*/_search,
```

If yes, then there is one step that I missed in the README: edit templates.update to also fill in the correct access info, and then run it. The script makes sure the logstash-ecs-fluentbit* index fields have the proper format.

Once done, you'll find the Zeek data in those indexes.
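Editor's sketch of one way to check for those indexes: OpenSearch's `_cat/indices` API accepts an index pattern. The host/port values below are assumptions mirroring the exports in fluent-bit.start, and the snippet only prints the command (a dry run) so credentials can be filled in before it is actually executed:

```shell
# Build the index-listing request for the prefix the pipeline uses.
# Host/port are illustrative; -k skips TLS verification for self-signed certs.
ES_HOST=127.0.0.1
ES_PORT=9200
URL="https://${ES_HOST}:${ES_PORT}/_cat/indices/logstash-ecs-fluentbit*?v"
echo "curl -k -u \$ES_USER:\$ES_PASSWORD '${URL}'"
```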

aboubamba commented 3 years ago

In Elastic I do not see any new indexes starting with logstash. I had to fill in the details in templates.update, since it was empty. After doing that, when I try to run the script I get the error below.

```
sharp@nuc_linux:/usr/local/etc/fluent-bit/zeek$ sudo ./templates.update
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html

curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
```

bortok commented 3 years ago

OK, you have a TLS cert verification issue when running templates.update. But let's get back to that later, since no indexes with Zeek data are being created anyway. Please attach the fluent-bit.log file here; by default it is /var/log/fluent-bit.log.

aboubamba commented 3 years ago

It didn't exist so I created it with touch and gave it fluentbit:fluentbit ownership.

```
-rw-r--r-- 1 fluentbit fluentbit 5035 Oct 21 09:08 /var/log/fluent-bit.log
```

```
[2021/10/21 13:08:21] [ info] [engine] started (pid=9573)
[2021/10/21 13:08:21] [ info] [storage] version=1.1.4, initializing...
[2021/10/21 13:08:21] [ info] [storage] in-memory
[2021/10/21 13:08:21] [ info] [storage] normal synchronization mode, checksum disabled, max_chunks_up=128
[2021/10/21 13:08:21] [ info] [cmetrics] version=0.2.2
[2021/10/21 13:08:21] [error] [tls] /tmp/fluent-bit-1.8.8/src/tls/mbedtls.c:186 X509 - Read/write of file failed
[2021/10/21 13:08:21] [error] [TLS] error reading certificates from /home/sharp/root-ca-key.pem
[2021/10/21 13:08:21] [error] [tls] could not create TLS backend
[2021/10/21 13:08:21] [error] [output es.0] error initializing TLS context
[2021/10/21 13:08:21] [ info] [input] pausing tail.0
[2021/10/21 13:08:21] [ info] [input] pausing tail.1
[2021/10/21 13:08:21] [ info] [input] pausing tail.2
[2021/10/21 13:08:21] [ info] [input] pausing tail.3
[2021/10/21 13:08:21] [ info] [input] pausing tail.4
[2021/10/21 13:08:21] [ info] [input] pausing tail.5
```

bortok commented 3 years ago

Based on the log, fluent-bit is unable to connect to OpenSearch due to a TLS cert issue. There is an error message saying it couldn't read certificates from the configured path:

```
[2021/10/21 13:08:21] [error] [TLS] error reading certificates from /home/sharp/root-ca-key.pem
```

Could you try replacing

```shell
export TLS_CA_PATH="/home/sharp/root-ca-key.pem"
```

with

```shell
export TLS_CA_PATH="/usr/local/etc/tls"
```

and moving root-ca-key.pem under /usr/local/etc/tls, or something like that? Also make sure the fluentbit user has r+x access to that directory tree. That is typically not the case for home directories, and I think this is the reason (or one of them) it can't connect.
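A sketch of those staging steps (editor's example with a scratch directory and a dummy file standing in for the PEM; on a real host the directory would be /usr/local/etc/tls and the commands run with sudo):

```shell
# Create a directory the fluentbit service user can traverse, and make the
# PEM file world-readable. The dummy file stands in for the real PEM.
mkdir -p /tmp/tls-demo
printf 'dummy PEM contents\n' > /tmp/tls-demo/root-ca.pem
chmod 755 /tmp/tls-demo               # r+x for everyone: directory is traversable
chmod 644 /tmp/tls-demo/root-ca.pem   # readable by the service user
ls -l /tmp/tls-demo/root-ca.pem
```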

aboubamba commented 3 years ago

I get a different error now in the fluent-bit logs

```
[2021/10/21 13:22:33] [ info] [input] pausing tail.5
[2021/10/22 00:54:36] [ info] [engine] started (pid=15660)
[2021/10/22 00:54:36] [ info] [storage] version=1.1.4, initializing...
[2021/10/22 00:54:36] [ info] [storage] in-memory
[2021/10/22 00:54:36] [ info] [storage] normal synchronization mode, checksum disabled, max_chunks_up=128
[2021/10/22 00:54:36] [ info] [cmetrics] version=0.2.2
[2021/10/22 00:54:36] [ info] [sp] stream processor started
[2021/10/22 00:54:36] [ info] [input:tail:tail.0] inotify_fs_add(): inode=5644487 watch_fd=1 name=/usr/local/zeek/spool/zeek/conn.log
[2021/10/22 00:54:36] [ info] [input:tail:tail.1] inotify_fs_add(): inode=5644886 watch_fd=1 name=/usr/local/zeek/spool/zeek/dhcp.log
[2021/10/22 00:54:37] [ info] [input:tail:tail.2] inotify_fs_add(): inode=5644455 watch_fd=1 name=/usr/local/zeek/spool/zeek/dns.log
[2021/10/22 00:54:41] [error] [tls] /tmp/fluent-bit-1.8.8/src/tls/mbedtls.c:380 X509 - Certificate verification failed, e.g. CRL, CA or signature check
[2021/10/22 00:54:41] [ warn] [engine] failed to flush chunk '15660-1634864077.188671191.flb', retry in 11 seconds: task_id=0, input=tail.0 > output=es.0 (out_id=0)
[2021/10/22 00:54:41] [error] [tls] /tmp/fluent-bit-1.8.8/src/tls/mbedtls.c:380 X509 - Certificate verification failed, e.g. CRL, CA or signature check
[2021/10/22 00:54:41] [ warn] [engine] failed to flush chunk '15660-1634864076.842457385.flb', retry in 6 seconds: task_id=1, input=tail.2 > output=es.0 (out_id=0)
[2021/10/22 00:54:47] [error] [tls] /tmp/fluent-bit-1.8.8/src/tls/mbedtls.c:380 X509 - Certificate verification failed, e.g. CRL, CA or signature check
[2021/10/22 00:54:47] [ warn] [engine] chunk '15660-1634864076.842457385.flb' cannot be retried: task_id=1, input=tail.2 > output=es.0
[2021/10/22 00:54:52] [error] [tls] /tmp/fluent-bit-1.8.8/src/tls/mbedtls.c:380 X509 - Certificate verification failed, e.g. CRL, CA or signature check
[2021/10/22 00:54:52] [ warn] [engine] chunk '15660-1634864077.188671191.flb' cannot be retried: task_id=0, input=tail.0 > output=es.0
[2021/10/22 00:55:01] [error] [tls] /tmp/fluent-bit-1.8.8/src/tls/mbedtls.c:380 X509 - Certificate verification failed, e.g. CRL, CA or signature check
[2021/10/22 00:55:01] [ warn] [engine] failed to flush chunk '15660-1634864096.890131728.flb', retry in 8 seconds: task_id=0, input=tail.0 > output=es.0 (out_id=0)
[2021/10/22 00:55:06] [error] [tls] /tmp/fluent-bit-1.8.8/src/tls/mbedtls.c:380 X509 - Certificate verification failed, e.g. CRL, CA or signature check
[2021/10/22 00:55:06] [ warn] [engine] failed to flush chunk '15660-1634864103.887510889.flb', retry in 7 seconds: task_id=1, input=tail.0 > output=es.0 (out_id=0)
[2021/10/22 00:55:09] [error] [tls] /tmp/fluent-bit-1.8.8/src/tls/mbedtls.c:380 X509 - Certificate verification failed, e.g. CRL, CA or signature check
[2021/10/22 00:55:09] [ warn] [engine] chunk '15660-1634864096.890131728.flb' cannot be retried: task_id=0, input=tail.0 > output=es.0
[2021/10/22 00:55:13] [error] [tls] /tmp/fluent-bit-1.8.8/src/tls/mbedtls.c:380 X509 - Certificate verification failed, e.g. CRL, CA or signature check
[2021/10/22 00:55:13] [ warn] [engine] chunk '15660-1634864103.887510889.flb' cannot be retried: task_id=1, input=tail.0 > output=es.0
```

bortok commented 3 years ago

TLS certificate validation fails. You may be using a self-signed certificate. Please add the following line to the end of fluent-bit.conf and try again:

```
tls.verify Off
```
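For clarity (editor's sketch, not the repo's exact file): the tls.* keys belong in the [OUTPUT] es section of fluent-bit.conf, alongside the existing TLS settings, e.g.:

```
[OUTPUT]
    name        es
    # ... existing host/port/index settings unchanged ...
    tls         On
    tls.verify  Off
```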
aboubamba commented 3 years ago

It works! Although I'm still seeing the errors below in the console, I see the index in OpenSearch Dashboards and I'm able to see data. Thank you for your help!

```
[2021-10-21T23:20:54,518][WARN ][o.o.h.AbstractHttpServerTransport] [node-1] caught exception while handling client http traffic, closing connection Netty4HttpChannel{localAddress=/127.0.0.1:9200, remoteAddress=/127.0.0.1:44748}
io.netty.handler.codec.DecoderException: javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate
    at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:478) ~[netty-codec-4.1.59.Final.jar:4.1.59.Final]
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276) ~[netty-codec-4.1.59.Final.jar:4.1.59.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:620) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:583) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-transport-4.1.59.Final.jar:4.1.59.Final]
    at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-common-4.1.59.Final.jar:4.1.59.Final]
    at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.59.Final.jar:4.1.59.Final]
    at java.lang.Thread.run(Thread.java:832) [?:?]
Caused by: javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate
    at sun.security.ssl.Alert.createSSLException(Alert.java:131) ~[?:?]
    at sun.security.ssl.Alert.createSSLException(Alert.java:117) ~[?:?]
    at sun.security.ssl.TransportContext.fatal(TransportContext.java:356) ~[?:?]
    at sun.security.ssl.Alert$AlertConsumer.consume(Alert.java:293) ~[?:?]
    at sun.security.ssl.TransportContext.dispatch(TransportContext.java:202) ~[?:?]
    at sun.security.ssl.SSLTransport.decode(SSLTransport.java:171) ~[?:?]
    at sun.security.ssl.SSLEngineImpl.decode(SSLEngineImpl.java:736) ~[?:?]
    at sun.security.ssl.SSLEngineImpl.readRecord(SSLEngineImpl.java:691) ~[?:?]
    at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:506) ~[?:?]
    at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:482) ~[?:?]
    at javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:637) ~[?:?]
    at io.netty.handler.ssl.SslHandler$SslEngineType$3.unwrap(SslHandler.java:282) ~[netty-handler-4.1.59.Final.jar:4.1.59.Final]
    at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1387) ~[netty-handler-4.1.59.Final.jar:4.1.59.Final]
    at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1282) ~[netty-handler-4.1.59.Final.jar:4.1.59.Final]
    at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1329) ~[netty-handler-4.1.59.Final.jar:4.1.59.Final]
    at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:508) ~[netty-codec-4.1.59.Final.jar:4.1.59.Final]
    at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:447) ~[netty-codec-4.1.59.Final.jar:4.1.59.Final]
    ... 16 more
```

bortok commented 3 years ago

This would be fixed by replacing the certificate on your OpenSearch server with a trusted one. I've also made changes to templates.update to ignore TLS validation (the -k parameter for curl). Please update your copy as well and re-run templates.update. This will make sure all new indexes for Zeek data have proper field types: numbers for ports, IP type for IP addresses, and so on. Otherwise everything is ingested as strings.

aboubamba commented 3 years ago

Hi Alex,

It gives me an argument error. See below.

```
sharp@nuc_linux:/usr/local/etc/fluent-bit/zeek$ ./templates.update
{"acknowledged":true}{"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"unknown setting [index.lifecycle.name] please check that any required plugins are installed, or check the breaking changes documentation for removed settings"}],"type":"illegal_argument_exception","reason":"unknown setting [index.lifecycle.name] please check that any required plugins are installed, or check the breaking changes documentation for removed settings"},"status":400}{"acknowledged":true}
sharp@nuc_linux:/usr/local/etc/fluent-bit/zeek$ sudo ./templates.update
```

```shell
#!/bin/bash

export ES_HOST=127.0.0.1
export ES_PORT=9200
export ES_USER=**
export ES_PASSWORD=

curl -k --user $ES_USER:$ES_PASSWORD -XPUT "https://$ES_HOST:$ES_PORT/_template/logstash-ecs_template" --header "Content-Type: application/json" -d @'logstash-ecs_template.json'
curl -k --user $ES_USER:$ES_PASSWORD -XPUT "https://$ES_HOST:$ES_PORT/_template/logstash-ecs-fluentbit_template" --header "Content-Type: application/json" -d @'logstash-ecs-fluentbit_template.json'
curl -k --user $ES_USER:$ES_PASSWORD -XPUT "https://$ES_HOST:$ES_PORT/_template/logstash-ecs-zeek-mappings_template" --header "Content-Type: application/json" -d @'logstash-ecs-zeek-mappings_template.json'
```

bortok commented 3 years ago

Looks like we stumbled on some breaking changes between ES/OS versions. Which OpenSearch version are you using?

aboubamba commented 3 years ago

Possibly. It's OpenSearch 1.1.0.

bortok commented 3 years ago

Abou, could you please update logstash-ecs-fluentbit_template.json to match the changes I just committed and try running ./templates.update again? This is the link to see the changes: 92fccc98fd4ede6760b4bd9351adff3251a04023
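Editor's note on the likely root cause (an inference, not stated in the thread): `index.lifecycle.name` is an Elasticsearch ILM setting, and OpenSearch 1.x, which replaced ILM with Index State Management, rejects it as an unknown setting. The offending portion of an index template would look something like this illustrative fragment (pattern and policy names hypothetical, not the repo's actual file):

```json
{
  "index_patterns": ["logstash-ecs-fluentbit-*"],
  "settings": {
    "index.lifecycle.name": "logstash-ecs-policy"
  }
}
```

Removing the `index.lifecycle.*` settings (or swapping in an ISM policy) is the kind of change the referenced commit would need to make for OpenSearch compatibility.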