telekom-security / tpotce

🍯 T-Pot - The All In One Multi Honeypot Platform 🐝

Elastic did not load properly. Check the server output for more information. #1618

Open · atbohmer opened 1 month ago

atbohmer commented 1 month ago

Kibana, which had been working, failed 2 days ago with: "Elastic did not load properly. Check the server output for more information." I just created a fresh Debian 12 install with the latest git pull and the error persisted.

:~/tpotce$ cat version
24.04.0

Elasticsearch itself is GREEN and logging; only Kibana failed. And hey, after today's update Kibana is back again!

Kind regards, Andre
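
A quick way to tell an Elasticsearch failure from a Kibana-only failure is to check both containers directly on the host. A minimal sketch, assuming T-Pot's default container names (elasticsearch, kibana) and that curl is available inside the Elasticsearch image:

# Are both containers up?
sudo docker ps | grep -E 'elasticsearch|kibana'

# Kibana's own startup errors:
sudo docker logs kibana --tail 50

# Cluster health from inside the Elasticsearch container
# (security is disabled in T-Pot's Elasticsearch config, so no auth is needed):
sudo docker exec elasticsearch curl -s http://localhost:9200/_cluster/health?pretty

A status of "green" here while Kibana still fails points at Kibana (or one of its plugins) rather than Elasticsearch, which matches the log excerpts below.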

atbohmer commented 1 month ago

Elasticsearch:

[2024-07-11T14:13:53,240][INFO ][o.e.p.PluginsService     ] [tpotcluster-node-01] loaded module [x-pack-eql]
[2024-07-11T14:13:54,055][INFO ][o.e.e.NodeEnvironment    ] [tpotcluster-node-01] using [1] data paths, mounts [[/data (/dev/sda1)]], net usable_space [82.4gb], net total_space [96.9gb], types [ext4]
[2024-07-11T14:13:54,055][INFO ][o.e.e.NodeEnvironment    ] [tpotcluster-node-01] heap size [2gb], compressed ordinary object pointers [true]
[2024-07-11T14:13:54,203][INFO ][o.e.n.Node               ] [tpotcluster-node-01] node name [tpotcluster-node-01], node ID [w3WOKS8NQIOLL15ksnFqAQ], cluster name [tpotcluster], roles [data_frozen, ml, data_hot, transform, data_content, data_warm, master, remote_cluster_client, data, data_cold, ingest]
[2024-07-11T14:13:58,399][INFO ][o.e.f.FeatureService     ] [tpotcluster-node-01] Registered local node features [data_stream.auto_sharding, data_stream.lifecycle.global_retention, data_stream.rollover.lazy, desired_node.version_deprecated, esql.agg_values, esql.async_query, esql.base64_decode_encode, esql.disable_nullable_opts, esql.from_options, esql.metadata_fields, esql.mv_sort, esql.spatial_points_from_source, esql.spatial_shapes, esql.st_centroid_agg, esql.st_contains_within, esql.st_disjoint, esql.st_intersects, esql.st_x_y, esql.string_literal_auto_casting, esql.string_literal_auto_casting_extended, features_supported, file_settings, health.dsl.info, health.extended_repository_indicator, knn_retriever_supported, license-trial-independent-version, retrievers_supported, rrf_retriever_supported, standard_retriever_supported, usage.data_tiers.precalculate_stats]
[2024-07-11T14:13:58,918][INFO ][o.e.t.a.APM              ] [tpotcluster-node-01] Sending apm metrics is disabled
[2024-07-11T14:13:58,919][INFO ][o.e.t.a.APM              ] [tpotcluster-node-01] Sending apm tracing is disabled
[2024-07-11T14:13:58,941][INFO ][o.e.x.s.Security         ] [tpotcluster-node-01] Security is disabled
[2024-07-11T14:13:59,194][INFO ][o.e.x.w.Watcher          ] [tpotcluster-node-01] Watcher initialized components at 2024-07-11T14:13:59.193Z
[2024-07-11T14:13:59,262][INFO ][o.e.x.p.ProfilingPlugin  ] [tpotcluster-node-01] Profiling is enabled
[2024-07-11T14:13:59,278][INFO ][o.e.x.p.ProfilingPlugin  ] [tpotcluster-node-01] profiling index templates will not be installed or reinstalled
[2024-07-11T14:13:59,285][INFO ][o.e.x.a.APMPlugin        ] [tpotcluster-node-01] APM ingest plugin is disabled
[2024-07-11T14:13:59,729][INFO ][o.e.t.n.NettyAllocator   ] [tpotcluster-node-01] creating NettyAllocator with the following configs: [name=elasticsearch_configured, chunk_size=1mb, suggested_max_allocation_size=1mb, factors={es.unsafe.use_netty_default_chunk_and_page_size=false, g1gc_enabled=true, g1gc_region_size=4mb}]
[2024-07-11T14:13:59,753][INFO ][o.e.i.r.RecoverySettings ] [tpotcluster-node-01] using rate limit [40mb] with [default=40mb, read=0b, write=0b, max=0b]
[2024-07-11T14:13:59,787][INFO ][o.e.d.DiscoveryModule    ] [tpotcluster-node-01] using discovery type [single-node] and seed hosts providers [settings]
[2024-07-11T14:14:00,731][INFO ][o.e.n.Node               ] [tpotcluster-node-01] initialized
[2024-07-11T14:14:00,734][INFO ][o.e.n.Node               ] [tpotcluster-node-01] starting ...
[2024-07-11T14:14:00,756][INFO ][o.e.x.s.c.f.PersistentCache] [tpotcluster-node-01] persistent cache index loaded
[2024-07-11T14:14:00,757][INFO ][o.e.x.d.l.DeprecationIndexingComponent] [tpotcluster-node-01] deprecation component started
[2024-07-11T14:14:00,836][INFO ][o.e.t.TransportService   ] [tpotcluster-node-01] publish_address {127.0.0.1:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
[2024-07-11T14:14:01,520][WARN ][o.e.b.BootstrapChecks    ] [tpotcluster-node-01] max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]; for more information see [https://www.elastic.co/guide/en/elasticsearch/reference/8.14/_maximum_map_count_check.html]
[2024-07-11T14:14:01,522][INFO ][o.e.c.c.ClusterBootstrapService] [tpotcluster-node-01] this node is locked into cluster UUID [tMPlXmeKSP2GfGOcqAY1LA] and will not attempt further cluster bootstrapping
[2024-07-11T14:14:01,633][INFO ][o.e.c.s.MasterService    ] [tpotcluster-node-01] elected-as-master ([1] nodes joined in term 2)[_FINISH_ELECTION_, {tpotcluster-node-01}{w3WOKS8NQIOLL15ksnFqAQ}{BteEIyIBSzCoMFBOpnmMvQ}{tpotcluster-node-01}{127.0.0.1}{127.0.0.1:9300}{cdfhilmrstw}{8.14.2}{7000099-8505000} completing election], term: 2, version: 144, delta: master node changed {previous [], current [{tpotcluster-node-01}{w3WOKS8NQIOLL15ksnFqAQ}{BteEIyIBSzCoMFBOpnmMvQ}{tpotcluster-node-01}{127.0.0.1}{127.0.0.1:9300}{cdfhilmrstw}{8.14.2}{7000099-8505000}]}
[2024-07-11T14:14:01,693][INFO ][o.e.c.s.ClusterApplierService] [tpotcluster-node-01] master node changed {previous [], current [{tpotcluster-node-01}{w3WOKS8NQIOLL15ksnFqAQ}{BteEIyIBSzCoMFBOpnmMvQ}{tpotcluster-node-01}{127.0.0.1}{127.0.0.1:9300}{cdfhilmrstw}{8.14.2}{7000099-8505000}]}, term: 2, version: 144, reason: Publication{term=2, version=144}
[2024-07-11T14:14:01,722][INFO ][o.e.c.f.AbstractFileWatchingService] [tpotcluster-node-01] starting file watcher ...
[2024-07-11T14:14:01,731][INFO ][o.e.c.f.AbstractFileWatchingService] [tpotcluster-node-01] file settings service up and running [tid=50]
[2024-07-11T14:14:01,735][INFO ][o.e.c.c.NodeJoinExecutor ] [tpotcluster-node-01] node-join: [{tpotcluster-node-01}{w3WOKS8NQIOLL15ksnFqAQ}{BteEIyIBSzCoMFBOpnmMvQ}{tpotcluster-node-01}{127.0.0.1}{127.0.0.1:9300}{cdfhilmrstw}{8.14.2}{7000099-8505000}] with reason [completing election]
[2024-07-11T14:14:01,737][INFO ][o.e.r.s.FileSettingsService] [tpotcluster-node-01] setting file [/etc/elasticsearch/operator/settings.json] not found, initializing [file_settings] as empty
[2024-07-11T14:14:01,749][INFO ][o.e.h.AbstractHttpServerTransport] [tpotcluster-node-01] publish_address {192.168.32.3:9200}, bound_addresses {[::]:9200}
[2024-07-11T14:14:01,775][INFO ][o.e.n.Node               ] [tpotcluster-node-01] started {tpotcluster-node-01}{w3WOKS8NQIOLL15ksnFqAQ}{BteEIyIBSzCoMFBOpnmMvQ}{tpotcluster-node-01}{127.0.0.1}{127.0.0.1:9300}{cdfhilmrstw}{8.14.2}{7000099-8505000}{xpack.installed=true, transform.config_version=10.0.0, ml.config_version=12.0.0}
[2024-07-11T14:14:02,115][INFO ][o.e.l.ClusterStateLicenseService] [tpotcluster-node-01] license [66da837b-e7b9-4132-b0b2-440a56ac4c89] mode [basic] - valid
[2024-07-11T14:14:02,118][INFO ][o.e.g.GatewayService     ] [tpotcluster-node-01] recovered [30] indices into cluster_state
[2024-07-11T14:14:02,224][INFO ][o.e.h.n.s.HealthNodeTaskExecutor] [tpotcluster-node-01] Node [{tpotcluster-node-01}{w3WOKS8NQIOLL15ksnFqAQ}] is selected as the current health node.
[2024-07-11T14:14:03,812][INFO ][o.e.c.r.a.AllocationService] [tpotcluster-node-01] current.health="GREEN" message="Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[.apm-source-map][0]]])." previous.health="RED" reason="shards started [[.apm-source-map][0]]"
[2024-07-11T14:14:44,604][INFO ][o.e.c.s.IndexScopedSettings] [tpotcluster-node-01] [.kibana-observability-ai-assistant-conversations-000001] updating [index.mapping.total_fields.limit] from [1000] to [10000]
[2024-07-11T14:14:44,661][INFO ][o.e.c.s.IndexScopedSettings] [tpotcluster-node-01] [.kibana-observability-ai-assistant-conversations-000001] updating [index.mapping.total_fields.limit] from [1000] to [10000]
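
Note the BootstrapChecks warning above (14:14:01): vm.max_map_count is 65530 but the bootstrap check wants at least 262144. The current value can be confirmed on the Docker host with plain sysctl:

sysctl vm.max_map_count
# vm.max_map_count = 65530  (the kernel default, matching the warning above)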
atbohmer commented 1 month ago

Kibana:

[2024-07-11T14:14:44.769+00:00][INFO ][plugins.elasticAssistant.service] Installing index template .kibana-elastic-ai-assistant-index-template-anonymization-fields
[2024-07-11T14:14:44.815+00:00][ERROR][plugins.observabilityAIAssistant.service] Failed to initialize service: parse_exception
    Root causes:
        parse_exception: No processor type exists with name [inference]
[2024-07-11T14:14:44.815+00:00][ERROR][plugins.observabilityAIAssistant.service] Could not index 7 entries because of an initialisation error
[2024-07-11T14:14:44.816+00:00][ERROR][plugins.observabilityAIAssistant.service] ResponseError: parse_exception
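
The "No processor type exists with name [inference]" error comes from Kibana's observability AI assistant trying to set up an ingest pipeline that uses the inference processor, which this Elasticsearch setup does not provide. A hedged way to list the ingest processors a node actually offers (again assuming the container is named elasticsearch):

sudo docker exec elasticsearch curl -s 'http://localhost:9200/_nodes/ingest?pretty' | grep '"type"'

If inference is missing from that list, the assistant plugin fails to initialize; that breaks the plugin itself but should not take down Kibana as a whole.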
t3chn0m4g3 commented 1 month ago

Have you tried the suggestion from the log you provided?

[2024-07-11T14:14:01,520][WARN ][o.e.b.BootstrapChecks ] [tpotcluster-node-01] max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]; for more information see [https://www.elastic.co/guide/en/elasticsearch/reference/8.14/_maximum_map_count_check.html]

Open /etc/sysctl.conf (e.g. sudo micro /etc/sysctl.conf) and add the following line ...

vm.max_map_count=262144

... then reboot.
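
The value can also be applied to the running kernel right away, so a reboot is not strictly required. This is standard sysctl usage, nothing T-Pot specific:

# Apply immediately, without a reboot:
sudo sysctl -w vm.max_map_count=262144
# Verify:
sysctl vm.max_map_count

The line in /etc/sysctl.conf is still what makes the value survive the next reboot.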

atbohmer commented 1 month ago

(Quoting t3chn0m4g3's suggestion above.)

No, but today's update brought Kibana back from the dead. Thanks for the suggestion; I implemented it anyway.
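
For completeness, after a reboot the fix can be verified like this (the container name elasticsearch is an assumption; check docker ps for the actual name):

sysctl vm.max_map_count
# expect: vm.max_map_count = 262144
sudo docker logs elasticsearch 2>&1 | grep -i max_map_count
# expect: no new "is too low" warning since the restart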