nshou / elasticsearch-kibana

Simple and lightweight Docker image for previewing Elasticsearch and Kibana.
https://hub.docker.com/r/nshou/elasticsearch-kibana
MIT License

elasticsearch restart failed #10

Closed: cndavy closed this issue 4 years ago

cndavy commented 4 years ago

[2019-12-17T07:32:21,944][INFO ][o.e.e.NodeEnvironment ] [node-1] using [1] data paths, mounts [[/home/elasticsearch/elasticsearch/data (/dev/mapper/datavg-lvdata)]], net usable_space [79.4gb], net total_space [196.7gb], types [ext4]
[2019-12-17T07:32:21,950][INFO ][o.e.e.NodeEnvironment ] [node-1] heap size [10.9gb], compressed ordinary object pointers [true]
[2019-12-17T07:33:03,265][INFO ][o.e.n.Node ] [node-1] node name [node-1], node ID [YHdphabORfq5n130gqDZdg], cluster name [elasticsearch]
[2019-12-17T07:33:03,266][INFO ][o.e.n.Node ] [node-1] version[7.1.1], pid[6], build[oss/tar/7a013de/2019-05-23T14:04:00.380842Z], OS[Linux/3.10.0-327.el7.x86_64/amd64], JVM[IcedTea/OpenJDK 64-Bit Server VM/1.8.0_212/25.212-b04]
[2019-12-17T07:33:03,267][INFO ][o.e.n.Node ] [node-1] JVM home [/usr/lib/jvm/java-1.8-openjdk/jre]
[2019-12-17T07:33:03,267][INFO ][o.e.n.Node ] [node-1] JVM arguments [-Xms11g, -Xmx11g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -Des.networkaddress.cache.ttl=60, -Des.networkaddress.cache.negative.ttl=10, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=/home/elasticsearch/elasticsearch.tmp, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -XX:+PrintGCDetails, -XX:+PrintGCDateStamps, -XX:+PrintTenuringDistribution, -XX:+PrintGCApplicationStoppedTime, -Xloggc:logs/gc.log, -XX:+UseGCLogFileRotation, -XX:NumberOfGCLogFiles=32, -XX:GCLogFileSize=64m, -Dio.netty.allocator.type=pooled, -Des.path.home=/home/elasticsearch/elasticsearch, -Des.path.conf=/home/elasticsearch/elasticsearch/config, -Des.distribution.flavor=oss, -Des.distribution.type=tar, -Des.bundled_jdk=true]
[2019-12-17T07:33:04,776][INFO ][o.e.p.PluginsService ] [node-1] loaded module [aggs-matrix-stats]
[2019-12-17T07:33:04,777][INFO ][o.e.p.PluginsService ] [node-1] loaded module [analysis-common]
[2019-12-17T07:33:04,777][INFO ][o.e.p.PluginsService ] [node-1] loaded module [ingest-common]
[2019-12-17T07:33:04,778][INFO ][o.e.p.PluginsService ] [node-1] loaded module [ingest-geoip]
[2019-12-17T07:33:04,778][INFO ][o.e.p.PluginsService ] [node-1] loaded module [ingest-user-agent]
[2019-12-17T07:33:04,779][INFO ][o.e.p.PluginsService ] [node-1] loaded module [lang-expression]
[2019-12-17T07:33:04,779][INFO ][o.e.p.PluginsService ] [node-1] loaded module [lang-mustache]
[2019-12-17T07:33:04,779][INFO ][o.e.p.PluginsService ] [node-1] loaded module [lang-painless]
[2019-12-17T07:33:04,780][INFO ][o.e.p.PluginsService ] [node-1] loaded module [mapper-extras]
[2019-12-17T07:33:04,780][INFO ][o.e.p.PluginsService ] [node-1] loaded module [parent-join]
[2019-12-17T07:33:04,780][INFO ][o.e.p.PluginsService ] [node-1] loaded module [percolator]
[2019-12-17T07:33:04,781][INFO ][o.e.p.PluginsService ] [node-1] loaded module [rank-eval]
[2019-12-17T07:33:04,781][INFO ][o.e.p.PluginsService ] [node-1] loaded module [reindex]
[2019-12-17T07:33:04,781][INFO ][o.e.p.PluginsService ] [node-1] loaded module [repository-url]
[2019-12-17T07:33:04,782][INFO ][o.e.p.PluginsService ] [node-1] loaded module [transport-netty4]
[2019-12-17T07:33:04,782][INFO ][o.e.p.PluginsService ] [node-1] no plugins loaded
[2019-12-17T07:33:37,099][INFO ][o.e.d.DiscoveryModule ] [node-1] using discovery type [zen] and seed hosts providers [settings]
[2019-12-17T07:33:37,884][INFO ][o.e.n.Node ] [node-1] initialized
[2019-12-17T07:33:37,885][INFO ][o.e.n.Node ] [node-1] starting ...
[2019-12-17T07:33:38,145][INFO ][o.e.t.TransportService ] [node-1] publish_address {127.0.0.1:9300}, bound_addresses {[::1]:9300}, {127.0.0.1:9300}
[2019-12-17T07:33:38,299][INFO ][o.e.c.c.Coordinator ] [node-1] cluster UUID [tDcYXOrfTMyU5_EpThEXLQ]
[2019-12-17T07:33:38,690][INFO ][o.e.c.s.MasterService ] [node-1] elected-as-master ([1] nodes joined)[{node-1}{YHdphabORfq5n130gqDZdg}{P7f88KVWSLalwaJ35KBgkg}{127.0.0.1}{127.0.0.1:9300} elect leader, _BECOME_MASTER_TASK_, _FINISH_ELECTION_], term: 6, version: 38715, reason: master node changed {previous [], current [{node-1}{YHdphabORfq5n130gqDZdg}{P7f88KVWSLalwaJ35KBgkg}{127.0.0.1}{127.0.0.1:9300}]}
[2019-12-17T07:34:08,345][WARN ][o.e.n.Node ] [node-1] timed out while waiting for initial discovery state - timeout: 30s

[2019-12-17T07:34:08,365][INFO ][o.e.n.Node ] [node-1] started
[2019-12-17T07:34:09,539][DEBUG][o.e.a.a.i.g.TransportGetIndexAction] [node-1] no known master node, scheduling a retry
[2019-12-17T07:34:38,457][INFO ][o.e.c.c.JoinHelper ] [node-1] failed to join {node-1}{YHdphabORfq5n130gqDZdg}{P7f88KVWSLalwaJ35KBgkg}{127.0.0.1}{127.0.0.1:9300} with JoinRequest{sourceNode={node-1}{YHdphabORfq5n130gqDZdg}{P7f88KVWSLalwaJ35KBgkg}{127.0.0.1}{127.0.0.1:9300}, optionalJoin=Optional[Join{term=6, lastAcceptedTerm=1, lastAcceptedVersion=38714, sourceNode={node-1}{YHdphabORfq5n130gqDZdg}{P7f88KVWSLalwaJ35KBgkg}{127.0.0.1}{127.0.0.1:9300}, targetNode={node-1}{YHdphabORfq5n130gqDZdg}{P7f88KVWSLalwaJ35KBgkg}{127.0.0.1}{127.0.0.1:9300}}]}
org.elasticsearch.transport.ReceiveTimeoutTransportException: [node-1][127.0.0.1:9300][internal:cluster/coordination/join] request_id [3] timed out after [60050ms]
    at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:1026) [elasticsearch-7.1.1.jar:7.1.1]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:681) [elasticsearch-7.1.1.jar:7.1.1]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_212]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_212]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_212]
[2019-12-17T07:34:38,467][INFO ][o.e.c.c.JoinHelper ] [node-1] failed to join {node-1}{YHdphabORfq5n130gqDZdg}{P7f88KVWSLalwaJ35KBgkg}{127.0.0.1}{127.0.0.1:9300} with JoinRequest{sourceNode={node-1}{YHdphabORfq5n130gqDZdg}{P7f88KVWSLalwaJ35KBgkg}{127.0.0.1}{127.0.0.1:9300}, optionalJoin=Optional[Join{term=6, lastAcceptedTerm=1, lastAcceptedVersion=38714, sourceNode={node-1}{YHdphabORfq5n130gqDZdg}{P7f88KVWSLalwaJ35KBgkg}{127.0.0.1}{127.0.0.1:9300}, targetNode={node-1}{YHdphabORfq5n130gqDZdg}{P7f88KVWSLalwaJ35KBgkg}{127.0.0.1}{127.0.0.1:9300}}]}
org.elasticsearch.transport.ReceiveTimeoutTransportException: [node-1][127.0.0.1:9300][internal:cluster/coordination/join] request_id [3] timed out after [60050ms]
    at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:1026) [elasticsearch-7.1.1.jar:7.1.1]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:681) [elasticsearch-7.1.1.jar:7.1.1]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_212]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_212]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_212]
[2019-12-17T07:34:39,539][DEBUG][o.e.a.a.i.g.TransportGetIndexAction] [node-1] timed out while retrying [indices:admin/get] after failure (timeout [30s])
[2019-12-17T07:34:39,541][WARN ][r.suppressed ] [node-1] path: /.kibana, params: {index=.kibana}
org.elasticsearch.discovery.MasterNotDiscoveredException: null
    at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$4.onTimeout(TransportMasterNodeAction.java:259) [elasticsearch-7.1.1.jar:7.1.1]
    at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:322) [elasticsearch-7.1.1.jar:7.1.1]
    at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:249) [elasticsearch-7.1.1.jar:7.1.1]
    at org.elasticsearch.cluster.service.ClusterApplierService$NotifyTimeout.run(ClusterApplierService.java:555) [elasticsearch-7.1.1.jar:7.1.1]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:681) [elasticsearch-7.1.1.jar:7.1.1]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_212]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_212]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_212]
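In short, the node starts but a master is never successfully established, so master-level requests such as Kibana's lookup of /.kibana fail with MasterNotDiscoveredException. Once the node logs "started", its own view of the cluster can be checked over the HTTP port; a minimal diagnostic sketch, assuming the default endpoint localhost:9200 (these are standard cluster APIs, not specific to this image):

# Quick checks against the node's HTTP endpoint (assumption: localhost:9200).
curl -s 'http://localhost:9200/_cluster/health?pretty'   # cluster status and whether a master is elected
curl -s 'http://localhost:9200/_cat/master?v'            # the currently elected master, if any
curl -s 'http://localhost:9200/_cat/nodes?v'             # nodes visible from this node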

cndavy commented 4 years ago

# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
#cluster.name: elasticsearch
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1
#
# Add custom attributes to the node:
#
node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /path/to/data
#
# Path to log files:
#
path.logs: /path/to/logs
#

# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 192.168.0.1
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 1
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
action.destructive_requires_name: true

indices.fielddata.cache.size: 2g
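The discovery block is the part of this file that matters for the log above: with discovery type [zen] and seed hosts from settings, the node must find and join a master before master-level requests such as Kibana's /.kibana lookup can succeed. A minimal sketch of how that block is commonly written for a one-node 7.x cluster follows; the config path comes from the JVM arguments in the log, and everything else is an assumption rather than a confirmed fix for this report:

# Sketch for a single-node 7.x setup (assumes exactly one node; adjust the path to your install).
cat >> /home/elasticsearch/elasticsearch/config/elasticsearch.yml <<'EOF'
# Option A: let this node bootstrap itself as the master on first start.
cluster.initial_master_nodes: ["node-1"]
# Option B: single-node discovery instead; discovery.seed_hosts and the line
# above are then unnecessary and should stay commented out.
#discovery.type: single-node
EOF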

nshou commented 4 years ago

Hello, could you tell me how we can reproduce this issue, in particular how you restarted ES? The command lines you used would be enough.
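With the stock image, the start and restart commands would typically be something like the sketch below (the port mappings are the Elasticsearch and Kibana defaults, the container name is arbitrary, and the tar paths in the log above suggest the actual setup may differ):

# Assumed reproduction path, not confirmed by the reporter.
docker run -d -p 9200:9200 -p 5601:5601 --name eskibana nshou/elasticsearch-kibana   # initial start
docker restart eskibana                                                              # the restart step in question
docker logs -f eskibana                                                              # follow Elasticsearch/Kibana output after the restart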

nshou commented 4 years ago

I'll be closing this if there are no further concerns about it.