dadoonet / fscrawler

Elasticsearch File System Crawler (FS Crawler)
https://fscrawler.readthedocs.io/
Apache License 2.0

Is there any limit on level of subfolders which crawler can scan? #772

Closed Neel-Gagan closed 4 years ago

Neel-Gagan commented 5 years ago

ES version 6.8.0, FSCrawler 2.6.

I am facing the error "Got a hard failure when executing the bulk request" when indexing a folder that is nested very deep in subfolders (e.g. Folder 1/Folder 2/Folder 3/Folder 4/Folder 5/Folder 6). When I run the crawler separately on the folder where I get the "hard failure", indexing completes without any issue, but crawling it through the nested subfolders causes this error. I want to know: is there any limit on the number of characters in a filename which the crawler can crawl?
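One thing worth ruling out on Windows before blaming nesting depth: the classic MAX_PATH limit of 260 characters, beyond which some file APIs misbehave. The following is a hypothetical diagnostic sketch, not part of FS Crawler; the class name LongPathCheck and the helper findLongPaths are made up for illustration.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Hypothetical diagnostic, not part of FS Crawler: list entries whose absolute
// path exceeds a given length. On Windows the classic MAX_PATH limit is 260
// characters, so deeply nested folders are worth checking before suspecting
// the crawler itself.
public class LongPathCheck {

    // Walks the tree under root and returns every path whose absolute string
    // form is longer than maxLen characters.
    public static List<Path> findLongPaths(Path root, int maxLen) throws IOException {
        try (Stream<Path> paths = Files.walk(root)) {
            return paths
                    .filter(p -> p.toAbsolutePath().toString().length() > maxLen)
                    .collect(Collectors.toList());
        }
    }

    public static void main(String[] args) throws IOException {
        Path root = Path.of(args.length > 0 ? args[0] : ".");
        for (Path p : findLongPaths(root, 260)) {
            System.out.println("exceeds 260 chars: " + p);
        }
    }
}
```

Running it against the crawled root would show whether any file actually crosses the 260-character boundary.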

dadoonet commented 5 years ago

I have a test which is pretty stable: https://github.com/dadoonet/fscrawler/blob/c14e588802b02f9959176b88a290ae81602b1fbd/integration-tests/it-common/src/main/java/fr/pilato/elasticsearch/crawler/fs/test/integration/FsCrawlerTestSubDirsIT.java#L122

It can have up to 100 subdirectories. Each dir name has up to 5 chars.

So I don't know if there's a limit (probably there is one).

What kind of error are you seeing?

Neel-Gagan commented 5 years ago

Before the bulk failure error, in the crawler logs I see "computing virtual name ..../folder 1/folder 2/......"; it ends with dots and does not show the complete folder name. But when the run is performed separately on that folder, it indexes nicely.

Each dir name has up to 5 chars.

Would there be an issue if a directory name has more than 5 characters?

dadoonet commented 5 years ago

before the bulk failure error, in the crawler logs I see computing virtual name

Can you share the logs please?

Each dir name has up to 5 chars. Would there be an issue if a directory name has more than 5 characters?

No. I don't think so.

Neel-Gagan commented 5 years ago

FSCrawler logs:

15:56:27,499 DEBUG [f.p.e.c.f.FsParserAbstract] Indexing d_march_2019/_doc/5cb7a4f367fbd22a3e6a5d43c4d6b9c?pipeline=null
15:56:27,499 DEBUG [f.p.e.c.f.FsParserAbstract] Looking for removed files in [D:\Test Reports Folder\Folder A\Folder B 2019\Mar 2019\22032019\Report Dated 22 March 2019\Report-2  22-03-2019]...
15:56:28,531 WARN  [f.p.e.c.f.FsParserAbstract] Error while crawling D:\Test Reports Folder\Folder A\Folder B 2019\Mar 2019: Connection refused: no further information
15:56:28,531 WARN  [f.p.e.c.f.FsParserAbstract] Full stacktrace
java.net.ConnectException: Connection refused: no further information
    at org.elasticsearch.client.RestClient$SyncResponseListener.get(RestClient.java:949) ~[elasticsearch-rest-client-6.5.3.jar:6.5.3]
    at org.elasticsearch.client.RestClient.performRequest(RestClient.java:229) ~[elasticsearch-rest-client-6.5.3.jar:6.5.3]
    at org.elasticsearch.client.RestHighLevelClient.internalPerformRequest(RestHighLevelClient.java:1593) ~[elasticsearch-rest-high-level-client-6.5.3.jar:6.5.3]
    at org.elasticsearch.client.RestHighLevelClient.performRequest(RestHighLevelClient.java:1563) ~[elasticsearch-rest-high-level-client-6.5.3.jar:6.5.3]
    at org.elasticsearch.client.RestHighLevelClient.performRequestAndParseEntity(RestHighLevelClient.java:1525) ~[elasticsearch-rest-high-level-client-6.5.3.jar:6.5.3]
    at org.elasticsearch.client.RestHighLevelClient.search(RestHighLevelClient.java:990) ~[elasticsearch-rest-high-level-client-6.5.3.jar:6.5.3]
    at fr.pilato.elasticsearch.crawler.fs.client.v6.ElasticsearchClientV6.search(ElasticsearchClientV6.java:482) ~[fscrawler-elasticsearch-client-v6-2.6.jar:?]
    at fr.pilato.elasticsearch.crawler.fs.FsParserAbstract.getFileDirectory(FsParserAbstract.java:363) ~[fscrawler-core-2.6.jar:?]
    at fr.pilato.elasticsearch.crawler.fs.FsParserAbstract.addFilesRecursively(FsParserAbstract.java:317) ~[fscrawler-core-2.6.jar:?]
    at fr.pilato.elasticsearch.crawler.fs.FsParserAbstract.addFilesRecursively(FsParserAbstract.java:299) ~[fscrawler-core-2.6.jar:?]
    at fr.pilato.elasticsearch.crawler.fs.FsParserAbstract.addFilesRecursively(FsParserAbstract.java:299) ~[fscrawler-core-2.6.jar:?]
    at fr.pilato.elasticsearch.crawler.fs.FsParserAbstract.addFilesRecursively(FsParserAbstract.java:299) ~[fscrawler-core-2.6.jar:?]
    at fr.pilato.elasticsearch.crawler.fs.FsParserAbstract.run(FsParserAbstract.java:157) [fscrawler-core-2.6.jar:?]
    at java.lang.Thread.run(Unknown Source) [?:1.8.0_171]
Caused by: java.net.ConnectException: Connection refused: no further information
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[?:1.8.0_171]
    at sun.nio.ch.SocketChannelImpl.finishConnect(Unknown Source) ~[?:1.8.0_171]
    at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvent(DefaultConnectingIOReactor.java:171) ~[httpcore-nio-4.4.5.jar:4.4.5]
    at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvents(DefaultConnectingIOReactor.java:145) ~[httpcore-nio-4.4.5.jar:4.4.5]
    at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.execute(AbstractMultiworkerIOReactor.java:348) ~[httpcore-nio-4.4.5.jar:4.4.5]
    at org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.execute(PoolingNHttpClientConnectionManager.java:192) ~[httpasyncclient-4.1.2.jar:4.1.2]
    at org.apache.http.impl.nio.client.CloseableHttpAsyncClientBase$1.run(CloseableHttpAsyncClientBase.java:64) ~[httpasyncclient-4.1.2.jar:4.1.2]
    ... 1 more
15:56:28,531 INFO  [f.p.e.c.f.FsParserAbstract] FS crawler is stopping after 1 run
15:56:28,562 DEBUG [f.p.e.c.f.FsCrawlerImpl] Closing FS crawler [c_march_2019]
15:56:28,562 DEBUG [f.p.e.c.f.FsCrawlerImpl] FS crawler thread is now stopped
15:56:28,562 DEBUG [f.p.e.c.f.c.v.ElasticsearchClientV6] Closing Elasticsearch client manager
15:56:29,593 WARN  [f.p.e.c.f.c.v.ElasticsearchClientV6] Got a hard failure when executing the bulk request
java.net.ConnectException: Connection refused: no further information
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) ~[?:1.8.0_171]
    at sun.nio.ch.SocketChannelImpl.finishConnect(Unknown Source) ~[?:1.8.0_171]
    at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvent(DefaultConnectingIOReactor.java:171) [httpcore-nio-4.4.5.jar:4.4.5]
    at org.apache.http.impl.nio.reactor.DefaultConnectingIOReactor.processEvents(DefaultConnectingIOReactor.java:145) [httpcore-nio-4.4.5.jar:4.4.5]
    at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor.execute(AbstractMultiworkerIOReactor.java:348) [httpcore-nio-4.4.5.jar:4.4.5]
    at org.apache.http.impl.nio.conn.PoolingNHttpClientConnectionManager.execute(PoolingNHttpClientConnectionManager.java:192) [httpasyncclient-4.1.2.jar:4.1.2]
    at org.apache.http.impl.nio.client.CloseableHttpAsyncClientBase$1.run(CloseableHttpAsyncClientBase.java:64) [httpasyncclient-4.1.2.jar:4.1.2]
    at java.lang.Thread.run(Unknown Source) [?:1.8.0_171]
15:56:29,625 DEBUG [f.p.e.c.f.FsCrawlerImpl] ES Client Manager stopped
15:56:29,625 INFO  [f.p.e.c.f.FsCrawlerImpl] FS crawler [d_march_2019] stopped
15:56:29,625 DEBUG [f.p.e.c.f.FsCrawlerImpl] Closing FS crawler [c_march_2019]
15:56:29,625 DEBUG [f.p.e.c.f.FsCrawlerImpl] FS crawler thread is now stopped
15:56:29,625 DEBUG [f.p.e.c.f.c.v.ElasticsearchClientV6] Closing Elasticsearch client manager
15:56:29,625 DEBUG [f.p.e.c.f.FsCrawlerImpl] ES Client Manager stopped
15:56:29,625 INFO  [f.p.e.c.f.FsCrawlerImpl] FS crawler [d_march_2019] stopped

Settings file:

{
  "name" : "d_march_2019",
  "fs" : {
    "url" : "D:\\Test Reports Folder",
    "update_rate" : "15m",
    "excludes" : [ 
        "*/*.caf",
        "*/*.css",
        "*/*.js",
        "*/*.eot",
        "*/*.svg",
        "*/*.ttf",
        "*/*.woff",
        "*/*.woff2",
        "*/*.opus"  
    ],
    "json_support" : false,
    "filename_as_id" : false,
    "add_filesize" : true,
    "remove_deleted" : true,
    "add_as_inner_object" : false,
    "store_source" : false,
    "index_content" : true,
    "attributes_support" : false,
    "raw_metadata" : true,
    "xml_support" : false,
    "index_folders" : true,
    "lang_detect" : false,
    "indexed_chars" : "-1",
    "continue_on_error" : true, 
    "pdf_ocr" : true,   
    "ocr" : {
      "language" : "eng"

    }
  },
  "elasticsearch" : {
    "nodes" : [ {
      "host" : "127.0.0.1",
      "port" : 9200,
      "scheme" : "HTTP"
    } ],
    "bulk_size" : 100,
    "flush_interval" : "5s",
    "byte_size" : "10mb"
  },
  "rest" : {
    "scheme" : "HTTP",
    "host" : "127.0.0.1",
    "port" : 8080,
    "endpoint" : "fscrawler"
  }
}

dadoonet commented 5 years ago

That's strange. "Connection refused: no further information" seems to indicate that the cluster was not available. I probably need to implement a retry mechanism. But I'm curious whether you are seeing anything in the Elasticsearch logs from the same period.
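The retry idea could look something like the following minimal sketch, assuming the "Connection refused" is transient (e.g. the cluster restarting). The BulkAction interface and the executeWithRetry helper are hypothetical names for illustration, not FS Crawler or Elasticsearch client APIs.

```java
import java.util.concurrent.TimeUnit;

// Minimal sketch of retrying a failing bulk request with exponential backoff
// instead of failing hard on the first ConnectException. All names here are
// illustrative, not real FS Crawler APIs.
public class BulkRetrySketch {

    @FunctionalInterface
    interface BulkAction {
        void run() throws Exception;
    }

    // Runs the action, retrying up to maxAttempts times, doubling the delay
    // between attempts; returns true on success, false once attempts are exhausted.
    static boolean executeWithRetry(BulkAction action, int maxAttempts, long initialDelayMs)
            throws InterruptedException {
        long delay = initialDelayMs;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                action.run();
                return true;
            } catch (Exception e) { // e.g. java.net.ConnectException
                if (attempt == maxAttempts) {
                    return false;
                }
                TimeUnit.MILLISECONDS.sleep(delay);
                delay *= 2; // exponential backoff
            }
        }
        return false;
    }

    public static void main(String[] args) throws InterruptedException {
        int[] calls = {0};
        // Simulated transient failure: the first two attempts fail, the third succeeds.
        boolean ok = executeWithRetry(() -> {
            if (++calls[0] < 3) {
                throw new java.net.ConnectException("Connection refused");
            }
        }, 5, 10);
        System.out.println(ok ? "bulk request succeeded after " + calls[0] + " attempts"
                              : "giving up after " + calls[0] + " attempts");
    }
}
```

With something like this in place, a cluster that is briefly down during a restart would cost a few delayed retries rather than an aborted crawl.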

Neel-Gagan commented 5 years ago

I am attaching the ES logs for reference:

[2019-06-27T09:57:19,715][INFO ][o.e.c.m.MetaDataIndexTemplateService] [node_1] adding template [kibana_index_template:.kibana] for index patterns [.kibana]
[2019-06-27T13:27:46,857][INFO ][o.e.n.Node               ] [node_1] stopping ...
[2019-06-27T13:27:46,904][INFO ][o.e.x.w.WatcherService   ] [node_1] stopping watch service, reason [shutdown initiated]
[2019-06-27T13:27:48,263][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [node_1] [controller/2684] [Main.cc@148] Ml controller exiting
[2019-06-27T13:27:48,373][INFO ][o.e.x.m.p.NativeController] [node_1] Native controller process has stopped - no new native processes can be started
[2019-06-27T13:27:49,639][DEBUG][o.e.a.a.c.n.s.TransportNodesStatsAction] [node_1] failed to execute on node [ytQq3gaNQ3W0hDcyFf5X3Q]
org.elasticsearch.transport.SendRequestTransportException: [node_1][127.0.0.1:9300][cluster:monitor/nodes/stats[n]]
    at org.elasticsearch.transport.TransportService.sendRequestInternal(TransportService.java:644) ~[elasticsearch-6.8.0.jar:6.8.0]
    at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$1.sendRequest(SecurityServerTransportInterceptor.java:136) ~[?:?]
    at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:542) ~[elasticsearch-6.8.0.jar:6.8.0]
    at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:530) ~[elasticsearch-6.8.0.jar:6.8.0]
    at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.start(TransportNodesAction.java:194) ~[elasticsearch-6.8.0.jar:6.8.0]
    at org.elasticsearch.action.support.nodes.TransportNodesAction.doExecute(TransportNodesAction.java:91) ~[elasticsearch-6.8.0.jar:6.8.0]
    at org.elasticsearch.action.support.nodes.TransportNodesAction.doExecute(TransportNodesAction.java:54) ~[elasticsearch-6.8.0.jar:6.8.0]
    at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:167) ~[elasticsearch-6.8.0.jar:6.8.0]
    at org.elasticsearch.xpack.security.action.filter.SecurityActionFilter.apply(SecurityActionFilter.java:121) ~[?:?]
    at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:165) ~[elasticsearch-6.8.0.jar:6.8.0]
    at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:139) ~[elasticsearch-6.8.0.jar:6.8.0]
    at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:81) ~[elasticsearch-6.8.0.jar:6.8.0]
    at org.elasticsearch.client.node.NodeClient.executeLocally(NodeClient.java:87) ~[elasticsearch-6.8.0.jar:6.8.0]
    at org.elasticsearch.client.node.NodeClient.doExecute(NodeClient.java:76) ~[elasticsearch-6.8.0.jar:6.8.0]
    at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:403) ~[elasticsearch-6.8.0.jar:6.8.0]
    at org.elasticsearch.client.support.AbstractClient$ClusterAdmin.execute(AbstractClient.java:719) ~[elasticsearch-6.8.0.jar:6.8.0]
    at org.elasticsearch.client.support.AbstractClient$ClusterAdmin.nodesStats(AbstractClient.java:822) ~[elasticsearch-6.8.0.jar:6.8.0]
    at org.elasticsearch.cluster.InternalClusterInfoService.updateNodeStats(InternalClusterInfoService.java:252) ~[elasticsearch-6.8.0.jar:6.8.0]
    at org.elasticsearch.cluster.InternalClusterInfoService.refresh(InternalClusterInfoService.java:288) ~[elasticsearch-6.8.0.jar:6.8.0]
    at org.elasticsearch.cluster.InternalClusterInfoService.maybeRefresh(InternalClusterInfoService.java:273) ~[elasticsearch-6.8.0.jar:6.8.0]
    at org.elasticsearch.cluster.InternalClusterInfoService.access$200(InternalClusterInfoService.java:65) ~[elasticsearch-6.8.0.jar:6.8.0]
    at org.elasticsearch.cluster.InternalClusterInfoService$SubmitReschedulingClusterInfoUpdatedJob.lambda$run$0(InternalClusterInfoService.java:220) ~[elasticsearch-6.8.0.jar:6.8.0]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:681) ~[elasticsearch-6.8.0.jar:6.8.0]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [?:1.8.0_171]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [?:1.8.0_171]
    at java.lang.Thread.run(Unknown Source) [?:1.8.0_171]
Caused by: org.elasticsearch.transport.TransportException: TransportService is closed stopped can't send request
    at org.elasticsearch.transport.TransportService.sendRequestInternal(TransportService.java:626) ~[elasticsearch-6.8.0.jar:6.8.0]
    ... 25 more
[2019-06-27T13:27:49,639][DEBUG][o.e.a.a.i.s.TransportIndicesStatsAction] [node_1] failed to execute [indices:monitor/stats] on node [ytQq3gaNQ3W0hDcyFf5X3Q]
org.elasticsearch.transport.SendRequestTransportException: [node_1][127.0.0.1:9300][indices:monitor/stats[n]]
    at org.elasticsearch.transport.TransportService.sendRequestInternal(TransportService.java:644) ~[elasticsearch-6.8.0.jar:6.8.0]
    at org.elasticsearch.xpack.security.transport.SecurityServerTransportInterceptor$1.sendRequest(SecurityServerTransportInterceptor.java:136) ~[?:?]
    at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:542) ~[elasticsearch-6.8.0.jar:6.8.0]
    at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:517) ~[elasticsearch-6.8.0.jar:6.8.0]
    at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$AsyncAction.sendNodeRequest(TransportBroadcastByNodeAction.java:324) ~[elasticsearch-6.8.0.jar:6.8.0]
    at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction$AsyncAction.start(TransportBroadcastByNodeAction.java:313) ~[elasticsearch-6.8.0.jar:6.8.0]
    at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction.doExecute(TransportBroadcastByNodeAction.java:236) ~[elasticsearch-6.8.0.jar:6.8.0]
    at org.elasticsearch.action.support.broadcast.node.TransportBroadcastByNodeAction.doExecute(TransportBroadcastByNodeAction.java:78) ~[elasticsearch-6.8.0.jar:6.8.0]
    at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:167) ~[elasticsearch-6.8.0.jar:6.8.0]
    at org.elasticsearch.xpack.security.action.filter.SecurityActionFilter.apply(SecurityActionFilter.java:121) ~[?:?]
    at org.elasticsearch.action.support.TransportAction$RequestFilterChain.proceed(TransportAction.java:165) ~[elasticsearch-6.8.0.jar:6.8.0]
    at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:139) ~[elasticsearch-6.8.0.jar:6.8.0]
    at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:81) ~[elasticsearch-6.8.0.jar:6.8.0]
    at org.elasticsearch.client.node.NodeClient.executeLocally(NodeClient.java:87) ~[elasticsearch-6.8.0.jar:6.8.0]
    at org.elasticsearch.client.node.NodeClient.doExecute(NodeClient.java:76) ~[elasticsearch-6.8.0.jar:6.8.0]
    at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:403) ~[elasticsearch-6.8.0.jar:6.8.0]
    at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.execute(AbstractClient.java:1269) ~[elasticsearch-6.8.0.jar:6.8.0]
    at org.elasticsearch.client.support.AbstractClient$IndicesAdmin.stats(AbstractClient.java:1591) ~[elasticsearch-6.8.0.jar:6.8.0]
    at org.elasticsearch.cluster.InternalClusterInfoService.updateIndicesStats(InternalClusterInfoService.java:266) ~[elasticsearch-6.8.0.jar:6.8.0]
    at org.elasticsearch.cluster.InternalClusterInfoService.refresh(InternalClusterInfoService.java:317) ~[elasticsearch-6.8.0.jar:6.8.0]
    at org.elasticsearch.cluster.InternalClusterInfoService.maybeRefresh(InternalClusterInfoService.java:273) ~[elasticsearch-6.8.0.jar:6.8.0]
    at org.elasticsearch.cluster.InternalClusterInfoService.access$200(InternalClusterInfoService.java:65) ~[elasticsearch-6.8.0.jar:6.8.0]
    at org.elasticsearch.cluster.InternalClusterInfoService$SubmitReschedulingClusterInfoUpdatedJob.lambda$run$0(InternalClusterInfoService.java:220) ~[elasticsearch-6.8.0.jar:6.8.0]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:681) ~[elasticsearch-6.8.0.jar:6.8.0]
    at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [?:1.8.0_171]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [?:1.8.0_171]
    at java.lang.Thread.run(Unknown Source) [?:1.8.0_171]
Caused by: org.elasticsearch.transport.TransportException: TransportService is closed stopped can't send request
    at org.elasticsearch.transport.TransportService.sendRequestInternal(TransportService.java:626) ~[elasticsearch-6.8.0.jar:6.8.0]
    ... 26 more
[2019-06-27T13:27:51,639][INFO ][o.e.n.Node               ] [node_1] stopped
[2019-06-27T13:27:51,639][INFO ][o.e.n.Node               ] [node_1] closing ...
[2019-06-27T13:27:51,685][INFO ][o.e.n.Node               ] [node_1] closed
[2019-06-27T13:29:23,586][WARN ][o.e.c.l.LogConfigurator  ] [node_1] Some logging configurations have %marker but don't have %node_name. We will automatically add %node_name to the pattern to ease the migration for users who customize log4j2.properties but will stop this behavior in 7.0. You should manually replace `%node_name` with `[%node_name]%marker ` in these locations:
  F:\ELK SETUP\elasticsearch-6.6.2\config\log4j2.properties
[2019-06-27T13:29:25,055][INFO ][o.e.e.NodeEnvironment    ] [node_1] using [1] data paths, mounts [[Disk2B (F:)]], net usable_space [113.2gb], net total_space [390.6gb], types [NTFS]
[2019-06-27T13:29:25,055][INFO ][o.e.e.NodeEnvironment    ] [node_1] heap size [990.7mb], compressed ordinary object pointers [true]
[2019-06-27T13:29:43,275][INFO ][o.e.n.Node               ] [node_1] node name [node_1], node ID [ytQq3gaNQ3W0hDcyFf5X3Q]
[2019-06-27T13:29:43,275][INFO ][o.e.n.Node               ] [node_1] version[6.8.0], pid[904], build[default/zip/65b6179/2019-05-15T20:06:13.172855Z], OS[Windows 10/10.0/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_171/25.171-b11]
[2019-06-27T13:29:43,275][INFO ][o.e.n.Node               ] [node_1] JVM arguments [-Xms1g, -Xmx1g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Djava.io.tmpdir=C:\Users\ADMINI~1\AppData\Local\Temp\elasticsearch, -XX:+HeapDumpOnOutOfMemoryError, -XX:HeapDumpPath=data, -XX:ErrorFile=logs/hs_err_pid%p.log, -XX:+PrintGCDetails, -XX:+PrintGCDateStamps, -XX:+PrintTenuringDistribution, -XX:+PrintGCApplicationStoppedTime, -Xloggc:logs/gc.log, -XX:+UseGCLogFileRotation, -XX:NumberOfGCLogFiles=32, -XX:GCLogFileSize=64m, -Delasticsearch, -Des.path.home=F:\ELK SETUP\elasticsearch-6.6.2, -Des.path.conf=F:\ELK SETUP\elasticsearch-6.6.2\config, -Des.distribution.flavor=default, -Des.distribution.type=zip, exit, abort, -Xms1024m, -Xmx1024m, -Xss1024k]
[2019-06-27T13:29:49,120][INFO ][o.e.p.PluginsService     ] [node_1] loaded module [aggs-matrix-stats]
[2019-06-27T13:29:49,120][INFO ][o.e.p.PluginsService     ] [node_1] loaded module [analysis-common]
[2019-06-27T13:29:49,120][INFO ][o.e.p.PluginsService     ] [node_1] loaded module [ingest-common]
[2019-06-27T13:29:49,120][INFO ][o.e.p.PluginsService     ] [node_1] loaded module [ingest-geoip]
[2019-06-27T13:29:49,120][INFO ][o.e.p.PluginsService     ] [node_1] loaded module [ingest-user-agent]
[2019-06-27T13:29:49,120][INFO ][o.e.p.PluginsService     ] [node_1] loaded module [lang-expression]
[2019-06-27T13:29:49,120][INFO ][o.e.p.PluginsService     ] [node_1] loaded module [lang-mustache]
[2019-06-27T13:29:49,120][INFO ][o.e.p.PluginsService     ] [node_1] loaded module [lang-painless]
[2019-06-27T13:29:49,120][INFO ][o.e.p.PluginsService     ] [node_1] loaded module [mapper-extras]
[2019-06-27T13:29:49,120][INFO ][o.e.p.PluginsService     ] [node_1] loaded module [parent-join]
[2019-06-27T13:29:49,120][INFO ][o.e.p.PluginsService     ] [node_1] loaded module [percolator]
[2019-06-27T13:29:49,120][INFO ][o.e.p.PluginsService     ] [node_1] loaded module [rank-eval]
[2019-06-27T13:29:49,120][INFO ][o.e.p.PluginsService     ] [node_1] loaded module [reindex]
[2019-06-27T13:29:49,120][INFO ][o.e.p.PluginsService     ] [node_1] loaded module [repository-url]
[2019-06-27T13:29:49,120][INFO ][o.e.p.PluginsService     ] [node_1] loaded module [transport-netty4]
[2019-06-27T13:29:49,120][INFO ][o.e.p.PluginsService     ] [node_1] loaded module [tribe]
[2019-06-27T13:29:49,120][INFO ][o.e.p.PluginsService     ] [node_1] loaded module [x-pack-ccr]
[2019-06-27T13:29:49,120][INFO ][o.e.p.PluginsService     ] [node_1] loaded module [x-pack-core]
[2019-06-27T13:29:49,120][INFO ][o.e.p.PluginsService     ] [node_1] loaded module [x-pack-deprecation]
[2019-06-27T13:29:49,120][INFO ][o.e.p.PluginsService     ] [node_1] loaded module [x-pack-graph]
[2019-06-27T13:29:49,120][INFO ][o.e.p.PluginsService     ] [node_1] loaded module [x-pack-ilm]
[2019-06-27T13:29:49,120][INFO ][o.e.p.PluginsService     ] [node_1] loaded module [x-pack-logstash]
[2019-06-27T13:29:49,120][INFO ][o.e.p.PluginsService     ] [node_1] loaded module [x-pack-ml]
[2019-06-27T13:29:49,120][INFO ][o.e.p.PluginsService     ] [node_1] loaded module [x-pack-monitoring]
[2019-06-27T13:29:49,120][INFO ][o.e.p.PluginsService     ] [node_1] loaded module [x-pack-rollup]
[2019-06-27T13:29:49,120][INFO ][o.e.p.PluginsService     ] [node_1] loaded module [x-pack-security]
[2019-06-27T13:29:49,120][INFO ][o.e.p.PluginsService     ] [node_1] loaded module [x-pack-sql]
[2019-06-27T13:29:49,120][INFO ][o.e.p.PluginsService     ] [node_1] loaded module [x-pack-upgrade]
[2019-06-27T13:29:49,120][INFO ][o.e.p.PluginsService     ] [node_1] loaded module [x-pack-watcher]
[2019-06-27T13:29:49,120][INFO ][o.e.p.PluginsService     ] [node_1] no plugins loaded
[2019-06-27T13:29:59,043][INFO ][o.e.x.s.a.s.FileRolesStore] [node_1] parsed [0] roles from file [F:\ELK SETUP\elasticsearch-6.8.0\config\roles.yml]
[2019-06-27T13:30:00,371][INFO ][o.e.x.m.p.l.CppLogMessageHandler] [node_1] [controller/7704] [Main.cc@109] controller (64 bit): Version 6.8.0 (Build e6cf25e2acc5ec) Copyright (c) 2019 Elasticsearch BV
[2019-06-27T13:30:00,793][DEBUG][o.e.a.ActionModule       ] [node_1] Using REST wrapper from plugin org.elasticsearch.xpack.security.Security
[2019-06-27T13:30:02,793][INFO ][o.e.d.DiscoveryModule    ] [node_1] using discovery type [zen] and host providers [settings]
[2019-06-27T13:30:03,465][INFO ][o.e.n.Node               ] [node_1] initialized
[2019-06-27T13:30:03,465][INFO ][o.e.n.Node               ] [node_1] starting ...
[2019-06-27T13:30:03,934][INFO ][o.e.t.TransportService   ] [node_1] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}, {[::1]:9300}
[2019-06-27T13:30:07,215][INFO ][o.e.c.s.MasterService    ] [node_1] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {node_1}{ytQq3gaNQ3W0hDcyFf5X3Q}{FoU9uYm0To6qDRMcJGE1Iw}{127.0.0.1}{127.0.0.1:9300}{ml.machine_memory=25627275264, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}
[2019-06-27T13:30:07,215][INFO ][o.e.c.s.ClusterApplierService] [node_1] new_master {node_1}{ytQq3gaNQ3W0hDcyFf5X3Q}{FoU9uYm0To6qDRMcJGE1Iw}{127.0.0.1}{127.0.0.1:9300}{ml.machine_memory=25627275264, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}, reason: apply cluster state (from master [master {node_1}{ytQq3gaNQ3W0hDcyFf5X3Q}{FoU9uYm0To6qDRMcJGE1Iw}{127.0.0.1}{127.0.0.1:9300}{ml.machine_memory=25627275264, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
[2019-06-27T13:30:07,544][INFO ][o.e.h.n.Netty4HttpServerTransport] [node_1] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}, {[::1]:9200}
[2019-06-27T13:30:07,544][INFO ][o.e.n.Node               ] [node_1] started
[2019-06-27T13:30:16,482][WARN ][o.e.x.s.a.s.m.NativeRoleMappingStore] [node_1] Failed to clear cache for realms [[]]
[2019-06-27T13:30:16,560][INFO ][o.e.l.LicenseService     ] [node_1] license [3b5dfbce-916d-48ff-b9b3-95bdde2e2a89] mode [basic] - valid
[2019-06-27T13:30:16,576][INFO ][o.e.g.GatewayService     ] [node_1] recovered [94] indices into cluster_state
[2019-06-27T13:33:22,501][INFO ][o.e.c.r.a.AllocationService] [node_1] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.kibana_1][0]] ...]).
[2019-06-27T13:35:23,504][INFO ][o.e.c.m.MetaDataIndexTemplateService] [node_1] adding template [.management-beats] for index patterns [.management-beats]
[2019-06-27T13:35:25,226][INFO ][o.e.c.m.MetaDataIndexTemplateService] [node_1] adding template [kibana_index_template:.kibana] for index patterns [.kibana]
[2019-06-27T13:42:24,494][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][736] overhead, spent [863ms] collecting in the last [1s]
[2019-06-27T13:43:02,263][INFO ][o.e.c.m.MetaDataCreateIndexService] [node_1] [d_march_2019] creating index, cause [api], templates [], shards [5]/[1], mappings [_doc]
[2019-06-27T13:43:03,367][INFO ][o.e.c.m.MetaDataCreateIndexService] [node_1] [d_march_2019_folder] creating index, cause [api], templates [], shards [5]/[1], mappings [_doc]
[2019-06-27T13:47:39,581][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][1050] overhead, spent [945ms] collecting in the last [1.1s]
[2019-06-27T13:54:10,636][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][1440] overhead, spent [866ms] collecting in the last [1.6s]
[2019-06-27T14:39:43,054][INFO ][o.e.c.m.MetaDataMappingService] [node_1] [d_march_2019/Q0nnCtW9RZKc__ucTUfnNA] update_mapping [_doc]
[2019-06-27T14:39:43,132][INFO ][o.e.c.m.MetaDataMappingService] [node_1] [d_march_2019/Q0nnCtW9RZKc__ucTUfnNA] update_mapping [_doc]
[2019-06-27T14:39:43,179][INFO ][o.e.c.m.MetaDataMappingService] [node_1] [d_march_2019/Q0nnCtW9RZKc__ucTUfnNA] update_mapping [_doc]
[2019-06-27T14:39:48,164][INFO ][o.e.c.m.MetaDataMappingService] [node_1] [d_march_2019/Q0nnCtW9RZKc__ucTUfnNA] update_mapping [_doc]
[2019-06-27T14:41:34,752][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][4264] overhead, spent [1s] collecting in the last [1.4s]
[2019-06-27T14:41:43,315][INFO ][o.e.c.m.MetaDataMappingService] [node_1] [d_march_2019/Q0nnCtW9RZKc__ucTUfnNA] update_mapping [_doc]
[2019-06-27T14:41:44,893][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][4274] overhead, spent [672ms] collecting in the last [1s]
[2019-06-27T14:41:45,456][INFO ][o.e.c.m.MetaDataMappingService] [node_1] [d_march_2019/Q0nnCtW9RZKc__ucTUfnNA] update_mapping [_doc]
[2019-06-27T14:41:45,456][INFO ][o.e.c.m.MetaDataMappingService] [node_1] [d_march_2019/Q0nnCtW9RZKc__ucTUfnNA] update_mapping [_doc]
[2019-06-27T14:41:47,128][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][4276] overhead, spent [849ms] collecting in the last [1.2s]
[2019-06-27T14:41:50,832][INFO ][o.e.c.m.MetaDataMappingService] [node_1] [d_march_2019/Q0nnCtW9RZKc__ucTUfnNA] update_mapping [_doc]
[2019-06-27T14:42:02,177][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][4291] overhead, spent [823ms] collecting in the last [1s]
[2019-06-27T14:42:22,272][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][4311] overhead, spent [846ms] collecting in the last [1s]
[2019-06-27T14:42:37,368][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][4326] overhead, spent [806ms] collecting in the last [1s]
[2019-06-27T14:43:15,763][INFO ][o.e.c.m.MetaDataMappingService] [node_1] [d_march_2019/Q0nnCtW9RZKc__ucTUfnNA] update_mapping [_doc]
[2019-06-27T14:43:15,794][INFO ][o.e.c.m.MetaDataMappingService] [node_1] [d_march_2019/Q0nnCtW9RZKc__ucTUfnNA] update_mapping [_doc]
[2019-06-27T14:43:32,124][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][4380] overhead, spent [690ms] collecting in the last [1.6s]
[2019-06-27T14:43:35,702][INFO ][o.e.c.m.MetaDataMappingService] [node_1] [d_march_2019/Q0nnCtW9RZKc__ucTUfnNA] update_mapping [_doc]
[2019-06-27T14:43:37,140][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][4385] overhead, spent [668ms] collecting in the last [1s]
[2019-06-27T14:43:46,844][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][4394] overhead, spent [851ms] collecting in the last [1.6s]
[2019-06-27T14:43:59,924][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][4407] overhead, spent [662ms] collecting in the last [1s]
[2019-06-27T14:44:11,456][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][4418] overhead, spent [670ms] collecting in the last [1.5s]
[2019-06-27T14:44:21,535][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][4428] overhead, spent [688ms] collecting in the last [1s]
[2019-06-27T14:44:22,645][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][4429] overhead, spent [913ms] collecting in the last [1.1s]
[2019-06-27T14:44:31,693][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][4438] overhead, spent [668ms] collecting in the last [1s]
[2019-06-27T14:44:36,740][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][4443] overhead, spent [665ms] collecting in the last [1s]
[2019-06-27T14:44:47,804][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][4454] overhead, spent [788ms] collecting in the last [1s]
[2019-06-27T14:44:57,461][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][4463] overhead, spent [668ms] collecting in the last [1.6s]
[2019-06-27T14:46:06,156][INFO ][o.e.c.m.MetaDataMappingService] [node_1] [d_march_2019/Q0nnCtW9RZKc__ucTUfnNA] update_mapping [_doc]
[2019-06-27T14:47:12,116][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][4597] overhead, spent [818ms] collecting in the last [1.3s]
[2019-06-27T14:48:56,858][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][4701] overhead, spent [813ms] collecting in the last [1.4s]
[2019-06-27T14:49:03,015][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][4707] overhead, spent [1s] collecting in the last [1.1s]
[2019-06-27T14:49:13,048][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][4717] overhead, spent [673ms] collecting in the last [1s]
[2019-06-27T14:49:17,658][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][4721] overhead, spent [674ms] collecting in the last [1.5s]
[2019-06-27T14:49:27,690][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][4731] overhead, spent [737ms] collecting in the last [1s]
[2019-06-27T14:49:42,723][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][4746] overhead, spent [728ms] collecting in the last [1s]
[2019-06-27T14:49:47,426][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][4750] overhead, spent [706ms] collecting in the last [1.6s]
[2019-06-27T14:49:56,912][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][4759] overhead, spent [674ms] collecting in the last [1.4s]
[2019-06-27T14:50:43,026][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][4805] overhead, spent [675ms] collecting in the last [1s]
[2019-06-27T14:51:06,357][INFO ][o.e.c.m.MetaDataMappingService] [node_1] [d_march_2019/Q0nnCtW9RZKc__ucTUfnNA] update_mapping [_doc]
[2019-06-27T14:51:18,124][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][4840] overhead, spent [699ms] collecting in the last [1s]
[2019-06-27T14:51:32,707][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][4854] overhead, spent [686ms] collecting in the last [1.4s]
[2019-06-27T14:51:42,724][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][4864] overhead, spent [754ms] collecting in the last [1s]
[2019-06-27T14:52:02,836][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][4884] overhead, spent [816ms] collecting in the last [1s]
[2019-06-27T14:52:06,445][INFO ][o.e.c.m.MetaDataMappingService] [node_1] [d_march_2019/Q0nnCtW9RZKc__ucTUfnNA] update_mapping [_doc]
[2019-06-27T14:52:12,477][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][4893] overhead, spent [693ms] collecting in the last [1.6s]
[2019-06-27T14:52:22,510][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][4903] overhead, spent [669ms] collecting in the last [1s]
[2019-06-27T14:53:13,312][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][4953] overhead, spent [821ms] collecting in the last [1.6s]
[2019-06-27T14:53:58,379][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][4998] overhead, spent [802ms] collecting in the last [1s]
[2019-06-27T14:54:08,412][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][5008] overhead, spent [780ms] collecting in the last [1s]
[2019-06-27T14:54:22,491][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][5022] overhead, spent [700ms] collecting in the last [1s]
[2019-06-27T14:54:33,555][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][5033] overhead, spent [769ms] collecting in the last [1s]
[2019-06-27T14:55:23,623][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][5083] overhead, spent [687ms] collecting in the last [1s]
[2019-06-27T14:55:28,170][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][5087] overhead, spent [673ms] collecting in the last [1.5s]
[2019-06-27T14:55:33,186][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][5092] overhead, spent [807ms] collecting in the last [1s]
[2019-06-27T14:55:43,266][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][5102] overhead, spent [668ms] collecting in the last [1s]
[2019-06-27T14:55:54,314][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][5113] overhead, spent [817ms] collecting in the last [1s]
[2019-06-27T14:56:03,330][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][5122] overhead, spent [694ms] collecting in the last [1s]
[2019-06-27T14:56:23,379][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][5142] overhead, spent [684ms] collecting in the last [1s]
[2019-06-27T14:56:33,162][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][5151] overhead, spent [808ms] collecting in the last [1.7s]
[2019-06-27T14:56:47,601][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][5165] overhead, spent [682ms] collecting in the last [1.3s]
[2019-06-27T14:56:53,617][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][5171] overhead, spent [881ms] collecting in the last [1s]
[2019-06-27T14:57:03,649][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][5181] overhead, spent [721ms] collecting in the last [1s]
[2019-06-27T14:57:54,965][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][5232] overhead, spent [667ms] collecting in the last [1s]
[2019-06-27T15:04:32,710][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][5628] overhead, spent [810ms] collecting in the last [1.6s]
[2019-06-27T15:11:53,575][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][6066] overhead, spent [700ms] collecting in the last [1.4s]
[2019-06-27T15:13:56,076][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][6187] overhead, spent [1s] collecting in the last [1.4s]
[2019-06-27T15:14:14,859][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][6205] overhead, spent [901ms] collecting in the last [1.6s]
[2019-06-27T15:14:21,875][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][6212] overhead, spent [755ms] collecting in the last [1s]
[2019-06-27T15:14:22,891][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][6213] overhead, spent [898ms] collecting in the last [1s]
[2019-06-27T15:14:24,126][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][6214] overhead, spent [826ms] collecting in the last [1.2s]
[2019-06-27T15:16:00,646][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][6310] overhead, spent [850ms] collecting in the last [1.1s]
[2019-06-27T15:16:01,662][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][6311] overhead, spent [710ms] collecting in the last [1s]
[2019-06-27T15:16:02,709][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][6312] overhead, spent [812ms] collecting in the last [1s]
[2019-06-27T15:16:05,741][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][6315] overhead, spent [842ms] collecting in the last [1s]
[2019-06-27T15:16:09,757][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][6319] overhead, spent [678ms] collecting in the last [1s]
[2019-06-27T15:17:15,920][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][6385] overhead, spent [690ms] collecting in the last [1s]
[2019-06-27T15:18:31,011][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][6460] overhead, spent [860ms] collecting in the last [1s]
[2019-06-27T15:19:14,382][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][6503] overhead, spent [843ms] collecting in the last [1.2s]
[2019-06-27T15:19:15,445][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][6504] overhead, spent [927ms] collecting in the last [1s]
[2019-06-27T15:19:25,806][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][6514] overhead, spent [868ms] collecting in the last [1.3s]
[2019-06-27T15:19:44,858][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][6533] overhead, spent [798ms] collecting in the last [1s]
[2019-06-27T15:19:54,688][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][6542] overhead, spent [857ms] collecting in the last [1.8s]
[2019-06-27T15:20:09,706][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][6557] overhead, spent [773ms] collecting in the last [1s]
[2019-06-27T15:20:59,365][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][6606] overhead, spent [763ms] collecting in the last [1.6s]
[2019-06-27T15:21:00,375][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][6607] overhead, spent [925ms] collecting in the last [1s]
[2019-06-27T15:21:31,855][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][6638] overhead, spent [726ms] collecting in the last [1.4s]
[2019-06-27T15:21:40,462][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][6646] overhead, spent [1.4s] collecting in the last [1.6s]
[2019-06-27T15:21:41,466][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][6647] overhead, spent [791ms] collecting in the last [1s]
[2019-06-27T15:21:45,473][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][6651] overhead, spent [704ms] collecting in the last [1s]
[2019-06-27T15:22:06,287][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][6671] overhead, spent [1.6s] collecting in the last [1.8s]
[2019-06-27T15:23:15,336][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][6740] overhead, spent [848ms] collecting in the last [1s]
[2019-06-27T15:23:40,558][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][6765] overhead, spent [685ms] collecting in the last [1.2s]
[2019-06-27T15:26:16,743][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][6921] overhead, spent [718ms] collecting in the last [1s]
[2019-06-27T15:27:04,771][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][6969] overhead, spent [852ms] collecting in the last [1s]
[2019-06-27T15:27:39,471][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][7003] overhead, spent [693ms] collecting in the last [1.6s]
[2019-06-27T15:27:55,507][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][7019] overhead, spent [948ms] collecting in the last [1s]
[2019-06-27T15:28:00,510][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][7024] overhead, spent [705ms] collecting in the last [1s]
[2019-06-27T15:28:05,233][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][7028] overhead, spent [751ms] collecting in the last [1.7s]
[2019-06-27T15:28:30,278][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][7053] overhead, spent [722ms] collecting in the last [1s]
[2019-06-27T15:29:17,548][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][7100] overhead, spent [696ms] collecting in the last [1.2s]
[2019-06-27T15:29:55,570][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][7138] overhead, spent [847ms] collecting in the last [1s]
[2019-06-27T15:29:56,571][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][7139] overhead, spent [866ms] collecting in the last [1s]
[2019-06-27T15:30:25,588][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][7168] overhead, spent [703ms] collecting in the last [1s]
[2019-06-27T15:30:35,142][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][7177] overhead, spent [783ms] collecting in the last [1.5s]
[2019-06-27T15:30:54,428][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][7196] overhead, spent [750ms] collecting in the last [1.2s]
[2019-06-27T15:31:00,561][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][7202] overhead, spent [1s] collecting in the last [1.1s]
[2019-06-27T15:33:18,632][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][7340] overhead, spent [853ms] collecting in the last [1s]
[2019-06-27T15:34:55,370][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][7436] overhead, spent [779ms] collecting in the last [1.6s]
[2019-06-27T15:35:15,386][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][7456] overhead, spent [742ms] collecting in the last [1s]
[2019-06-27T15:35:19,914][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][7460] overhead, spent [734ms] collecting in the last [1.5s]
[2019-06-27T15:35:20,920][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][7461] overhead, spent [955ms] collecting in the last [1s]
[2019-06-27T15:35:34,930][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][7475] overhead, spent [775ms] collecting in the last [1s]
[2019-06-27T15:35:44,556][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][7484] overhead, spent [698ms] collecting in the last [1.6s]
[2019-06-27T15:35:55,563][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][7495] overhead, spent [719ms] collecting in the last [1s]
[2019-06-27T15:37:20,349][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][7579] overhead, spent [777ms] collecting in the last [1.7s]
[2019-06-27T15:37:25,355][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][7584] overhead, spent [712ms] collecting in the last [1s]
[2019-06-27T15:37:34,495][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][7593] overhead, spent [712ms] collecting in the last [1.1s]
[2019-06-27T15:37:55,526][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][7614] overhead, spent [708ms] collecting in the last [1s]
[2019-06-27T15:38:35,392][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][7653] overhead, spent [968ms] collecting in the last [1.8s]
[2019-06-27T15:38:41,226][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][7658] overhead, spent [1.5s] collecting in the last [1.8s]
[2019-06-27T15:39:35,266][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][7712] overhead, spent [886ms] collecting in the last [1s]
[2019-06-27T15:39:36,266][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][7713] overhead, spent [707ms] collecting in the last [1s]
[2019-06-27T15:40:06,055][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][7742] overhead, spent [1.6s] collecting in the last [1.7s]
[2019-06-27T15:40:20,846][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][7756] overhead, spent [1.5s] collecting in the last [1.7s]
[2019-06-27T15:40:25,555][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][7760] overhead, spent [1.5s] collecting in the last [1.7s]
[2019-06-27T15:40:30,560][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][7765] overhead, spent [689ms] collecting in the last [1s]
[2019-06-27T15:40:40,052][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][7774] overhead, spent [688ms] collecting in the last [1.4s]
[2019-06-27T15:41:00,067][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][7794] overhead, spent [718ms] collecting in the last [1s]
[2019-06-27T15:41:04,550][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][7798] overhead, spent [683ms] collecting in the last [1.4s]
[2019-06-27T15:41:20,643][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][7814] overhead, spent [956ms] collecting in the last [1s]
[2019-06-27T15:41:26,390][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][7819] overhead, spent [1.5s] collecting in the last [1.7s]
[2019-06-27T15:41:49,591][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][7842] overhead, spent [781ms] collecting in the last [1.1s]
[2019-06-27T15:42:25,620][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][7878] overhead, spent [867ms] collecting in the last [1s]
[2019-06-27T15:42:39,629][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][7892] overhead, spent [682ms] collecting in the last [1s]
[2019-06-27T15:42:40,630][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][7893] overhead, spent [876ms] collecting in the last [1s]
[2019-06-27T15:42:50,264][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][7902] overhead, spent [681ms] collecting in the last [1.6s]
[2019-06-27T15:42:55,267][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][7907] overhead, spent [806ms] collecting in the last [1s]
[2019-06-27T15:43:20,100][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][7931] overhead, spent [908ms] collecting in the last [1.8s]
[2019-06-27T15:43:55,152][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][7966] overhead, spent [734ms] collecting in the last [1s]
[2019-06-27T15:44:19,782][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][7990] overhead, spent [690ms] collecting in the last [1.6s]
[2019-06-27T15:44:29,791][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][8000] overhead, spent [685ms] collecting in the last [1s]
[2019-06-27T15:44:35,797][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][8006] overhead, spent [692ms] collecting in the last [1s]
[2019-06-27T15:44:45,268][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][8015] overhead, spent [694ms] collecting in the last [1.4s]
[2019-06-27T15:44:46,269][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][8016] overhead, spent [752ms] collecting in the last [1s]
[2019-06-27T15:45:10,298][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][8040] overhead, spent [688ms] collecting in the last [1s]
[2019-06-27T15:45:59,607][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][8089] overhead, spent [713ms] collecting in the last [1.2s]
[2019-06-27T15:48:51,212][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][8260] overhead, spent [833ms] collecting in the last [1.4s]
[2019-06-27T15:49:10,934][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][8279] overhead, spent [1.4s] collecting in the last [1.7s]
[2019-06-27T15:49:14,936][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][8283] overhead, spent [714ms] collecting in the last [1s]
[2019-06-27T15:49:15,936][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][8284] overhead, spent [886ms] collecting in the last [1s]
[2019-06-27T15:49:20,634][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][8288] overhead, spent [700ms] collecting in the last [1.6s]
[2019-06-27T15:50:05,662][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][8333] overhead, spent [708ms] collecting in the last [1s]
[2019-06-27T15:50:25,211][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][8352] overhead, spent [713ms] collecting in the last [1.5s]
[2019-06-27T15:51:05,236][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][8392] overhead, spent [836ms] collecting in the last [1s]
[2019-06-27T15:51:15,242][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][8402] overhead, spent [753ms] collecting in the last [1s]
[2019-06-27T15:51:40,255][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][8427] overhead, spent [733ms] collecting in the last [1s]
[2019-06-27T15:53:00,079][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][8506] overhead, spent [886ms] collecting in the last [1.7s]
[2019-06-27T15:53:05,085][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][8511] overhead, spent [703ms] collecting in the last [1s]
[2019-06-27T15:55:52,308][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][8678] overhead, spent [924ms] collecting in the last [1s]
[2019-06-27T15:56:45,349][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][8731] overhead, spent [809ms] collecting in the last [1s]
[2019-06-27T15:56:55,029][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][8740] overhead, spent [750ms] collecting in the last [1.6s]
[2019-06-27T15:57:05,034][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][8750] overhead, spent [711ms] collecting in the last [1s]
[2019-06-27T15:57:10,922][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][8755] overhead, spent [923ms] collecting in the last [1.8s]
[2019-06-27T15:58:40,055][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][8844] overhead, spent [738ms] collecting in the last [1s]
[2019-06-27T16:00:40,712][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][8964] overhead, spent [798ms] collecting in the last [1.5s]
[2019-06-27T16:00:41,762][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][8965] overhead, spent [912ms] collecting in the last [1s]
[2019-06-27T16:00:45,767][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][8969] overhead, spent [743ms] collecting in the last [1s]
[2019-06-27T16:00:50,771][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][8974] overhead, spent [744ms] collecting in the last [1s]
[2019-06-27T16:01:25,472][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][9008] overhead, spent [761ms] collecting in the last [1.6s]
[2019-06-27T16:02:45,552][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][9088] overhead, spent [700ms] collecting in the last [1s]
[2019-06-27T16:03:10,336][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][9112] overhead, spent [883ms] collecting in the last [1.7s]
[2019-06-27T16:04:28,507][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][9190] overhead, spent [714ms] collecting in the last [1s]
[2019-06-27T16:05:05,571][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][9227] overhead, spent [722ms] collecting in the last [1s]
[2019-06-27T16:05:15,222][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][9236] overhead, spent [767ms] collecting in the last [1.6s]
[2019-06-27T16:05:35,256][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][9256] overhead, spent [796ms] collecting in the last [1s]
[2019-06-27T16:05:36,500][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][9257] overhead, spent [721ms] collecting in the last [1.2s]
[2019-06-27T16:05:40,505][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][9261] overhead, spent [824ms] collecting in the last [1s]
[2019-06-27T16:06:05,539][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][9286] overhead, spent [836ms] collecting in the last [1s]
[2019-06-27T16:06:20,554][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][9301] overhead, spent [705ms] collecting in the last [1s]
[2019-06-27T16:06:30,102][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][9310] overhead, spent [781ms] collecting in the last [1.5s]
[2019-06-27T16:06:35,107][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][9315] overhead, spent [808ms] collecting in the last [1s]
[2019-06-27T16:06:36,743][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][9316] overhead, spent [700ms] collecting in the last [1.6s]
[2019-06-27T16:07:55,822][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][9395] overhead, spent [726ms] collecting in the last [1s]
[2019-06-27T16:09:15,560][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][9474] overhead, spent [727ms] collecting in the last [1.6s]
[2019-06-27T16:09:25,566][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][9484] overhead, spent [701ms] collecting in the last [1s]
[2019-06-27T16:09:40,590][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][9499] overhead, spent [863ms] collecting in the last [1s]
[2019-06-27T16:09:52,838][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][9511] overhead, spent [824ms] collecting in the last [1.2s]
[2019-06-27T16:10:10,850][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][9529] overhead, spent [798ms] collecting in the last [1s]
[2019-06-27T16:11:40,920][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][9619] overhead, spent [700ms] collecting in the last [1s]
[2019-06-27T16:12:20,952][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][9659] overhead, spent [692ms] collecting in the last [1s]
[2019-06-27T16:12:30,958][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][9669] overhead, spent [688ms] collecting in the last [1s]
[2019-06-27T16:12:50,526][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][9688] overhead, spent [692ms] collecting in the last [1.5s]
[2019-06-27T16:13:15,546][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][9713] overhead, spent [906ms] collecting in the last [1s]
[2019-06-27T16:14:53,784][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][9811] overhead, spent [762ms] collecting in the last [1.1s]
[2019-06-27T16:17:45,888][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][9983] overhead, spent [745ms] collecting in the last [1s]
[2019-06-27T16:18:10,098][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][10007] overhead, spent [711ms] collecting in the last [1.1s]
[2019-06-27T16:18:40,118][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][10037] overhead, spent [692ms] collecting in the last [1s]
[2019-06-27T16:18:41,119][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][10038] overhead, spent [787ms] collecting in the last [1s]
[2019-06-27T16:19:05,135][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][10062] overhead, spent [699ms] collecting in the last [1s]
[2019-06-27T16:19:16,164][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][10073] overhead, spent [979ms] collecting in the last [1s]
[2019-06-27T16:19:31,175][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][10088] overhead, spent [693ms] collecting in the last [1s]
[2019-06-27T16:19:32,176][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][10089] overhead, spent [800ms] collecting in the last [1s]
[2019-06-27T16:20:56,039][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][10172] overhead, spent [857ms] collecting in the last [1.7s]
[2019-06-27T16:21:06,044][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][10182] overhead, spent [727ms] collecting in the last [1s]
[2019-06-27T16:21:16,052][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][10192] overhead, spent [696ms] collecting in the last [1s]
[2019-06-27T16:22:20,700][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][10256] overhead, spent [699ms] collecting in the last [1.5s]
[2019-06-27T16:22:41,481][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][10276] overhead, spent [1.4s] collecting in the last [1.7s]
[2019-06-27T16:23:04,997][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][10299] overhead, spent [701ms] collecting in the last [1.5s]
[2019-06-27T16:23:06,999][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][10301] overhead, spent [692ms] collecting in the last [1s]
[2019-06-27T16:24:10,601][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][10364] overhead, spent [706ms] collecting in the last [1.5s]
[2019-06-27T16:25:54,090][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][10467] overhead, spent [754ms] collecting in the last [1.3s]
[2019-06-27T16:32:42,890][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][10874] overhead, spent [775ms] collecting in the last [1.5s]
[2019-06-27T16:40:15,454][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][11323] overhead, spent [806ms] collecting in the last [1s]
[2019-06-27T16:46:57,058][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][11721] overhead, spent [871ms] collecting in the last [1.4s]
[2019-06-27T16:53:52,798][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][12133] overhead, spent [687ms] collecting in the last [1.5s]
[2019-06-27T17:01:26,034][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][12582] overhead, spent [684ms] collecting in the last [1.5s]
[2019-06-27T17:08:26,986][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][12999] overhead, spent [688ms] collecting in the last [1s]
[2019-06-27T17:15:16,092][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][13405] overhead, spent [683ms] collecting in the last [1s]
[2019-06-27T17:20:19,641][INFO ][o.e.c.m.MetaDataMappingService] [node_1] [d_march_2019/Q0nnCtW9RZKc__ucTUfnNA] update_mapping [_doc]
[2019-06-27T17:20:20,438][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][13707] overhead, spent [719ms] collecting in the last [1.3s]
[2019-06-27T17:20:25,454][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][13712] overhead, spent [685ms] collecting in the last [1s]
[2019-06-27T17:20:55,113][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][13741] overhead, spent [677ms] collecting in the last [1.5s]
[2019-06-27T17:21:05,146][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][13751] overhead, spent [823ms] collecting in the last [1s]
[2019-06-27T17:21:15,193][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][13761] overhead, spent [701ms] collecting in the last [1s]
[2019-06-27T17:21:28,492][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][13774] overhead, spent [1s] collecting in the last [1.2s]
[2019-06-27T17:21:35,524][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][13781] overhead, spent [682ms] collecting in the last [1s]
[2019-06-27T17:22:29,358][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][13834] overhead, spent [850ms] collecting in the last [1.7s]
[2019-06-27T17:22:45,406][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][13850] overhead, spent [832ms] collecting in the last [1s]
[2019-06-27T17:22:50,438][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][13855] overhead, spent [831ms] collecting in the last [1s]
[2019-06-27T17:23:05,268][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][13869] overhead, spent [790ms] collecting in the last [1.7s]
[2019-06-27T17:23:06,284][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][13870] overhead, spent [901ms] collecting in the last [1s]
[2019-06-27T17:23:14,456][INFO ][o.e.c.m.MetaDataMappingService] [node_1] [d_march_2019/Q0nnCtW9RZKc__ucTUfnNA] update_mapping [_doc]
[2019-06-27T17:23:16,378][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][13880] overhead, spent [671ms] collecting in the last [1s]
[2019-06-27T17:23:29,974][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][13893] overhead, spent [721ms] collecting in the last [1.5s]
[2019-06-27T17:23:36,006][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][13899] overhead, spent [912ms] collecting in the last [1s]
[2019-06-27T17:23:50,070][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][13913] overhead, spent [699ms] collecting in the last [1s]
[2019-06-27T17:23:56,117][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][13919] overhead, spent [798ms] collecting in the last [1s]
[2019-06-27T17:24:05,149][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][13928] overhead, spent [672ms] collecting in the last [1s]
[2019-06-27T17:24:06,165][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][13929] overhead, spent [769ms] collecting in the last [1s]
[2019-06-27T17:24:15,822][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][13938] overhead, spent [693ms] collecting in the last [1.6s]
[2019-06-27T17:24:20,885][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][13943] overhead, spent [685ms] collecting in the last [1s]
[2019-06-27T17:24:25,480][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][13947] overhead, spent [744ms] collecting in the last [1.5s]
[2019-06-27T17:24:40,560][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][13962] overhead, spent [730ms] collecting in the last [1s]
[2019-06-27T17:24:50,589][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][13972] overhead, spent [721ms] collecting in the last [1s]
[2019-06-27T17:25:00,185][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][13981] overhead, spent [673ms] collecting in the last [1.5s]
[2019-06-27T17:25:11,233][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][13992] overhead, spent [895ms] collecting in the last [1s]
[2019-06-27T17:27:35,561][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][14135] overhead, spent [700ms] collecting in the last [1.2s]
[2019-06-27T17:28:49,491][INFO ][o.e.c.m.MetaDataMappingService] [node_1] [d_march_2019/Q0nnCtW9RZKc__ucTUfnNA] update_mapping [_doc]
[2019-06-27T17:28:55,195][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][14214] overhead, spent [684ms] collecting in the last [1.1s]
[2019-06-27T17:29:01,211][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][14220] overhead, spent [724ms] collecting in the last [1s]
[2019-06-27T17:29:11,259][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][14230] overhead, spent [702ms] collecting in the last [1s]
[2019-06-27T17:29:20,932][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][14239] overhead, spent [682ms] collecting in the last [1.6s]
[2019-06-27T17:29:25,964][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][14244] overhead, spent [654ms] collecting in the last [1s]
[2019-06-27T17:29:45,669][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][14263] overhead, spent [847ms] collecting in the last [1.6s]
[2019-06-27T17:29:51,701][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][14269] overhead, spent [800ms] collecting in the last [1s]
[2019-06-27T17:30:05,734][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][14283] overhead, spent [681ms] collecting in the last [1s]
[2019-06-27T17:30:16,376][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][14293] overhead, spent [683ms] collecting in the last [1.6s]
[2019-06-27T17:30:26,408][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][14303] overhead, spent [669ms] collecting in the last [1s]
[2019-06-27T17:32:21,342][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][14417] overhead, spent [692ms] collecting in the last [1s]
[2019-06-27T17:32:41,454][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][14437] overhead, spent [706ms] collecting in the last [1s]
[2019-06-27T17:32:46,142][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][14441] overhead, spent [729ms] collecting in the last [1.6s]
[2019-06-27T17:32:51,185][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][14446] overhead, spent [813ms] collecting in the last [1s]
[2019-06-27T17:33:50,096][INFO ][o.e.c.m.MetaDataMappingService] [node_1] [d_march_2019/Q0nnCtW9RZKc__ucTUfnNA] update_mapping [_doc]
[2019-06-27T17:34:01,409][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][14516] overhead, spent [810ms] collecting in the last [1s]
[2019-06-27T17:34:05,972][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][14520] overhead, spent [694ms] collecting in the last [1.5s]
[2019-06-27T17:34:41,117][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][14555] overhead, spent [1s] collecting in the last [1s]
[2019-06-27T17:34:46,149][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][14560] overhead, spent [815ms] collecting in the last [1s]
[2019-06-27T17:34:56,181][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][14570] overhead, spent [757ms] collecting in the last [1s]
[2019-06-27T17:35:06,745][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][14580] overhead, spent [709ms] collecting in the last [1.5s]
[2019-06-27T17:35:26,762][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][14600] overhead, spent [688ms] collecting in the last [1s]
[2019-06-27T17:35:31,357][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][14604] overhead, spent [691ms] collecting in the last [1.5s]
[2019-06-27T17:35:36,373][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][14609] overhead, spent [706ms] collecting in the last [1s]
[2019-06-27T17:35:45,999][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][14618] overhead, spent [684ms] collecting in the last [1.6s]
[2019-06-27T17:35:52,015][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][14624] overhead, spent [810ms] collecting in the last [1s]
[2019-06-27T17:36:06,048][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][14638] overhead, spent [696ms] collecting in the last [1s]
[2019-06-27T17:36:25,097][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][14657] overhead, spent [662ms] collecting in the last [1s]
[2019-06-27T17:36:47,130][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][14679] overhead, spent [812ms] collecting in the last [1s]
[2019-06-27T17:37:11,977][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][14703] overhead, spent [766ms] collecting in the last [1.7s]
[2019-06-27T17:37:27,072][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][14718] overhead, spent [706ms] collecting in the last [1s]
[2019-06-27T17:37:41,621][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][14732] overhead, spent [669ms] collecting in the last [1.5s]
[2019-06-27T17:38:22,297][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][14772] overhead, spent [1.4s] collecting in the last [1.5s]
[2019-06-27T17:38:27,360][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][14777] overhead, spent [665ms] collecting in the last [1s]
[2019-06-27T17:38:32,392][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][14782] overhead, spent [664ms] collecting in the last [1s]
[2019-06-27T17:38:41,908][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][14791] overhead, spent [701ms] collecting in the last [1.5s]
[2019-06-27T17:41:27,486][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][14956] overhead, spent [694ms] collecting in the last [1s]
[2019-06-27T17:41:42,191][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][14970] overhead, spent [678ms] collecting in the last [1.6s]
[2019-06-27T17:41:52,208][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][14980] overhead, spent [794ms] collecting in the last [1s]
[2019-06-27T17:42:03,255][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][14991] overhead, spent [673ms] collecting in the last [1s]
[2019-06-27T17:42:52,424][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15040] overhead, spent [689ms] collecting in the last [1s]
[2019-06-27T17:44:42,389][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15149] overhead, spent [697ms] collecting in the last [1.6s]
[2019-06-27T17:45:02,468][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15169] overhead, spent [721ms] collecting in the last [1s]
[2019-06-27T17:45:07,484][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15174] overhead, spent [693ms] collecting in the last [1s]
[2019-06-27T17:45:37,783][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15204] overhead, spent [907ms] collecting in the last [1.1s]
[2019-06-27T17:46:02,874][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15229] overhead, spent [716ms] collecting in the last [1s]
[2019-06-27T17:46:32,648][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15258] overhead, spent [690ms] collecting in the last [1.6s]
[2019-06-27T17:46:38,414][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15263] overhead, spent [1.5s] collecting in the last [1.7s]
[2019-06-27T17:46:42,430][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15267] overhead, spent [692ms] collecting in the last [1s]
[2019-06-27T17:46:51,836][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15276] overhead, spent [686ms] collecting in the last [1.3s]
[2019-06-27T17:46:57,867][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15282] overhead, spent [912ms] collecting in the last [1s]
[2019-06-27T17:47:07,884][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15292] overhead, spent [680ms] collecting in the last [1s]
[2019-06-27T17:47:12,447][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15296] overhead, spent [684ms] collecting in the last [1.5s]
[2019-06-27T17:47:17,479][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15301] overhead, spent [690ms] collecting in the last [1s]
[2019-06-27T17:47:32,090][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15315] overhead, spent [714ms] collecting in the last [1.5s]
[2019-06-27T17:47:43,216][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15326] overhead, spent [952ms] collecting in the last [1s]
[2019-06-27T17:47:54,577][INFO ][o.e.c.m.MetaDataMappingService] [node_1] [d_march_2019/Q0nnCtW9RZKc__ucTUfnNA] update_mapping [_doc]
[2019-06-27T17:47:54,577][INFO ][o.e.c.m.MetaDataMappingService] [node_1] [d_march_2019/Q0nnCtW9RZKc__ucTUfnNA] update_mapping [_doc]
[2019-06-27T17:47:56,327][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15339] overhead, spent [935ms] collecting in the last [1s]
[2019-06-27T17:47:59,155][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15341] overhead, spent [1.6s] collecting in the last [1.8s]
[2019-06-27T17:48:04,709][INFO ][o.e.c.m.MetaDataMappingService] [node_1] [d_march_2019/Q0nnCtW9RZKc__ucTUfnNA] update_mapping [_doc]
[2019-06-27T17:48:05,225][INFO ][o.e.c.m.MetaDataMappingService] [node_1] [d_march_2019/Q0nnCtW9RZKc__ucTUfnNA] update_mapping [_doc]
[2019-06-27T17:48:07,057][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15348] overhead, spent [1.6s] collecting in the last [1.8s]
[2019-06-27T17:48:08,071][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15349] overhead, spent [717ms] collecting in the last [1s]
[2019-06-27T17:48:19,899][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15360] overhead, spent [1.5s] collecting in the last [1.7s]
[2019-06-27T17:48:24,930][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15365] overhead, spent [682ms] collecting in the last [1s]
[2019-06-27T17:48:29,431][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15369] overhead, spent [675ms] collecting in the last [1.4s]
[2019-06-27T17:48:46,313][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15385] overhead, spent [1.6s] collecting in the last [1.7s]
[2019-06-27T17:48:49,159][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15387] overhead, spent [1.6s] collecting in the last [1.8s]
[2019-06-27T17:49:55,011][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15452] overhead, spent [805ms] collecting in the last [1.5s]
[2019-06-27T17:50:21,322][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15478] overhead, spent [792ms] collecting in the last [1.2s]
[2019-06-27T17:50:23,324][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15480] overhead, spent [768ms] collecting in the last [1s]
[2019-06-27T17:50:29,222][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15485] overhead, spent [1.7s] collecting in the last [1.8s]
[2019-06-27T17:50:43,371][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15499] overhead, spent [1s] collecting in the last [1.1s]
[2019-06-27T17:50:45,275][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15500] overhead, spent [1.7s] collecting in the last [1.9s]
[2019-06-27T17:50:48,433][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15503] overhead, spent [1s] collecting in the last [1.1s]
[2019-06-27T17:50:53,438][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15508] overhead, spent [772ms] collecting in the last [1s]
[2019-06-27T17:51:03,146][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15517] overhead, spent [766ms] collecting in the last [1.7s]
[2019-06-27T17:51:04,220][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15518] overhead, spent [990ms] collecting in the last [1s]
[2019-06-27T17:51:08,227][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15522] overhead, spent [769ms] collecting in the last [1s]
[2019-06-27T17:51:48,094][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15561] overhead, spent [778ms] collecting in the last [1.5s]
[2019-06-27T17:51:50,126][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15563] overhead, spent [735ms] collecting in the last [1s]
[2019-06-27T17:51:51,860][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15564] overhead, spent [788ms] collecting in the last [1.7s]
[2019-06-27T17:51:55,876][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15568] overhead, spent [756ms] collecting in the last [1s]
[2019-06-27T17:51:56,892][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15569] overhead, spent [925ms] collecting in the last [1s]
[2019-06-27T17:53:07,040][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15638] overhead, spent [683ms] collecting in the last [1.6s]
[2019-06-27T17:53:08,525][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15639] overhead, spent [732ms] collecting in the last [1.4s]
[2019-06-27T17:53:09,541][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15640] overhead, spent [751ms] collecting in the last [1s]
[2019-06-27T17:53:11,556][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15642] overhead, spent [699ms] collecting in the last [1s]
[2019-06-27T17:53:14,119][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15644] overhead, spent [719ms] collecting in the last [1.5s]
[2019-06-27T17:53:15,135][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15645] overhead, spent [919ms] collecting in the last [1s]
[2019-06-27T17:53:20,182][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15650] overhead, spent [687ms] collecting in the last [1s]
[2019-06-27T17:53:53,827][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15683] overhead, spent [736ms] collecting in the last [1.5s]
[2019-06-27T17:53:59,858][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15689] overhead, spent [913ms] collecting in the last [1s]
[2019-06-27T17:54:00,874][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15690] overhead, spent [680ms] collecting in the last [1s]
[2019-06-27T17:54:04,906][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15694] overhead, spent [701ms] collecting in the last [1s]
[2019-06-27T17:54:09,485][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15698] overhead, spent [708ms] collecting in the last [1.5s]
[2019-06-27T17:54:14,516][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15703] overhead, spent [705ms] collecting in the last [1s]
[2019-06-27T17:54:19,548][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15708] overhead, spent [686ms] collecting in the last [1s]
[2019-06-27T17:54:29,143][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15717] overhead, spent [688ms] collecting in the last [1.5s]
[2019-06-27T17:54:30,143][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15718] overhead, spent [935ms] collecting in the last [1s]
[2019-06-27T17:54:31,159][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15719] overhead, spent [695ms] collecting in the last [1s]
[2019-06-27T17:54:39,738][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15727] overhead, spent [700ms] collecting in the last [1.5s]
[2019-06-27T17:54:54,771][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15742] overhead, spent [675ms] collecting in the last [1s]
[2019-06-27T17:55:04,412][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15751] overhead, spent [665ms] collecting in the last [1.6s]
[2019-06-27T17:55:19,461][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15766] overhead, spent [736ms] collecting in the last [1s]
[2019-06-27T17:55:29,993][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15776] overhead, spent [1.4s] collecting in the last [1.4s]
[2019-06-27T17:55:34,134][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15780] overhead, spent [729ms] collecting in the last [1.1s]
[2019-06-27T17:55:49,183][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15795] overhead, spent [708ms] collecting in the last [1s]
[2019-06-27T17:55:54,215][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15800] overhead, spent [704ms] collecting in the last [1s]
[2019-06-27T17:55:55,230][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15801] overhead, spent [827ms] collecting in the last [1s]
[2019-06-27T17:55:59,950][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15805] overhead, spent [726ms] collecting in the last [1.7s]
[2019-06-27T17:56:04,966][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15810] overhead, spent [810ms] collecting in the last [1s]
[2019-06-27T17:56:15,014][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15820] overhead, spent [684ms] collecting in the last [1s]
[2019-06-27T17:56:34,719][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15839] overhead, spent [737ms] collecting in the last [1.6s]
[2019-06-27T17:56:55,812][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15859] overhead, spent [1.7s] collecting in the last [2s]
[2019-06-27T17:56:59,828][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15863] overhead, spent [736ms] collecting in the last [1s]
[2019-06-27T17:57:04,453][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15867] overhead, spent [719ms] collecting in the last [1.6s]
[2019-06-27T17:57:09,516][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15872] overhead, spent [745ms] collecting in the last [1s]
[2019-06-27T17:57:19,580][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15882] overhead, spent [748ms] collecting in the last [1s]
[2019-06-27T17:57:34,674][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15897] overhead, spent [711ms] collecting in the last [1s]
[2019-06-27T17:57:35,690][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15898] overhead, spent [861ms] collecting in the last [1s]
[2019-06-27T17:57:52,442][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15914] overhead, spent [703ms] collecting in the last [1s]
[2019-06-27T17:57:59,677][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15921] overhead, spent [838ms] collecting in the last [1.2s]
[2019-06-27T17:58:09,725][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15931] overhead, spent [714ms] collecting in the last [1s]
[2019-06-27T17:58:25,445][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15946] overhead, spent [1.6s] collecting in the last [1.6s]
[2019-06-27T17:58:29,727][WARN ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15950] overhead, spent [685ms] collecting in the last [1.2s]
[2019-06-27T17:58:34,696][INFO ][o.e.m.j.JvmGcMonitorService] [node_1] [gc][15951] overhead, spent [2.2s] collecting in the last [4.9s]
[2019-06-27T17:58:34,696][ERROR][o.e.ExceptionsHelper     ] [node_1] fatal error
    at org.elasticsearch.ExceptionsHelper.lambda$maybeDieOnAnotherThread$2(ExceptionsHelper.java:280)
    at java.util.Optional.ifPresent(Unknown Source)
    at org.elasticsearch.ExceptionsHelper.maybeDieOnAnotherThread(ExceptionsHelper.java:270)
    at org.elasticsearch.http.netty4.Netty4HttpRequestHandler.exceptionCaught(Netty4HttpRequestHandler.java:176)
    at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:285)
    at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:264)
    at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:256)
    at io.netty.channel.ChannelInboundHandlerAdapter.exceptionCaught(ChannelInboundHandlerAdapter.java:131)
    at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:285)
    at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:264)
    at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:256)
    at io.netty.channel.ChannelInboundHandlerAdapter.exceptionCaught(ChannelInboundHandlerAdapter.java:131)
    at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:285)
    at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:264)
    at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:256)
    at io.netty.channel.ChannelInboundHandlerAdapter.exceptionCaught(ChannelInboundHandlerAdapter.java:131)
    at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:285)
    at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:264)
    at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:256)
    at io.netty.channel.ChannelHandlerAdapter.exceptionCaught(ChannelHandlerAdapter.java:87)
    at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:285)
    at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:264)
    at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:256)
    at io.netty.channel.ChannelInboundHandlerAdapter.exceptionCaught(ChannelInboundHandlerAdapter.java:131)
    at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:285)
    at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:264)
    at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:256)
    at io.netty.channel.ChannelInboundHandlerAdapter.exceptionCaught(ChannelInboundHandlerAdapter.java:131)
    at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:285)
    at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:264)
    at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:256)
    at io.netty.channel.ChannelInboundHandlerAdapter.exceptionCaught(ChannelInboundHandlerAdapter.java:131)
    at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:285)
    at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:264)
    at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:256)
    at io.netty.channel.ChannelInboundHandlerAdapter.exceptionCaught(ChannelInboundHandlerAdapter.java:131)
    at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:285)
    at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:264)
    at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:256)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.exceptionCaught(DefaultChannelPipeline.java:1401)
    at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:285)
    at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:264)
    at io.netty.channel.DefaultChannelPipeline.fireExceptionCaught(DefaultChannelPipeline.java:953)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.handleReadException(AbstractNioByteChannel.java:125)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:174)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:656)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:556)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:510)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:470)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:909)
    at java.lang.Thread.run(Unknown Source)
[2019-06-27T17:58:34,696][ERROR][o.e.ExceptionsHelper     ] [node_1] fatal error
    at org.elasticsearch.ExceptionsHelper.lambda$maybeDieOnAnotherThread$2(ExceptionsHelper.java:280)
    at java.util.Optional.ifPresent(Unknown Source)
    at org.elasticsearch.ExceptionsHelper.maybeDieOnAnotherThread(ExceptionsHelper.java:270)
    at org.elasticsearch.xpack.security.transport.netty4.SecurityNetty4HttpServerTransport.exceptionCaught(SecurityNetty4HttpServerTransport.java:62)
    at org.elasticsearch.http.netty4.Netty4HttpRequestHandler.exceptionCaught(Netty4HttpRequestHandler.java:177)
    at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:285)
    at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:264)
    at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:256)
    at io.netty.channel.ChannelInboundHandlerAdapter.exceptionCaught(ChannelInboundHandlerAdapter.java:131)
    at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:285)
    at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:264)
    at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:256)
    at io.netty.channel.ChannelInboundHandlerAdapter.exceptionCaught(ChannelInboundHandlerAdapter.java:131)
    at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:285)
    at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:264)
    at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:256)
    at io.netty.channel.ChannelInboundHandlerAdapter.exceptionCaught(ChannelInboundHandlerAdapter.java:131)
    at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:285)
    at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:264)
    at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:256)
    at io.netty.channel.ChannelHandlerAdapter.exceptionCaught(ChannelHandlerAdapter.java:87)
    at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:285)
    at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:264)
    at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:256)
    at io.netty.channel.ChannelInboundHandlerAdapter.exceptionCaught(ChannelInboundHandlerAdapter.java:131)
    at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:285)
    at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:264)
    at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:256)
    at io.netty.channel.ChannelInboundHandlerAdapter.exceptionCaught(ChannelInboundHandlerAdapter.java:131)
    at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:285)
    at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:264)
    at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:256)
    at io.netty.channel.ChannelInboundHandlerAdapter.exceptionCaught(ChannelInboundHandlerAdapter.java:131)
    at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:285)
    at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:264)
    at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:256)
    at io.netty.channel.ChannelInboundHandlerAdapter.exceptionCaught(ChannelInboundHandlerAdapter.java:131)
    at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:285)
    at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:264)
    at io.netty.channel.AbstractChannelHandlerContext.fireExceptionCaught(AbstractChannelHandlerContext.java:256)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.exceptionCaught(DefaultChannelPipeline.java:1401)
    at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:285)
    at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:264)
    at io.netty.channel.DefaultChannelPipeline.fireExceptionCaught(DefaultChannelPipeline.java:953)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.handleReadException(AbstractNioByteChannel.java:125)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:174)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:656)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:556)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:510)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:470)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:909)
    at java.lang.Thread.run(Unknown Source)
[2019-06-27T17:58:34,727][WARN ][o.e.h.n.Netty4HttpServerTransport] [node_1] caught exception while handling client http traffic, closing connection [id: 0xc925a53d, L:/127.0.0.1:9200 - R:/127.0.0.1:50490]
java.lang.OutOfMemoryError: Java heap space
    at io.netty.util.internal.PlatformDependent.allocateUninitializedArray(PlatformDependent.java:204) ~[netty-common-4.1.32.Final.jar:4.1.32.Final]
    at io.netty.buffer.PoolArena$HeapArena.newByteArray(PoolArena.java:676) ~[netty-buffer-4.1.32.Final.jar:4.1.32.Final]
    at io.netty.buffer.PoolArena$HeapArena.newChunk(PoolArena.java:686) ~[netty-buffer-4.1.32.Final.jar:4.1.32.Final]
    at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:244) ~[netty-buffer-4.1.32.Final.jar:4.1.32.Final]
    at io.netty.buffer.PoolArena.allocate(PoolArena.java:226) ~[netty-buffer-4.1.32.Final.jar:4.1.32.Final]
    at io.netty.buffer.PoolArena.allocate(PoolArena.java:146) ~[netty-buffer-4.1.32.Final.jar:4.1.32.Final]
    at io.netty.buffer.PooledByteBufAllocator.newHeapBuffer(PooledByteBufAllocator.java:307) ~[netty-buffer-4.1.32.Final.jar:4.1.32.Final]
    at io.netty.buffer.AbstractByteBufAllocator.heapBuffer(AbstractByteBufAllocator.java:166) ~[netty-buffer-4.1.32.Final.jar:4.1.32.Final]
    at io.netty.buffer.AbstractByteBufAllocator.heapBuffer(AbstractByteBufAllocator.java:157) ~[netty-buffer-4.1.32.Final.jar:4.1.32.Final]
    at io.netty.buffer.AbstractByteBufAllocator.ioBuffer(AbstractByteBufAllocator.java:139) ~[netty-buffer-4.1.32.Final.jar:4.1.32.Final]
    at io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:114) ~[netty-transport-4.1.32.Final.jar:4.1.32.Final]
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:147) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:656) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:556) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:510) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:470) [netty-transport-4.1.32.Final.jar:4.1.32.Final]
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:909) [netty-common-4.1.32.Final.jar:4.1.32.Final]
    at java.lang.Thread.run(Unknown Source) [?:1.8.0_171]
[2019-06-27T17:58:34,696][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] [node_1] fatal error in thread [Thread-151], exiting
java.lang.OutOfMemoryError: Java heap space
    at io.netty.util.internal.PlatformDependent.allocateUninitializedArray(PlatformDependent.java:204) ~[?:?]
    at io.netty.buffer.PoolArena$HeapArena.newByteArray(PoolArena.java:676) ~[?:?]
    at io.netty.buffer.PoolArena$HeapArena.newChunk(PoolArena.java:686) ~[?:?]
    at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:244) ~[?:?]
    at io.netty.buffer.PoolArena.allocate(PoolArena.java:226) ~[?:?]
    at io.netty.buffer.PoolArena.allocate(PoolArena.java:146) ~[?:?]
    at io.netty.buffer.PooledByteBufAllocator.newHeapBuffer(PooledByteBufAllocator.java:307) ~[?:?]
    at io.netty.buffer.AbstractByteBufAllocator.heapBuffer(AbstractByteBufAllocator.java:166) ~[?:?]
    at io.netty.buffer.AbstractByteBufAllocator.heapBuffer(AbstractByteBufAllocator.java:157) ~[?:?]
    at io.netty.buffer.AbstractByteBufAllocator.ioBuffer(AbstractByteBufAllocator.java:139) ~[?:?]
    at io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:114) ~[?:?]
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:147) ~[?:?]
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:656) ~[?:?]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:556) ~[?:?]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:510) ~[?:?]
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:470) ~[?:?]
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:909) ~[?:?]
    at java.lang.Thread.run(Unknown Source) [?:1.8.0_171]
[2019-06-27T17:58:34,696][ERROR][o.e.b.ElasticsearchUncaughtExceptionHandler] [node_1] fatal error in thread [Thread-150], exiting
java.lang.OutOfMemoryError: Java heap space
    at io.netty.util.internal.PlatformDependent.allocateUninitializedArray(PlatformDependent.java:204) ~[?:?]
    at io.netty.buffer.PoolArena$HeapArena.newByteArray(PoolArena.java:676) ~[?:?]
    at io.netty.buffer.PoolArena$HeapArena.newChunk(PoolArena.java:686) ~[?:?]
    at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:244) ~[?:?]
    at io.netty.buffer.PoolArena.allocate(PoolArena.java:226) ~[?:?]
    at io.netty.buffer.PoolArena.allocate(PoolArena.java:146) ~[?:?]
    at io.netty.buffer.PooledByteBufAllocator.newHeapBuffer(PooledByteBufAllocator.java:307) ~[?:?]
    at io.netty.buffer.AbstractByteBufAllocator.heapBuffer(AbstractByteBufAllocator.java:166) ~[?:?]
    at io.netty.buffer.AbstractByteBufAllocator.heapBuffer(AbstractByteBufAllocator.java:157) ~[?:?]
    at io.netty.buffer.AbstractByteBufAllocator.ioBuffer(AbstractByteBufAllocator.java:139) ~[?:?]
    at io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:114) ~[?:?]
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:147) ~[?:?]
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:656) ~[?:?]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:556) ~[?:?]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:510) ~[?:?]
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:470) ~[?:?]
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:909) ~[?:?]
    at java.lang.Thread.run(Unknown Source) [?:1.8.0_171]
dadoonet commented 5 years ago

You don't have enough memory allocated to Elasticsearch. Increase the heap size beyond the default 1 GB.

BTW, could you use the formatting button whenever you post logs or code? Otherwise I have to edit it manually every time. Thanks.
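For reference, the heap is configured in Elasticsearch's jvm.options file (the exact path depends on your install); a minimal sketch of the change:

```
# config/jvm.options (Elasticsearch 6.x; path may vary by install type)
# Initial and maximum heap should be set to the same value.
-Xms10g
-Xmx10g
```

The change only takes effect after a full stop and restart of the node.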

Neel-Gagan commented 5 years ago

I have allocated 10 GB of heap: -Xms10g -Xmx10g.

dadoonet commented 5 years ago

Nope.

[2019-06-27T13:29:25,055][INFO ][o.e.e.NodeEnvironment    ] [node_1] heap size [990.7mb], compressed ordinary object pointers [true]
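One way to confirm the heap the node actually started with is to read the NodeEnvironment startup line; a minimal sketch using the line quoted above:

```shell
# The NodeEnvironment startup message reports the effective heap size.
# Using the log line quoted above as sample input:
line='[2019-06-27T13:29:25,055][INFO ][o.e.e.NodeEnvironment    ] [node_1] heap size [990.7mb], compressed ordinary object pointers [true]'
# Extract the value inside "heap size [...]"
heap=$(echo "$line" | sed -n 's/.*heap size \[\([^]]*\)\].*/\1/p')
echo "$heap"   # 990.7mb, i.e. the default ~1 GB, not the intended 10 GB
```

In a real setup you would grep this line out of the node's own log file rather than a pasted string.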
Neel-Gagan commented 5 years ago

Is there anywhere else I need to make changes to set the heap size? I have made the change in the jvm.options file under the elasticsearch/config directory:

-Xms10g -Xmx10g

(Xms sets the initial size of the total heap space; Xmx sets the maximum size.)

Do I need to change anything anywhere else?

dadoonet commented 5 years ago

Could you format the logs/code with markdown or the code icon? That would help, as I have to do that every time.

The modification looks good. Did you stop and restart the node?

Neel-Gagan commented 5 years ago

I have allocated 10 GB to the heap, and it now shows in the Elasticsearch logs. Is there a recommended ratio for heap allocation? I have 24 GB of RAM and allocated 10 GB.

dadoonet commented 5 years ago

Up to half of the RAM is OK, so 12 GB in your case. Do you still have the same problem with a 10 GB heap? If so, please share the full Elasticsearch logs again.

Neel-Gagan commented 5 years ago

I am getting the below error while running the crawler with a 10 GB heap size; there is no error in the ES logs.

Microsoft Windows [Version 10.0.16299.214]
(c) 2017 Microsoft Corporation. All rights reserved.

C:\ELK\fscrawler-2.6\bin>fscrawler  d_march_2019 --debug --loop 1 > march_2019.txt
Exception in thread "fs-crawler" java.lang.OutOfMemoryError: Java heap space
        at java.util.Arrays.copyOf(Unknown Source)
        at java.lang.AbstractStringBuilder.ensureCapacityInternal(Unknown Source)
        at java.lang.AbstractStringBuilder.append(Unknown Source)
        at java.lang.StringBuilder.append(Unknown Source)
        at org.apache.logging.log4j.core.pattern.LineSeparatorPatternConverter.format(LineSeparatorPatternConverter.java:66)
        at org.apache.logging.log4j.core.pattern.PatternFormatter.format(PatternFormatter.java:38)
        at org.apache.logging.log4j.core.layout.PatternLayout$PatternSerializer.toSerializable(PatternLayout.java:334)
        at org.apache.logging.log4j.core.layout.PatternLayout.toText(PatternLayout.java:233)
        at org.apache.logging.log4j.core.layout.PatternLayout.encode(PatternLayout.java:218)
        at org.apache.logging.log4j.core.layout.PatternLayout.encode(PatternLayout.java:58)
        at org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.directEncodeEvent(AbstractOutputStreamAppender.java:177)
        at org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.tryAppend(AbstractOutputStreamAppender.java:170)
        at org.apache.logging.log4j.core.appender.AbstractOutputStreamAppender.append(AbstractOutputStreamAppender.java:161)
        at org.apache.logging.log4j.core.config.AppenderControl.tryCallAppender(AppenderControl.java:156)
        at org.apache.logging.log4j.core.config.AppenderControl.callAppender0(AppenderControl.java:129)
        at org.apache.logging.log4j.core.config.AppenderControl.callAppenderPreventRecursion(AppenderControl.java:120)
        at org.apache.logging.log4j.core.config.AppenderControl.callAppender(AppenderControl.java:84)
        at org.apache.logging.log4j.core.config.LoggerConfig.callAppenders(LoggerConfig.java:464)
        at org.apache.logging.log4j.core.config.LoggerConfig.processLogEvent(LoggerConfig.java:448)
        at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:431)
        at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:406)
        at org.apache.logging.log4j.core.config.AwaitCompletionReliabilityStrategy.log(AwaitCompletionReliabilityStrategy.java:63)
        at org.apache.logging.log4j.core.Logger.logMessage(Logger.java:146)
        at org.apache.logging.log4j.spi.AbstractLogger.tryLogMessage(AbstractLogger.java:2170)
        at org.apache.logging.log4j.spi.AbstractLogger.logMessageTrackRecursion(AbstractLogger.java:2125)
        at org.apache.logging.log4j.spi.AbstractLogger.logMessageSafely(AbstractLogger.java:2108)
        at org.apache.logging.log4j.spi.AbstractLogger.logMessage(AbstractLogger.java:2025)
        at org.apache.logging.log4j.spi.AbstractLogger.logIfEnabled(AbstractLogger.java:1898)
        at org.apache.logging.log4j.spi.AbstractLogger.debug(AbstractLogger.java:449)
        at fr.pilato.elasticsearch.crawler.fs.framework.FsCrawlerUtil.isIndexable(FsCrawlerUtil.java:259)
        at fr.pilato.elasticsearch.crawler.fs.FsParserAbstract.indexFile(FsParserAbstract.java:486)
        at fr.pilato.elasticsearch.crawler.fs.FsParserAbstract.addFilesRecursively(FsParserAbstract.java:275)
dadoonet commented 5 years ago

Did you define the JVM settings for FSCrawler? https://fscrawler.readthedocs.io/en/latest/admin/jvm-settings.html

As you are running on Windows, I assume you need to go to the Control Panel and add FS_JAVA_OPTS as a user or system environment variable with -Xmx2048m -Xms2048m, or even more.

Then restart the command line and start FSCrawler again. I'm going to add a way to report the current heap size at startup, as this will help me debug in the future.

Could you run a printenv command to make sure this setting is visible before starting FSCrawler?
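The same check in a Unix-style shell would be a two-liner; a minimal sketch (the FS_JAVA_OPTS variable name comes from the FSCrawler docs, and the 2 GB value is only an example):

```shell
# Give the FSCrawler JVM a 2 GB heap (value is illustrative; size to your data).
export FS_JAVA_OPTS="-Xmx2048m -Xms2048m"

# Verify the setting is visible in the environment before starting FSCrawler.
printenv FS_JAVA_OPTS
```

On Windows the equivalent is setting the variable via the Control Panel (or `set`/`setx`) and opening a fresh command prompt, since an already-open prompt does not pick up new environment variables.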

Neel-Gagan commented 5 years ago

After making the heap changes, the indexing went fine, but now when I search anything on that index, Kibana hangs and becomes unresponsive.

Initially I got the below error while running Kibana after making the heap changes.

C:\ELK\kibana-6.8.0\bin>kibana.bat
  log   [10:17:26.461] [info][status][plugin:kibana@6.8.0] Status changed from uninitialized to green - Ready
  log   [10:17:26.487] [info][status][plugin:elasticsearch@6.8.0] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [10:17:26.489] [info][status][plugin:xpack_main@6.8.0] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [10:17:26.493] [info][status][plugin:graph@6.8.0] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [10:17:26.501] [info][status][plugin:monitoring@6.8.0] Status changed from uninitialized to green - Ready
  log   [10:17:26.504] [info][status][plugin:spaces@6.8.0] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [10:17:26.512] [warning][security] Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in kibana.yml
  log   [10:17:26.515] [warning][security] Session cookies will be transmitted over insecure connections. This is not recommended.
  log   [10:17:26.520] [info][status][plugin:security@6.8.0] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [10:17:26.531] [info][status][plugin:searchprofiler@6.8.0] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [10:17:26.534] [info][status][plugin:ml@6.8.0] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [10:17:26.560] [info][status][plugin:tilemap@6.8.0] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [10:17:26.564] [info][status][plugin:watcher@6.8.0] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [10:17:26.573] [info][status][plugin:grokdebugger@6.8.0] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [10:17:26.576] [info][status][plugin:dashboard_mode@6.8.0] Status changed from uninitialized to green - Ready
  log   [10:17:26.578] [info][status][plugin:logstash@6.8.0] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [10:17:26.583] [info][status][plugin:beats_management@6.8.0] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [10:17:26.601] [info][status][plugin:apm@6.8.0] Status changed from uninitialized to green - Ready
  log   [10:17:26.603] [info][status][plugin:tile_map@6.8.0] Status changed from uninitialized to green - Ready
  log   [10:17:26.606] [info][status][plugin:task_manager@6.8.0] Status changed from uninitialized to green - Ready
  log   [10:17:26.608] [info][status][plugin:maps@6.8.0] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [10:17:26.611] [info][status][plugin:interpreter@6.8.0] Status changed from uninitialized to green - Ready
  log   [10:17:26.619] [info][status][plugin:canvas@6.8.0] Status changed from uninitialized to green - Ready
  log   [10:17:26.622] [info][status][plugin:license_management@6.8.0] Status changed from uninitialized to green - Ready
  log   [10:17:26.624] [info][status][plugin:cloud@6.8.0] Status changed from uninitialized to green - Ready
  log   [10:17:26.628] [info][status][plugin:index_management@6.8.0] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [10:17:26.639] [info][status][plugin:console@6.8.0] Status changed from uninitialized to green - Ready
  log   [10:17:26.641] [info][status][plugin:console_extensions@6.8.0] Status changed from uninitialized to green - Ready
  log   [10:17:26.643] [info][status][plugin:notifications@6.8.0] Status changed from uninitialized to green - Ready
  log   [10:17:26.645] [info][status][plugin:index_lifecycle_management@6.8.0] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [10:17:26.674] [info][status][plugin:infra@6.8.0] Status changed from uninitialized to green - Ready
  log   [10:17:26.676] [info][status][plugin:rollup@6.8.0] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [10:17:26.683] [info][status][plugin:remote_clusters@6.8.0] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [10:17:26.686] [info][status][plugin:cross_cluster_replication@6.8.0] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [10:17:26.694] [info][status][plugin:translations@6.8.0] Status changed from uninitialized to green - Ready
  log   [10:17:26.705] [info][status][plugin:upgrade_assistant@6.8.0] Status changed from uninitialized to green - Ready
  log   [10:17:26.714] [info][status][plugin:uptime@6.8.0] Status changed from uninitialized to green - Ready
  log   [10:17:26.716] [info][status][plugin:oss_telemetry@6.8.0] Status changed from uninitialized to green - Ready
  log   [10:17:26.721] [info][status][plugin:metrics@6.8.0] Status changed from uninitialized to green - Ready
  log   [10:17:26.842] [info][status][plugin:timelion@6.8.0] Status changed from uninitialized to green - Ready
  log   [10:17:27.614] [info][status][plugin:elasticsearch@6.8.0] Status changed from yellow to green - Ready
  log   [10:17:27.651] [info][license][xpack] Imported license information from Elasticsearch for the [data] cluster: mode: basic | status: active
  log   [10:17:27.655] [info][status][plugin:xpack_main@6.8.0] Status changed from yellow to green - Ready
  log   [10:17:27.655] [info][status][plugin:graph@6.8.0] Status changed from yellow to green - Ready
  log   [10:17:27.657] [info][status][plugin:searchprofiler@6.8.0] Status changed from yellow to green - Ready
  log   [10:17:27.657] [info][status][plugin:ml@6.8.0] Status changed from yellow to green - Ready
  log   [10:17:27.658] [info][status][plugin:tilemap@6.8.0] Status changed from yellow to green - Ready
  log   [10:17:27.659] [info][status][plugin:watcher@6.8.0] Status changed from yellow to green - Ready
  log   [10:17:27.659] [info][status][plugin:grokdebugger@6.8.0] Status changed from yellow to green - Ready
  log   [10:17:27.660] [info][status][plugin:logstash@6.8.0] Status changed from yellow to green - Ready
  log   [10:17:27.661] [info][status][plugin:beats_management@6.8.0] Status changed from yellow to green - Ready
  log   [10:17:27.661] [info][status][plugin:index_management@6.8.0] Status changed from yellow to green - Ready
  log   [10:17:27.662] [info][status][plugin:index_lifecycle_management@6.8.0] Status changed from yellow to green - Ready
  log   [10:17:27.663] [info][status][plugin:rollup@6.8.0] Status changed from yellow to green - Ready
  log   [10:17:27.664] [info][status][plugin:remote_clusters@6.8.0] Status changed from yellow to green - Ready
  log   [10:17:27.664] [info][status][plugin:cross_cluster_replication@6.8.0] Status changed from yellow to green - Ready
  log   [10:17:27.665] [info][kibana-monitoring][monitoring-ui] Starting monitoring stats collection
  log   [10:17:27.675] [info][status][plugin:security@6.8.0] Status changed from yellow to green - Ready
  log   [10:17:27.676] [info][status][plugin:maps@6.8.0] Status changed from yellow to green - Ready
  log   [10:17:27.712] [info][license][xpack] Imported license information from Elasticsearch for the [monitoring] cluster: mode: basic | status: active
  log   [10:17:27.976] [warning][reporting] Generating a random key for xpack.reporting.encryptionKey. To prevent pending reports from failing on restart, please set xpack.reporting.encryptionKey in kibana.yml
  log   [10:17:28.149] [info][status][plugin:reporting@6.8.0] Status changed from uninitialized to green - Ready
  log   [10:17:29.005] [info][listening] Server running at http://localhost:5601
  log   [10:17:29.021] [info][status][plugin:spaces@6.8.0] Status changed from yellow to green - Ready

<--- Last few GCs --->

[11568:000001A64C885610]    51424 ms: Mark-sweep 866.3 (1052.1) -> 866.2 (1013.1) MB, 64.3 / 0.0 ms  (average mu = 0.898, current mu = 0.000) last resort GC in old space requested
[11568:000001A64C885610]    51489 ms: Mark-sweep 866.2 (1013.1) -> 866.2 (1001.6) MB, 65.0 / 0.0 ms  (average mu = 0.813, current mu = 0.000) last resort GC in old space requested

<--- JS stacktrace --->

==== JS stack trace =========================================

Security context: 0x00979419e6e1 <JSObject>
    0: builtin exit frame: parse(this=0x009794191a19 <Object map = 000002AB5F7042A9>,0x00e2b486d0a9 <Very long string[394376133]>,0x009794191a19 <Object map = 000002AB5F7042A9>)

    1: deserialize [0000018817378B71] [C:\ELK\kibana-6.8.0\node_modules\elasticsearch\src\lib\serializers\json.js:45] [bytecode=0000027D75BB8A71 offset=21](this=0x00034ed99eb1 <Json map = 000001C246C9BB11>,str=0x0...

FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
 1: 00007FF7A9CA0EFA v8::internal::GCIdleTimeHandler::GCIdleTimeHandler+4810
 2: 00007FF7A9C7A296 node::MakeCallback+4518
 3: 00007FF7A9C7AC80 node_module_register+2160
 4: 00007FF7A9F109BE v8::internal::FatalProcessOutOfMemory+846
 5: 00007FF7A9F108EF v8::internal::FatalProcessOutOfMemory+639
 6: 00007FF7AA44E954 v8::internal::Heap::MaxHeapGrowingFactor+11476
 7: 00007FF7AA44C6E8 v8::internal::Heap::MaxHeapGrowingFactor+2664
 8: 00007FF7A9FEF0CB v8::internal::Factory::AllocateRawWithImmortalMap+59
 9: 00007FF7A9FF1BCD v8::internal::Factory::NewRawTwoByteString+77
10: 00007FF7AA205C58 v8::internal::Smi::SmiPrint+536
11: 00007FF7A9F03ECB v8::internal::StringHasher::UpdateIndex+219
12: 00007FF7AA0DEFD9 v8::internal::CodeStubAssembler::ConstexprBoolNot+38457
13: 00007FF7AA0DEF4B v8::internal::CodeStubAssembler::ConstexprBoolNot+38315
14: 0000012B8125C721

C:\ELK\kibana-6.8.0\bin>kibana.bat --max_old_space_size=4096
  log   [10:21:21.633] [fatal][root] { Error: "max_old_space_size" setting was not applied. Check for spelling errors and ensure that expected plugins are installed.
    at KbnServer.exports.default (F:\ELK SETUP\kibana-6.8.0\src\server\config\complete.js:88:17) code: 'InvalidConfig', processExitCode: 64 }

 FATAL  Error: "max_old_space_size" setting was not applied. Check for spelling errors and ensure that expected plugins are installed.

max_old_space_size has now been applied in kibana.bat and Kibana starts fine, but the issue of Kibana becoming unresponsive is still there.

dadoonet commented 5 years ago

Please could you format the logs/code with markdown or the code icon? That would help as I have to do that every time.

dadoonet commented 5 years ago

You are probably indexing a lot of text, which makes your documents very, very big. The problem is that, by default, Kibana loads much more than 10 documents.

In 7.x, discover:sampleSize is 500. See http://0.0.0.0:5601/app/kibana#/management/kibana/settings/?_g=():

(screenshot: the Kibana Advanced Settings page showing the discover:sampleSize option)

This means you might be loading 500 very big documents every time. You should try running:

GET /d_march_2019/_search?size=500

and see how big the response is. That might explain it. Maybe you need to allocate more memory to Kibana, then?
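One way to eyeball the response size is to count the bytes of the raw JSON. A minimal sketch follows; the curl line in the comment assumes Elasticsearch on localhost:9200 (the default, but not confirmed in this thread), and a tiny stand-in file is used so the commands are self-contained:

```shell
# If Elasticsearch is reachable, save the raw response, e.g.:
#   curl -s 'http://localhost:9200/d_march_2019/_search?size=500' > response.json
# Here a tiny stand-in response is written instead, for illustration.
printf '{"hits":{"total":500,"hits":[{"_source":{"content":"..."}}]}}' > response.json

# Payload size in bytes; hundreds of megabytes here would explain
# the Kibana browser tab hanging while it parses the response.
wc -c < response.json
```

If the real payload runs into hundreds of megabytes, lowering discover:sampleSize or excluding the large content field from Discover is usually more effective than giving Kibana more memory.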

Neel-Gagan commented 5 years ago

By allocating more memory, do you mean increasing the heap memory allocation? I am using the <> icon to paste the logs. Is that not the markdown way of pasting logs?

dadoonet commented 5 years ago

By allocating more memory you mean increasing the heap memory allocation ?

The memory Kibana can use. Although it's not documented, I think you may be able to give more memory to Node.js:

NODE_OPTIONS="--max-old-space-size=2560"
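A sketch of how that could be set and verified in a Unix-style shell (--max-old-space-size is a standard Node.js flag; the 2560 MB value is the one suggested above, and whether Kibana honors it set this way on Windows is not confirmed here):

```shell
# Raise the Node.js old-generation heap limit for the Kibana process.
export NODE_OPTIONS="--max-old-space-size=2560"

# Confirm the setting is in the environment before starting Kibana.
printenv NODE_OPTIONS
```

On Windows, the same variable can be set in kibana.bat or via the Control Panel before launching Kibana.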
Neel-Gagan commented 5 years ago

I have changed NODE_OPTIONS="--max-old-space-size=8192" in the kibana.bat file. After doing this, I get the below error on opening Kibana in the browser:

Version: 6.8.0 Build: 20352 Error: allocation size overflow (http://1xx.xxx.xx.xx:5601/built_assets/dlls/vendors.bundle.dll.js:450) window.onerror@http://1xx.xxx.xx.xx:5601/bundles/commons.bundle.js:3:971195

When I check performance in Task Manager, the disk utilisation shows around 93%. Is there any issue with swap space? Should I increase the memory to prevent it from hanging?

dadoonet commented 5 years ago

Not sure about Kibana, but you should definitely disable swap for Elasticsearch. Maybe start with less memory than 8 GB.

Neel-Gagan commented 5 years ago

I disabled swap for Elasticsearch and also set NODE_OPTIONS="--max-old-space-size=4096", but I still face the same issue of Kibana hanging when querying that particular index.

dadoonet commented 5 years ago

Could you also change discover:sampleSize (see previous post) to something like 10?

Neel-Gagan commented 5 years ago

GET /d_march_2019/_search?size=10 works fine, but when I fire the query GET /d_march_2019/_search { "query": { "match" : { "content" : "keyword" } } }

it takes a lot of time to get the response. However, the whole browser does not hang the way it does when the same query is fired from the search textbox UI of Kibana. Is there any way I can reduce the response time of the query?

dadoonet commented 5 years ago

Could you format the logs/code with markdown or the code icon? That would help as I have to do that every time. It should appear like this:

# works fine 
GET /d_march_2019/_search?size=10 
# but when i fire query
GET /d_march_2019/_search
{
    "query": {
        "match" : {
            "content" : "keyword"
        }
    }
}

but when i fire query

Are you sending this query from Kibana dev console? Could you share the output?

dadoonet commented 4 years ago

No more information. Feel free to reopen/comment with new information if any.