rancher / catalog-dockerfiles

Dockerfiles for Rancher Catalog containers
Apache License 2.0

elasticsearch using glusterfs #56

Open bonovoxly opened 8 years ago

bonovoxly commented 8 years ago

When using glusterfs as the data volume, I see the following:

```
4/3/2016 8:53:55 AM [2016-04-03 12:53:55,083][WARN ][cluster.action.shard ] [Nicole St. Croix] [.kibana][0] received shard failed for target shard [[.kibana][0], node[MlquNusiR2O-9Lc5x2dLeQ], [P], v[1], s[INITIALIZING], a[id=j_vQF1yPRMWPQR56c3Th_w], unassigned_info[[reason=INDEX_CREATED], at[2016-04-03T12:53:50.855Z]]], indexUUID [_4mgkEHzRxawb_2dE3vihA], message [failed recovery], failure [IndexShardRecoveryException[failed recovery]; nested: AlreadyClosedException[Underlying file changed by an external force at 2016-04-03T12:53:54.102783Z, (lock=NativeFSLock(path=/usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/.kibana/0/index/write.lock,impl=sun.nio.ch.FileLockImpl[0:9223372036854775807 exclusive valid],ctime=2016-04-03T12:53:54.102783Z))]; ]
4/3/2016 8:53:55 AM [.kibana][[.kibana][0]] IndexShardRecoveryException[failed recovery]; nested: AlreadyClosedException[Underlying file changed by an external force at 2016-04-03T12:53:54.102783Z, (lock=NativeFSLock(path=/usr/share/elasticsearch/data/elasticsearch/nodes/0/indices/.kibana/0/index/write.lock,impl=sun.nio.ch.FileLockImpl[0:9223372036854775807 exclusive valid],ctime=2016-04-03T12:53:54.102783Z))];
4/3/2016 8:53:55 AM at org.elasticsearch.index.shard.StoreRecoveryService$1.run(StoreRecoveryService.java:179)
```
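For anyone digging into the "Underlying file changed by an external force" message: Lucene's `NativeFSLock` records the `ctime` of `write.lock` when the lock is acquired and later refuses the lock if that `ctime` has moved, which is exactly what a GlusterFS metadata self-heal can trigger. A minimal Python sketch of that validity check (simplified; real Lucene also holds an OS-level file lock, and the function names here are made up for illustration):

```python
import os
import tempfile
import time

def acquire_lock(path: str) -> int:
    """Create the lock file and remember its inode change time (ctime),
    as Lucene's NativeFSLock does on acquisition."""
    open(path, "a").close()
    return os.stat(path).st_ctime_ns

def lock_still_valid(path: str, ctime_at_acquire: int) -> bool:
    """The lock is only trusted while the ctime is unchanged; any
    external metadata change invalidates it."""
    return os.stat(path).st_ctime_ns == ctime_at_acquire

if __name__ == "__main__":
    lock_path = os.path.join(tempfile.mkdtemp(), "write.lock")
    ctime = acquire_lock(lock_path)
    print(lock_still_valid(lock_path, ctime))  # True

    # Simulate an "external force": a metadata-only change (here a
    # timestamp rewrite, standing in for a Gluster self-heal) bumps
    # the inode ctime even though the file's contents are untouched.
    time.sleep(0.05)
    os.utime(lock_path, (0, 0))
    print(lock_still_valid(lock_path, ctime))  # False
```

That would explain why the data itself looks fine yet shard recovery dies with `AlreadyClosedException`: Gluster rewriting file metadata across replicas is indistinguishable, from Lucene's point of view, from another process tampering with the lock.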

It appears to be a similar issue to https://www.gluster.org/pipermail/gluster-users/2015-September/023676.html

I also created a forum post about it. https://forums.rancher.com/t/glusterfs-and-elasticsearch/2293

Wondering if anyone else has seen this or knows of a workaround. I've considered the setting recommended in that thread (`gluster volume set <volname> cluster.consistent-metadata on`), but it might not be supported yet.
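For reference, applying that option would look something like this (`es_data` is a placeholder volume name, and whether this actually stops the lock invalidation is unverified):

```sh
# Keep metadata consistent across replicas so self-heal doesn't
# rewrite inode metadata out from under Lucene's write.lock.
# "es_data" is a placeholder volume name.
gluster volume set es_data cluster.consistent-metadata on

# Confirm the option took effect
gluster volume get es_data cluster.consistent-metadata
```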

SvenAbels commented 8 years ago

Did you find a solution for this? I'm experiencing the same problem.

bonovoxly commented 8 years ago

I have not.

fabiomartinelli commented 6 years ago

Same issue with ES/GlusterFS here, in 2018!