ceph / cephfs-hadoop


CephFS HDFS clients failing to respond to cache pressure #31

Open · hellertime opened this issue 7 years ago

hellertime commented 7 years ago

I've been running a CephFS system for a while now (currently Ceph v0.94.7). The cluster is used primarily for HDFS-style access from Apache Spark through the cephfs-hadoop shim.

I've encountered frequent cases where the cephfs-hadoop-based clients put the cluster into a HEALTH_WARN state, with messages about clients failing to respond to cache pressure.

I've only just begun debugging this issue, but I wanted to start here and get an idea of where to focus my search. What can cause a cephfs client to misbehave like this? Are there cephfs messages that might not be handled properly in this HDFS shim?

dotnwat commented 7 years ago

Hi @hellertime, the cephfs-hadoop shim is a thin layer over the cephfs userspace client (which in turn is the basis for the FUSE client), so it probably isn't the shim itself causing this issue, although Hadoop can be fairly brutal about leaving file handles open.
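To make that concrete, here's a minimal Java sketch of the leak pattern to look for. The `ceph://` URI and paths are placeholders, and it assumes cephfs-hadoop is on the classpath with `fs.ceph.impl` configured:

```java
import java.io.IOException;
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HandleLeakExample {
    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        // Placeholder URI; real deployments point this at their monitors.
        FileSystem fs = FileSystem.get(URI.create("ceph://mon-host:6789/"), conf);

        // Leaky pattern: the stream is never closed, so the underlying
        // libcephfs client keeps holding the file's capabilities until
        // something finally closes it (often never before the JVM exits).
        FSDataInputStream leaked = fs.open(new Path("/data/part-00000"));
        leaked.read();

        // Safer pattern: try-with-resources guarantees close(), which lets
        // the client release caps when the MDS applies cache pressure.
        try (FSDataInputStream in = fs.open(new Path("/data/part-00000"))) {
            in.read();
        }
    }
}
```

Using try-with-resources (or an explicit `close()` in a `finally` block) is what lets the client give capabilities back promptly when the MDS asks.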

Have you tried this in Jewel? @gregsfortytwo @jcsp thoughts?

gregsfortytwo commented 7 years ago

Yeah, the cache pressure messages often just mean that the client is holding so many files open that it can't give up any capabilities when the MDS asks for them back. I'm not sure whether the HDFS shim is more (unnecessarily) prone to that than other interfaces or not.

dotnwat commented 7 years ago

@hellertime I haven't used Spark on the hadoop shim, but in the past, with a little poking around in Hadoop or a Hadoop application, I could find some dangling open file descriptors. One way to do this is to collect a client log and then look for files that accumulate opens without closes; that can narrow down which parts of the Hadoop application are holding the files open.
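For example, with client debug logging enabled (something like `debug client = 20` and a `log file` set in the `[client]` section of ceph.conf), a quick-and-dirty scan for unbalanced opens might look like the sketch below. The exact log line format varies across Ceph versions and debug levels, so the regex here is an assumption to adapt to your logs, not a known format:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class OpenCloseBalance {
    // Assumed pattern: an "open" or "close" call followed by a path-like
    // token somewhere on the same line. Adjust for your log format.
    private static final Pattern CALL =
        Pattern.compile("\\b(open|close)\\b.*?(/[^\\s\"]+)");

    public static void main(String[] args) throws IOException {
        // args[0] is the path to the collected client log.
        Map<String, Integer> balance = new HashMap<>();
        for (String line : Files.readAllLines(Paths.get(args[0]))) {
            Matcher m = CALL.matcher(line);
            if (!m.find()) continue;
            int delta = m.group(1).equals("open") ? 1 : -1;
            balance.merge(m.group(2), delta, Integer::sum);
        }
        // Paths with a positive balance accumulated opens without closes.
        balance.forEach((path, n) -> {
            if (n > 0) System.out.println(n + "\t" + path);
        });
    }
}
```

Paths that show up with large positive counts are good candidates for tracing back to the Hadoop or Spark code paths that opened them.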