Huawei-Spark / Spark-SQL-on-HBase

Native, optimized access to HBase Data through Spark SQL/Dataframe Interfaces
Apache License 2.0

coprocessor CheckDirService not found #2

Closed secfree closed 9 years ago

secfree commented 9 years ago

Every time I start hbase-sql, the first SQL query I run hangs for about 10 minutes and then outputs:

15/07/30 17:23:20 WARN CoprocessorRpcChannel: Call failed on IOException
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=35, exceptions:
Thu Jul 30 17:14:07 CST 2015, org.apache.hadoop.hbase.client.RpcRetryingCaller@6d9c7e9b, org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.exceptions.UnknownProtocolException): org.apache.hadoop.hbase.exceptions.UnknownProtocolException: No registered coprocessor service found for name CheckDirService in region metadata,,1438221371472.33a2b7cbaab1f126dbff444b6c11e4da.
        at org.apache.hadoop.hbase.regionserver.HRegion.execService(HRegion.java:5884)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.execServiceOnRegion(HRegionServer.java:3464)
        at org.apache.hadoop.hbase.regionserver.HRegionServer.execService(HRegionServer.java:3446)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:30950)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2093)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
        at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:107)
        at java.lang.Thread.run(Thread.java:745)

CheckDirService has a proto definition in the source code, but no separate jar is generated for it.

Does spark-sql-on-hbase-1.0.0.jar really need to be uploaded to the HBase master and every RegionServer? The jar file is quite large, so if that is the case, it is somewhat inconvenient.

Thanks.

yzhou2001 commented 9 years ago

Yes, the jar needs to be available to the HBase master and region servers; otherwise the coprocessor/custom filter can't be used. We may want to lower the number of retries (currently 35) to a more reasonable value.
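That deployment step can be sketched roughly as follows; the host names and the /opt/hbase/lib/ layout are assumptions for illustration, not values from this thread:

```python
# Sketch: push the coprocessor jar to the HBase master and every RegionServer.
# Host names and lib_dir are hypothetical; adjust to your cluster layout.
import subprocess

def scp(src, dest):
    """Copy a local file to host:path via scp (requires ssh access)."""
    subprocess.run(["scp", src, dest], check=True)

def deploy_jar(jar, hosts, lib_dir="/opt/hbase/lib/", copy=scp):
    """Send the jar to each node; `copy` is injectable for testing."""
    for host in hosts:
        copy(jar, "%s:%s" % (host, lib_dir))
```

After copying, each node still needs the jar on its classpath (e.g. via HBASE_CLASSPATH in hbase-env.sh, as tried below) and a region server restart to pick it up.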

In addition, I recommend writing GitHub entries in English to maximize their exposure.

secfree commented 9 years ago

> I recommend writing GitHub entries in English to maximize their exposure.

Thanks for your advice.

The number of attempts is a parameter that can be controlled by the client.

I added

    <property>
        <name>hbase.client.retries.number</name>
        <value>3</value>
    </property>

to hbase-site.xml under $SPARK_HOME/conf/, and now hbase-sql gives a quick response when I query the RegionServer.
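For reference, the ~10-minute hang is consistent with the client's exponential backoff. A rough estimate, assuming the default hbase.client.pause of 100 ms and the multiplier table HBase 1.x ships in HConstants.RETRY_BACKOFF (verify both against your HBase version):

```python
# Estimate total sleep between retry attempts. The multiplier table mirrors
# HConstants.RETRY_BACKOFF in HBase 1.x; past the end, the last entry repeats.
RETRY_BACKOFF = [1, 2, 3, 5, 10, 20, 40, 100, 100, 100, 100, 200, 200]

def total_sleep_seconds(retries, pause_ms=100):
    """Sum the per-attempt backoff sleeps for the given retry count."""
    total_ms = 0
    for i in range(retries):
        mult = RETRY_BACKOFF[min(i, len(RETRY_BACKOFF) - 1)]
        total_ms += mult * pause_ms
    return total_ms / 1000.0
```

Under these assumptions, 35 retries amounts to roughly 528 s of sleep alone, close to the observed hang, while 3 retries is about 0.6 s, which explains the quick failure.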

secfree commented 9 years ago
  1. I added spark-sql-on-hbase-1.0.0.jar to the HBase master and every RegionServer.
  2. I set export HBASE_CLASSPATH=/.../spark-sql-on-hbase-1.0.0.jar in hbase-env.sh.

The CheckDirService coprocessor still fails to register.

I then added

    <property>
        <name>hbase.coprocessor.region.classes</name>
        <value>org.apache.spark.sql.hbase.CheckDirProto</value>
    </property>

to hbase-site.xml, and HBase's log indicates that I supplied the wrong value.

What should I set in hbase-site.xml?

Thanks.

xinyunh commented 9 years ago

Hi @secfree,

Have you deployed the configured files, hbase-site.xml and hbase-env.sh, to each regionserver?

secfree commented 9 years ago

Yes, I sync all nodes' configuration files with scripts.

This is the log:

coprocessor.CoprocessorHost: The coprocessor org.apache.spark.sql.hbase.CheckDirProtos threw java.io.IOException: Configured class org.apache.spark.sql.hbase.CheckDirProtos must implement org.apache.hadoop.hbase.Coprocessor interface

Using the value org.apache.spark.sql.hbase.CheckDirService instead leads to a "ClassNotFoundException".
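One quick way to avoid guessing at class names is to check which classes the deployed jar actually packages before pointing hbase.coprocessor.region.classes at one; a small sketch (the jar path is whatever you deployed):

```python
# Check whether a fully qualified class name is packaged inside a jar, to
# avoid registering a class that would raise ClassNotFoundException.
import zipfile

def jar_contains_class(jar_path, class_name):
    """Return True if class_name is present as a .class entry in the jar."""
    entry = class_name.replace(".", "/") + ".class"
    with zipfile.ZipFile(jar_path) as jar:
        return entry in jar.namelist()
```

For example, jar_contains_class("spark-sql-on-hbase-1.0.0.jar", "org.apache.spark.sql.hbase.CheckDirEndPointImpl") tells you whether that candidate value is even loadable.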

Should I add the hbase.coprocessor.region.classes setting to hbase-site.xml?

xinyunh commented 9 years ago

Hi @secfree,

No, you don't need to. I will check it later.

xinyunh commented 9 years ago

Hi @secfree,

It seems that you registered the wrong class. :) Please change it to:

    <property>
        <name>hbase.coprocessor.region.classes</name>
        <value>org.apache.spark.sql.hbase.CheckDirEndPointImpl</value>
    </property>

secfree commented 9 years ago

Yes, it's OK now.

Thanks.