Open jainshashank24 opened 3 years ago
@jainshashank24: I'm facing a similar issue. Did it work in client mode? Which shc-core version are you using, and what dependencies did you add?
Thanks
Hi @beejay19, yes, it did work in client mode, but in cluster mode it's not working. I used this version of the jar: `shc-core-1.1.0.3.1.0.0-78.jar`.
The Spark application, when run in cluster mode, is unable to read from or write to HBase on an HDP 3.1 cluster. The job launches in YARN, and after some time the driver shows the following error:
```
2020-12-01 09:40:23 [DEBUG] [org.apache.hadoop.hbase.client.ConnectionImplementation:919] - locateRegionInMeta parentTable='hbase:meta', attempt=0 of 36 failed; retrying after sleep of 36
org.apache.hadoop.hbase.client.RetriesExhaustedException: Failed after attempts=36, exceptions:
Tue Dec 01 09:40:23 UTC 2020, null, java.net.SocketTimeoutException: callTimeout=60000, callDuration=60528: Call to hdp-slv-01.hadoop-store.back.christine.info/10.181.66.21:16020 failed on local exception: org.apache.hadoop.hbase.ipc.CallTimeoutException: Call id=0, waitTime=60227, rpcTimeout=59994
row 't,,99999999999999' on table 'hbase:meta' at region=hbase:meta,,1.1588230740, hostname=hdp-slv-01.hadoop-store.back.christine.info,16020,1606739459240, seqNum=-1
```
Can someone help identify the issue?
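The thread doesn't give enough detail to be definitive, but a frequent cause of exactly this symptom (works in client mode, meta-lookup timeouts in cluster mode) is that in cluster mode the driver runs on an arbitrary YARN node where `hbase-site.xml` is not on the classpath, so the HBase client falls back to default ZooKeeper/meta settings. Shipping the config with the job, e.g. `spark-submit --master yarn --deploy-mode cluster --files /etc/hbase/conf/hbase-site.xml ...`, often resolves it. For context, a minimal sketch of how shc-core is typically driven from PySpark; the table and column names below are hypothetical, not from this thread:

```python
import json

# Hypothetical SHC catalog: maps DataFrame columns to an HBase table.
# Namespace, table name, and column family here are placeholders.
catalog = json.dumps({
    "table": {"namespace": "default", "name": "t"},
    "rowkey": "key",
    "columns": {
        "key":   {"cf": "rowkey", "col": "key",   "type": "string"},
        "value": {"cf": "cf1",    "col": "value", "type": "string"},
    },
})

# With a SparkSession `spark` available, the read would look like this
# (not executed here, since it needs a live cluster):
# df = (spark.read
#         .options(catalog=catalog)
#         .format("org.apache.spark.sql.execution.datasources.hbase")
#         .load())
print(catalog)
```

Note that the same code only works in cluster mode if `hbase-site.xml` actually reaches the driver and executors, which is why the `--files` flag (or placing the file on the executor classpath) matters here.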