Open · Aline1994 opened this issue 6 years ago
If anyone has had the same problem and fixed it, please tell me how you solved it.
Not sure if you're still stuck on this, but that error reads like the HBase table OpenTSDB is looking for isn't available. Did you set up the tables properly?
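If it helps, a quick way to confirm the tables exist is from the HBase shell. This is just a sanity-check sketch assuming the default table names that create_table.sh creates (tsdb, tsdb-uid, tsdb-tree, tsdb-meta); substitute your own names if you overrode them in opentsdb.conf:

# Sanity check from the HBase shell (default OpenTSDB table names assumed)
hbase shell <<'EOF'
list
exists 'tsdb'
exists 'tsdb-uid'
exists 'tsdb-tree'
exists 'tsdb-meta'
EOF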
same here...
Just commenting here in case anyone else hits this issue. I had the same error even though I had already run the create_table.sh script and it appeared to complete. Once I started OpenTSDB, it acted as if I had not done this, so I ran the script again as outlined here:
env COMPRESSION=NONE HBASE_HOME=/opt/hbase/hbase-2.4.15 ./src/create_table.sh
And then, mysteriously, it worked the next time I tried to run OpenTSDB?!?
. . . 13:16:03.429 [main] INFO net.opentsdb.tools.TSDMain - Ready to serve on /0.0.0.0:4242
This did not inspire confidence, so I stopped HBase, restarted it, and ran OpenTSDB again, and it still works. Maybe someone else can illuminate why this occurs, but running the create_table.sh script a second time seems to fix it.
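For what it's worth, the NotServingRegionException in the trace below hints that the table existed but its region wasn't online on any region server, in which case re-running the script (or restarting HBase) may simply have nudged the region back into an assigned state. A rough way to check from the HBase shell, again assuming the default table names:

# Check that the tsdb tables are enabled and see which regions are assigned where
# (default table names assumed; adjust to match your opentsdb.conf)
hbase shell <<'EOF'
is_enabled 'tsdb'
is_enabled 'tsdb-uid'
is_enabled 'tsdb-tree'
is_enabled 'tsdb-meta'
status 'detailed'
EOF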
Exception in thread "main" java.lang.RuntimeException: Initialization failed
    at net.opentsdb.tools.TSDMain.main(TSDMain.java:237)
Caused by: com.stumbleupon.async.DeferredGroupException: At least one of the Deferreds failed, first exception:
    at com.stumbleupon.async.DeferredGroup.done(DeferredGroup.java:169)
    at com.stumbleupon.async.DeferredGroup.recordCompletion(DeferredGroup.java:142)
    at com.stumbleupon.async.DeferredGroup.access$000(DeferredGroup.java:36)
    at com.stumbleupon.async.DeferredGroup$1Notify.call(DeferredGroup.java:82)
    at com.stumbleupon.async.Deferred.doCall(Deferred.java:1278)
    at com.stumbleupon.async.Deferred.runCallbacks(Deferred.java:1257)
    at com.stumbleupon.async.Deferred.access$300(Deferred.java:430)
    at com.stumbleupon.async.Deferred$Continue.call(Deferred.java:1366)
    at com.stumbleupon.async.Deferred.doCall(Deferred.java:1278)
    at com.stumbleupon.async.Deferred.runCallbacks(Deferred.java:1257)
    at com.stumbleupon.async.Deferred.access$300(Deferred.java:430)
    at com.stumbleupon.async.Deferred$Continue.call(Deferred.java:1366)
    at com.stumbleupon.async.Deferred.doCall(Deferred.java:1278)
    at com.stumbleupon.async.Deferred.runCallbacks(Deferred.java:1257)
    at com.stumbleupon.async.Deferred.callback(Deferred.java:1005)
    at org.hbase.async.HBaseRpc.callback(HBaseRpc.java:712)
    at org.hbase.async.HBaseClient.tooManyAttempts(HBaseClient.java:2058)
    at org.hbase.async.HBaseClient.handleNSRE(HBaseClient.java:2848)
    at org.hbase.async.RegionClient.decode(RegionClient.java:1527)
    at org.hbase.async.RegionClient.decode(RegionClient.java:88)
    at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:500)
    at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.hbase.async.RegionClient.handleUpstream(RegionClient.java:1223)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.channel.SimpleChannelHandler.messageReceived(SimpleChannelHandler.java:142)
    at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:88)
    at org.jboss.netty.handler.timeout.IdleStateAwareChannelHandler.handleUpstream(IdleStateAwareChannelHandler.java:36)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
    at org.jboss.netty.handler.timeout.IdleStateHandler.messageReceived(IdleStateHandler.java:294)
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
    at org.hbase.async.HBaseClient$RegionClientPipeline.sendUpstream(HBaseClient.java:3121)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
    at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.hbase.async.NonRecoverableException: Too many attempts: Exists(table="HBASE_ADMIN:HBASE_MONITOR_TSDB_TREE", key=[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 58, 65, 115, 121, 110, 99, 72, 66, 97, 115, 101, 126, 112, 114, 111, 98, 101, 126, 60, 59, 95, 60], family=null, qualifiers=null, attempt=11, region=RegionInfo(table="HBASE_MONITOR_TSDB_TREE", region_name="HBASE_ADMIN:HBASE_MONITOR_TSDB_TREE,,1531799980364.7634a80a7a992bdeeeeb6303b20d6090.", stop_key=""))
    at org.hbase.async.HBaseClient.tooManyAttempts(HBaseClient.java:2056)
    ... 31 more
Caused by: org.hbase.async.NotServingRegionException: org.apache.hadoop.hbase.NotServingRegionException: Region HBASE_ADMIN:HBASE_MONITOR_TSDB_TREE,,1531799980364.7634a80a7a992bdeeeeb6303b20d6090. is not online on bigdata-hbase-newpre02.gz01.diditaxi.com,60020,1535436514712
    at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegionByEncodedName(HRegionServer.java:2777)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.getRegion(HRegionServer.java:4445)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.get(HRegionServer.java:2909)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32488)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2301)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32488)
    at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2301)
    at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:104)
    at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
    at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
    at java.lang.Thread.run(Thread.java:745)
Caused by RPC: Exists(table="HBASE_ADMIN:HBASE_MONITOR_TSDB_TREE", key=[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 58, 65, 115, 121, 110, 99, 72, 66, 97, 115, 101, 126, 112, 114, 111, 98, 101, 126, 60, 59, 95, 60], family=null, qualifiers=null, attempt=0, region=RegionInfo(table="HBASE_MONITOR_TSDB_TREE", region_name="HBASE_ADMIN:HBASE_MONITOR_TSDB_TREE,,1531799980364.7634a80a7a992bdeeeeb6303b20d6090.", stop_key=""))
    at org.hbase.async.NotServingRegionException.make(NotServingRegionException.java:72)
    at org.hbase.async.NotServingRegionException.make(NotServingRegionException.java:33)
    at org.hbase.async.RegionClient.makeException(RegionClient.java:1753)
    at org.hbase.async.RegionClient.decodeException(RegionClient.java:1773)
    at org.hbase.async.RegionClient.decode(RegionClient.java:1485)
    ... 29 more