XimfengYao opened this issue 1 year ago
It seems that you need to use a higher version of the JDK, such as JDK 11.
DataSophon v1.2.0 runs on jdk1.8_u333, and I haven't changed that. Can I change it?
HBase requires a higher JDK version; you can configure HBase to use a higher JDK than the rest of the stack.
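For example, you can point HBase at its own JDK in conf/hbase-env.sh without touching the system-wide default. A minimal sketch, assuming an OpenJDK 11 install under /usr/lib/jvm (the exact path is an example; adjust to your host):

    # conf/hbase-env.sh -- give HBase its own JDK, independent of the
    # JDK used by the rest of the cluster; the path is an example.
    export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-11.0.20.0.8-1.el7_9.x86_64

Restart the HMaster and RegionServer processes afterwards so they pick up the new JAVA_HOME.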
Sorry, it also fails on OpenJDK 11. Here is the error:
2023-11-09 14:50:50,542 INFO [main] http.SecurityHeadersFilter: Added security headers filter
2023-11-09 14:50:50,563 INFO [main] handler.ContextHandler: Started o.a.h.t.o.e.j.w.WebAppContext@231cdda8{master,/,file:///opt/datasophon/hbase-2.4.16/hbase-webapps/master/,AVAILABLE}{file:/opt/datasophon/hbase-2.4.16/hbase-webapps/master}
2023-11-09 14:50:50,590 INFO [main] server.AbstractConnector: Started ServerConnector@195113de{HTTP/1.1, (http/1.1)}{0.0.0.0:16010}
2023-11-09 14:50:50,590 INFO [main] server.Server: Started @4845ms
2023-11-09 14:50:50,594 INFO [main] master.HMaster: hbase.rootdir=hdfs://nameservice1/hbase, hbase.cluster.distributed=true
2023-11-09 14:50:50,621 INFO [master/ddh1:16000:becomeActiveMaster] master.HMaster: Adding backup master ZNode /hbase/backup-masters/ddh1,16000,1699512647665
2023-11-09 14:50:50,690 INFO [master/ddh1:16000:becomeActiveMaster] master.ActiveMasterManager: Another master is the active master, ddh2,16000,1699512645601; waiting to become the next active master
2023-11-09 14:51:06,449 WARN [prometheus-http-1-2] util.FSUtils: Cluster ID file does not exist at hdfs://nameservice1/hbase/hbase.id
2023-11-09 14:51:19,265 INFO [master/ddh1:16000:becomeActiveMaster] master.ActiveMasterManager: Deleting ZNode for /hbase/backup-masters/ddh1,16000,1699512647665 from backup master directory
2023-11-09 14:51:19,270 INFO [master/ddh1:16000:becomeActiveMaster] master.ActiveMasterManager: Registered as active master=ddh1,16000,1699512647665
2023-11-09 14:51:19,275 INFO [master/ddh1:16000:becomeActiveMaster] regionserver.ChunkCreator: Allocating data MemStoreChunkPool with chunk size 2 MB, max count 1421, initial count 0
2023-11-09 14:51:19,277 INFO [master/ddh1:16000:becomeActiveMaster] regionserver.ChunkCreator: Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 1579, initial count 0
2023-11-09 14:51:19,449 INFO [Thread-20] hdfs.DataStreamer: Exception in createBlockOutputStream
java.io.IOException: Invalid token in javax.security.sasl.qop: DI
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.readSaslMessage(DataTransferSaslUtil.java:220)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.doSaslHandshake(SaslDataTransferClient.java:553)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.getSaslStreams(SaslDataTransferClient.java:455)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.send(SaslDataTransferClient.java:298)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:245)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:203)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:193)
at org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(DataStreamer.java:1705)
at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1655)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:710)
2023-11-09 14:51:19,450 WARN [Thread-20] hdfs.DataStreamer: Abandoning BP-676574098-192.168.0.81-1699408844223:blk_1073742933_2109
2023-11-09 14:51:19,458 WARN [Thread-20] hdfs.DataStreamer: Excluding datanode DatanodeInfoWithStorage[192.168.0.80:1026,DS-4a4fd1ea-3c8b-4cf7-b5a4-b4ae232b65fe,DISK]
2023-11-09 14:51:19,476 INFO [Thread-20] hdfs.DataStreamer: Exception in createBlockOutputStream
java.io.IOException: Invalid token in javax.security.sasl.qop: D
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.readSaslMessage(DataTransferSaslUtil.java:220)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.doSaslHandshake(SaslDataTransferClient.java:553)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.getSaslStreams(SaslDataTransferClient.java:455)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.send(SaslDataTransferClient.java:298)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:245)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:203)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:193)
at org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(DataStreamer.java:1705)
at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1655)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:710)
2023-11-09 14:51:19,476 WARN [Thread-20] hdfs.DataStreamer: Abandoning BP-676574098-192.168.0.81-1699408844223:blk_1073742934_2110
2023-11-09 14:51:19,480 WARN [Thread-20] hdfs.DataStreamer: Excluding datanode DatanodeInfoWithStorage[192.168.0.81:1026,DS-d0a7f431-5561-4293-8fd3-db299c7387d9,DISK]
2023-11-09 14:51:19,492 INFO [Thread-20] hdfs.DataStreamer: Exception in createBlockOutputStream
java.io.IOException: Invalid token in javax.security.sasl.qop:
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.readSaslMessage(DataTransferSaslUtil.java:220)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.doSaslHandshake(SaslDataTransferClient.java:553)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.getSaslStreams(SaslDataTransferClient.java:455)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.send(SaslDataTransferClient.java:298)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:245)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:203)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:193)
at org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(DataStreamer.java:1705)
at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1655)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:710)
2023-11-09 14:51:19,493 WARN [Thread-20] hdfs.DataStreamer: Abandoning BP-676574098-192.168.0.81-1699408844223:blk_1073742935_2111
2023-11-09 14:51:19,496 WARN [Thread-20] hdfs.DataStreamer: Excluding datanode DatanodeInfoWithStorage[192.168.0.82:1026,DS-894ec179-eaeb-4ded-8acc-c14289e5376c,DISK]
2023-11-09 14:51:19,510 WARN [Thread-20] hdfs.DataStreamer: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/.tmp/hbase.version could only be written to 0 of the 1 minReplication nodes. There are 3 datanode(s) running and 3 node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2315)
at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2960)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:904)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:593)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:604)
at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:572)
at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:556)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1093)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1043)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:971)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2976)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540)
at org.apache.hadoop.ipc.Client.call(Client.java:1486)
at org.apache.hadoop.ipc.Client.call(Client.java:1385)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
at com.sun.proxy.$Proxy20.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:448)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
at com.sun.proxy.$Proxy21.addBlock(Unknown Source)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
at com.sun.proxy.$Proxy22.addBlock(Unknown Source)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
at com.sun.proxy.$Proxy22.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1846)
at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1645)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:710)
2023-11-09 14:51:20,844 INFO [zk-event-processor-pool-0] zookeeper.ZKLeaderManager: Leader change, but no new leader found
2023-11-09 14:51:20,849 INFO [zk-event-processor-pool-0] zookeeper.ZKLeaderManager: Found new leader for znode: /hbase/tokenauth/keymaster
2023-11-09 14:51:21,342 WARN [prometheus-http-1-3] util.FSUtils: Cluster ID file does not exist at hdfs://nameservice1/hbase/hbase.id
2023-11-09 14:51:29,553 INFO [Thread-22] hdfs.DataStreamer: Exception in createBlockOutputStream
java.io.IOException: Invalid token in javax.security.sasl.qop: DI
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.readSaslMessage(DataTransferSaslUtil.java:220)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.doSaslHandshake(SaslDataTransferClient.java:553)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.getSaslStreams(SaslDataTransferClient.java:455)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.send(SaslDataTransferClient.java:298)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:245)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:203)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:193)
at org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(DataStreamer.java:1705)
at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1655)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:710)
2023-11-09 14:51:29,553 WARN [Thread-22] hdfs.DataStreamer: Abandoning BP-676574098-192.168.0.81-1699408844223:blk_1073742936_2112
2023-11-09 14:51:29,557 WARN [Thread-22] hdfs.DataStreamer: Excluding datanode DatanodeInfoWithStorage[192.168.0.80:1026,DS-4a4fd1ea-3c8b-4cf7-b5a4-b4ae232b65fe,DISK]
2023-11-09 14:51:29,568 INFO [Thread-22] hdfs.DataStreamer: Exception in createBlockOutputStream
java.io.IOException: Invalid token in javax.security.sasl.qop: D
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.readSaslMessage(DataTransferSaslUtil.java:220)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.doSaslHandshake(SaslDataTransferClient.java:553)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.getSaslStreams(SaslDataTransferClient.java:455)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.send(SaslDataTransferClient.java:298)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:245)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:203)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:193)
at org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(DataStreamer.java:1705)
at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1655)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:710)
2023-11-09 14:51:29,568 WARN [Thread-22] hdfs.DataStreamer: Abandoning BP-676574098-192.168.0.81-1699408844223:blk_1073742937_2113
2023-11-09 14:51:29,573 WARN [Thread-22] hdfs.DataStreamer: Excluding datanode DatanodeInfoWithStorage[192.168.0.81:1026,DS-d0a7f431-5561-4293-8fd3-db299c7387d9,DISK]
2023-11-09 14:51:29,582 INFO [Thread-22] hdfs.DataStreamer: Exception in createBlockOutputStream
java.io.IOException: Invalid token in javax.security.sasl.qop:
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.readSaslMessage(DataTransferSaslUtil.java:220)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.doSaslHandshake(SaslDataTransferClient.java:553)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.getSaslStreams(SaslDataTransferClient.java:455)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.send(SaslDataTransferClient.java:298)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:245)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:203)
at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:193)
at org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(DataStreamer.java:1705)
at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1655)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:710)
2023-11-09 14:51:29,582 WARN [Thread-22] hdfs.DataStreamer: Abandoning BP-676574098-192.168.0.81-1699408844223:blk_1073742938_2114
2023-11-09 14:51:29,586 WARN [Thread-22] hdfs.DataStreamer: Excluding datanode DatanodeInfoWithStorage[192.168.0.82:1026,DS-894ec179-eaeb-4ded-8acc-c14289e5376c,DISK]
2023-11-09 14:51:29,589 WARN [Thread-22] hdfs.DataStreamer: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/.tmp/hbase.version could only be written to 0 of the 1 minReplication nodes. There are 3 datanode(s) running and 3 node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2315)
at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2960)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:904)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:593)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:604)
at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:572)
at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:556)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1093)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1043)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:971)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2976)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540)
at org.apache.hadoop.ipc.Client.call(Client.java:1486)
at org.apache.hadoop.ipc.Client.call(Client.java:1385)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
at com.sun.proxy.$Proxy20.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:448)
at jdk.internal.reflect.GeneratedMethodAccessor45.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
at com.sun.proxy.$Proxy21.addBlock(Unknown Source)
at jdk.internal.reflect.GeneratedMethodAccessor45.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
at com.sun.proxy.$Proxy22.addBlock(Unknown Source)
at jdk.internal.reflect.GeneratedMethodAccessor45.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
at com.sun.proxy.$Proxy22.addBlock(Unknown Source)
at org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1846)
at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1645)
at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:710)
2023-11-09 14:51:36,329 WARN [prometheus-http-1-4] util.FSUtils: Cluster ID file does not exist at hdfs://nameservice1/hbase/hbase.id
2023-11-09 14:53:42,787 INFO [main] master.HMaster: STARTING service HMaster
2023-11-09 14:53:42,788 INFO [main] util.VersionInfo: HBase 2.4.16
2023-11-09 14:53:42,789 INFO [main] util.VersionInfo: Source code repository git://17342ca4031d/home/zhangduo/hbase-rm/output/hbase revision=d1714710877653691e2125bd94b68a5b484a3a06
2023-11-09 14:53:42,789 INFO [main] util.VersionInfo: Compiled by zhangduo on Wed Feb 1 09:46:35 UTC 2023
2023-11-09 14:53:42,789 INFO [main] util.VersionInfo: From source with checksum 1ca7bcc2d1de1933beaeb5a1c380582712f11ed1bb1863308703335f7e230127010b1836d4b73df8f5a3baf6bbe4b33dbf7fcec2b28512d7acf5055d00d0c06b
2023-11-09 14:53:42,930 INFO [main] util.ServerCommandLine: hbase.tmp.dir: /tmp/hbase-hbase
2023-11-09 14:53:42,930 INFO [main] util.ServerCommandLine: hbase.rootdir: /hbase
2023-11-09 14:53:42,930 INFO [main] util.ServerCommandLine: hbase.cluster.distributed: true
2023-11-09 14:53:42,930 INFO [main] util.ServerCommandLine: hbase.zookeeper.quorum: ddh1:2181,ddh2:2181,ddh3:2181
2023-11-09 14:53:42,935 INFO [main] util.ServerCommandLine: env:HBASE_LOGFILE=hbase-hbase-master-ddh3.log
2023-11-09 14:53:42,935 INFO [main] util.ServerCommandLine: env:PATH=/sbin:/bin:/usr/sbin:/usr/bin
2023-11-09 14:53:42,935 INFO [main] util.ServerCommandLine: env:HBASE_PID_DIR=/opt/datasophon/hbase-2.4.16/pid
2023-11-09 14:53:42,936 INFO [main] util.ServerCommandLine: env:HISTSIZE=1000
2023-11-09 14:53:42,936 INFO [main] util.ServerCommandLine: env:JAVA_HOME=/usr/lib/jvm/java-11-openjdk-11.0.20.0.8-1.el7_9.x86_64
2023-11-09 14:53:42,936 INFO [main] util.ServerCommandLine: env:TERM=unknown
2023-11-09 14:53:42,936 INFO [main] util.ServerCommandLine: env:LANG=zh_CN.UTF-8
2023-11-09 14:53:42,936 INFO [main] util.ServerCommandLine: env:XDG_SESSION_ID=c32
2023-11-09 14:53:42,936 INFO [main] util.ServerCommandLine: env:SUDO_USER=root
2023-11-09 14:53:42,936 INFO [main] util.ServerCommandLine: env:SUDO_GID=0
2023-11-09 14:53:42,936 INFO [main] util.ServerCommandLine: env:MAIL=/var/spool/mail/root
2023-11-09 14:53:42,936 INFO [main] util.ServerCommandLine: env:USERNAME=hbase
2023-11-09 14:53:42,936 INFO [main] util.ServerCommandLine: env:LOGNAME=hbase
2023-11-09 14:53:42,936 INFO [main] util.ServerCommandLine: env:JVM_PID=19777
2023-11-09 14:53:42,936 INFO [main] util.ServerCommandLine: env:HBASE_REST_OPTS=
2023-11-09 14:53:42,936 INFO [main] util.ServerCommandLine: env:PWD=/opt/datasophon/hbase-2.4.16
2023-11-09 14:53:42,936 INFO [main] util.ServerCommandLine: env:SUDO_UID=0
2023-11-09 14:53:42,936 INFO [main] util.ServerCommandLine: env:HBASE_ROOT_LOGGER=INFO,RFA
2023-11-09 14:53:42,936 INFO [main] util.ServerCommandLine: env:SHELL=/bin/bash
2023-11-09 14:53:42,936 INFO [main] util.ServerCommandLine: env:HBASE_ENV_INIT=true
2023-11-09 14:53:42,937 INFO [main] util.ServerCommandLine: env:HBASE_IDENT_STRING=hbase
2023-11-09 14:53:42,937 INFO [main] util.ServerCommandLine: env:HBASE_ZNODE_FILE=/opt/datasophon/hbase-2.4.16/pid/hbase-hbase-master.znode
2023-11-09 14:53:42,937 INFO [main] util.ServerCommandLine: env:HBASE_LOG_PREFIX=hbase-hbase-master-ddh3
2023-11-09 14:53:42,937 INFO [main] util.ServerCommandLine: env:HBASE_LOG_DIR=/opt/datasophon/hbase-2.4.16/bin/../logs
2023-11-09 14:53:42,937 INFO [main] util.ServerCommandLine: env:USER=hbase
2023-11-09 14:53:42,937 INFO [main] util.ServerCommandLine: 1/pfl-asm-4.0.1.jar:/opt/datasophon/hbase-2.4.16/bin/../lib/jdk11/pfl-basic-4.0.1.jar:/opt/datasophon/hbase-2.4.16/bin/../lib/jdk11/pfl-basic-tools-4.0.1.jar:/opt/datasophon/hbase-2.4.16/bin/../lib/jdk11/pfl-dynamic-4.0.1.jar:/opt/datasophon/hbase-2.4.16/bin/../lib/jdk11/pfl-tf-4.0.1.jar:/opt/datasophon/hbase-2.4.16/bin/../lib/jdk11/pfl-tf-tools-4.0.1.jar:/opt/datasophon/hbase-2.4.16/bin/../lib/jdk11/policy-2.7.6.jar:/opt/datasophon/hbase-2.4.16/bin/../lib/jdk11/release-documentation-2.3.2-docbook.zip:/opt/datasophon/hbase-2.4.16/bin/../lib/jdk11/saaj-impl-1.5.1.jar:/opt/datasophon/hbase-2.4.16/bin/../lib/jdk11/samples-2.3.2.zip:/opt/datasophon/hbase-2.4.16/bin/../lib/jdk11/sdo-eclipselink-plugin-2.3.2.jar:/opt/datasophon/hbase-2.4.16/bin/../lib/jdk11/stax-ex-1.8.1.jar:/opt/datasophon/hbase-2.4.16/bin/../lib/jdk11/streambuffer-1.5.7.jar:/opt/datasophon/hbase-2.4.16/bin/../lib/jdk11/txw2-2.3.2.jar:/opt/datasophon/hbase-2.4.16/bin/../lib/client-facing-thirdparty/slf4j-reload4j-1.7.33.jar
2023-11-09 14:53:42,937 INFO [main] util.ServerCommandLine: env:SUDO_COMMAND=/bin/bash bin/hbase-daemon.sh start master
2023-11-09 14:53:42,937 INFO [main] util.ServerCommandLine: env:HBASE_AUTOSTART_FILE=/opt/datasophon/hbase-2.4.16/pid/hbase-hbase-master.autostart
2023-11-09 14:53:42,937 INFO [main] util.ServerCommandLine: env:SED=sed
2023-11-09 14:53:42,937 INFO [main] util.ServerCommandLine: env:HOSTNAME=ddh3
2023-11-09 14:53:42,937 INFO [main] util.ServerCommandLine: env:GREP=grep
2023-11-09 14:53:42,937 INFO [main] util.ServerCommandLine: env:HBASE_NICENESS=0
2023-11-09 14:53:42,938 INFO [main] util.ServerCommandLine: env:HBASE_OPTS= -XX:+UseConcMarkSweepGC -Djava.security.auth.login.config=/opt/datasophon/hbase-2.4.16/conf/zk-jaas.conf -Djava.util.logging.config.class=org.apache.hadoop.hbase.logging.JulToSlf4jInitializer -javaagent:/opt/datasophon/hbase-2.4.16/bin/../jmx/jmx_prometheus_javaagent-0.16.1.jar=16100:/opt/datasophon/hbase-2.4.16/bin/../jmx/hbase_jmx_config.yaml -Dhbase.log.dir=/opt/datasophon/hbase-2.4.16/bin/../logs -Dhbase.log.file=hbase-hbase-master-ddh3.log -Dhbase.home.dir=/opt/datasophon/hbase-2.4.16/bin/.. -Dhbase.id.str=hbase -Dhbase.root.logger=INFO,RFA -Dhbase.security.logger=INFO,RFAS
2023-11-09 14:53:42,938 INFO [main] util.ServerCommandLine: env:HBASE_SECURITY_LOGGER=INFO,RFAS
2023-11-09 14:53:42,938 INFO [main] util.ServerCommandLine: env:XDG_RUNTIME_DIR=/run/user/1005
2023-11-09 14:53:42,938 INFO [main] util.ServerCommandLine: env:HBASE_THRIFT_OPTS=
2023-11-09 14:53:42,938 INFO [main] util.ServerCommandLine: env:HBASE_HOME=/opt/datasophon/hbase-2.4.16/bin/..
2023-11-09 14:53:42,938 INFO [main] util.ServerCommandLine: env:HOME=/home/hbase
2023-11-09 14:53:42,938 INFO [main] util.ServerCommandLine: env:SHLVL=2
2023-11-09 14:53:42,938 INFO [main] util.ServerCommandLine: env:MALLOC_ARENA_MAX=4
2023-11-09 14:53:42,938 INFO [main] util.ServerCommandLine: vmName=OpenJDK 64-Bit Server VM, vmVendor=Red Hat, Inc., vmVersion=11.0.20+8-LTS
2023-11-09 14:53:42,938 INFO [main] util.ServerCommandLine: vmInputArguments=[-Dproc_master, -XX:OnOutOfMemoryError=kill -9 %p, -XX:+UseConcMarkSweepGC, -Djava.security.auth.login.config=/opt/datasophon/hbase-2.4.16/conf/zk-jaas.conf, -Djava.util.logging.config.class=org.apache.hadoop.hbase.logging.JulToSlf4jInitializer, -javaagent:/opt/datasophon/hbase-2.4.16/bin/../jmx/jmx_prometheus_javaagent-0.16.1.jar=16100:/opt/datasophon/hbase-2.4.16/bin/../jmx/hbase_jmx_config.yaml, -Dhbase.log.dir=/opt/datasophon/hbase-2.4.16/bin/../logs, -Dhbase.log.file=hbase-hbase-master-ddh3.log, -Dhbase.home.dir=/opt/datasophon/hbase-2.4.16/bin/.., -Dhbase.id.str=hbase, -Dhbase.root.logger=INFO,RFA, -Dhbase.security.logger=INFO,RFAS]
2023-11-09 14:53:43,327 INFO [main] metrics.MetricRegistries: Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl
2023-11-09 14:53:43,367 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2023-11-09 14:53:43,788 INFO [main] regionserver.RSRpcServices: master/ddh3:16000 server-side Connection retries=45
2023-11-09 14:53:43,814 INFO [main] ipc.RpcExecutor: Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=3, maxQueueLength=300, handlerCount=30
2023-11-09 14:53:43,816 INFO [main] ipc.RpcExecutor: Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=300, handlerCount=20
2023-11-09 14:53:43,816 INFO [main] ipc.RWQueueRpcExecutor: priority.RWQ.Fifo writeQueues=1 writeHandlers=2 readQueues=1 readHandlers=18 scanQueues=0 scanHandlers=0
2023-11-09 14:53:43,816 INFO [main] ipc.RpcExecutor: Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=300, handlerCount=3
2023-11-09 14:53:43,816 INFO [main] ipc.RpcExecutor: Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=300, handlerCount=1
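For what it's worth, the repeated "Invalid token in javax.security.sasl.qop" failures during block writes usually point at a SASL data-transfer mismatch rather than at HBase itself: one side attempts a SASL handshake while the other answers with the plain data-transfer protocol, so the client misreads the reply as a garbled SASL token, which is consistent with the truncated qop values (DI, D, empty) above. A quick check (a sketch, assuming the hdfs CLI is on the PATH) is to compare the protection settings on every node:

    # Print the effective data-transfer and RPC protection levels; run on
    # each DataNode host and on the HBase hosts -- the values must agree.
    hdfs getconf -confKey dfs.data.transfer.protection
    hdfs getconf -confKey hadoop.rpc.protection

If dfs.data.transfer.protection is unset or differs on any node, align it (for example, authentication or privacy) in hdfs-site.xml and restart the DataNodes.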
Perhaps you can replace HBase 2.4.16 with HBase 2.0.2.
What happened
Version: 1.2.0. Problem: Hadoop is running with Kerberos enabled, and HBase is also Kerberized, but the HBase Master fails to start. HBase Master's errors:
NameNode's errors:
DataNode's errors:
What you expected to happen
All of the errors are shown under "What happened" above.
How to reproduce
Enable Kerberos (enableKerberos) for HBase and HDFS.
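Before digging into HBase, it can help to confirm that the hbase principal can authenticate and read its HDFS root at all. A hedged sketch; the keytab path and principal below are hypothetical and must match your cluster:

    # Hypothetical keytab path and principal -- substitute your own values.
    kinit -kt /etc/security/keytabs/hbase.service.keytab hbase/ddh1@HADOOP.COM
    # If the ticket is granted, a plain HDFS read against the HBase root
    # directory should succeed as well:
    hdfs dfs -ls /hbase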
Anything else
No.
Version
main