datavane / datasophon

The next generation of cloud-native big data management platform, aiming to help users rapidly build stable, efficient, and scalable cloud-native platforms for big data.
https://datasophon.github.io/datasophon-website/
Apache License 2.0

[Bug] [Module Name] v2.4.16 HBase with Kerberos didn't run #445

Open. XimfengYao opened this issue 10 months ago

XimfengYao commented 10 months ago

Search before asking

What happened

Version: 1.2.0. Problem: Hadoop is running with Kerberos enabled, and so is HBase, but HMaster fails to start. HMaster's errors:

2023-11-09 11:19:36,001 WARN  [Thread-27] hdfs.DataStreamer: Abandoning BP-676574098-192.168.0.81-1699408844223:blk_1073742568_1744
2023-11-09 11:19:36,005 WARN  [Thread-27] hdfs.DataStreamer: Excluding datanode DatanodeInfoWithStorage[192.168.0.81:1026,DS-d0a7f431-5561-4293-8fd3-db299c7387d9,DISK]
2023-11-09 11:19:36,008 WARN  [Thread-27] hdfs.DataStreamer: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/.tmp/hbase.version could only be written to 0 of the 1 minReplication nodes. There are 3 datanode(s) running and 3 node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2315)
    at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2960)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:904)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:593)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:604)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:572)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:556)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1093)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1043)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:971)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2976)

    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540)
    at org.apache.hadoop.ipc.Client.call(Client.java:1486)
    at org.apache.hadoop.ipc.Client.call(Client.java:1385)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
    at com.sun.proxy.$Proxy19.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:448)
    at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
    at com.sun.proxy.$Proxy20.addBlock(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
    at com.sun.proxy.$Proxy21.addBlock(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
    at com.sun.proxy.$Proxy21.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1846)
    at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1645)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:710)
2023-11-09 11:19:36,013 ERROR [master/ddh1:16000:becomeActiveMaster] master.HMaster: Failed to become active master
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/.tmp/hbase.version could only be written to 0 of the 1 minReplication nodes. There are 3 datanode(s) running and 3 node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2315)
    at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2960)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:904)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:593)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:604)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:572)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:556)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1093)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1043)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:971)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2976)

    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540)
    at org.apache.hadoop.ipc.Client.call(Client.java:1486)
    at org.apache.hadoop.ipc.Client.call(Client.java:1385)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
    at com.sun.proxy.$Proxy19.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:448)
    at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
    at com.sun.proxy.$Proxy20.addBlock(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
    at com.sun.proxy.$Proxy21.addBlock(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
    at com.sun.proxy.$Proxy21.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1846)
    at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1645)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:710)
2023-11-09 11:19:36,014 ERROR [master/ddh1:16000:becomeActiveMaster] master.HMaster: ***** ABORTING master ddh1,16000,1699499942514: Unhandled exception. Starting shutdown. *****
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/.tmp/hbase.version could only be written to 0 of the 1 minReplication nodes. There are 3 datanode(s) running and 3 node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2315)
    at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2960)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:904)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:593)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:604)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:572)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:556)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1093)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1043)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:971)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2976)

    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540)
    at org.apache.hadoop.ipc.Client.call(Client.java:1486)
    at org.apache.hadoop.ipc.Client.call(Client.java:1385)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
    at com.sun.proxy.$Proxy19.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:448)
    at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
    at com.sun.proxy.$Proxy20.addBlock(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
    at com.sun.proxy.$Proxy21.addBlock(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
    at com.sun.proxy.$Proxy21.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1846)
    at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1645)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:710)
2023-11-09 11:19:36,014 INFO  [master/ddh1:16000:becomeActiveMaster] regionserver.HRegionServer: ***** STOPPING region server 'ddh1,16000,1699499942514' *****
2023-11-09 11:19:36,014 INFO  [master/ddh1:16000:becomeActiveMaster] regionserver.HRegionServer: STOPPED: Stopped by master/ddh1:16000:becomeActiveMaster
2023-11-09 11:19:38,429 INFO  [master/ddh1:16000] ipc.NettyRpcServer: Stopping server on /192.168.0.80:16000
2023-11-09 11:19:38,430 INFO  [master/ddh1:16000] token.AuthenticationTokenSecretManager: Stopping leader election, because: SecretManager stopping
2023-11-09 11:19:38,440 INFO  [master/ddh1:16000] regionserver.HRegionServer: Stopping infoServer
2023-11-09 11:19:38,450 INFO  [master/ddh1:16000] handler.ContextHandler: Stopped o.a.h.t.o.e.j.w.WebAppContext@788ddc1f{master,/,null,STOPPED}{file:/opt/datasophon/hbase-2.4.16/hbase-webapps/master}
2023-11-09 11:19:38,458 INFO  [master/ddh1:16000] server.AbstractConnector: Stopped ServerConnector@261ea657{HTTP/1.1, (http/1.1)}{0.0.0.0:16010}
2023-11-09 11:19:38,459 INFO  [master/ddh1:16000] server.session: node0 Stopped scavenging
2023-11-09 11:19:38,460 INFO  [master/ddh1:16000] handler.ContextHandler: Stopped o.a.h.t.o.e.j.s.ServletContextHandler@109a2025{static,/static,file:///opt/datasophon/hbase-2.4.16/hbase-webapps/static/,STOPPED}
2023-11-09 11:19:38,460 INFO  [master/ddh1:16000] handler.ContextHandler: Stopped o.a.h.t.o.e.j.s.ServletContextHandler@6f3e19b3{logs,/logs,file:///opt/datasophon/hbase-2.4.16/logs/,STOPPED}
2023-11-09 11:19:38,471 INFO  [master/ddh1:16000] regionserver.HRegionServer: aborting server ddh1,16000,1699499942514
2023-11-09 11:19:38,471 INFO  [master/ddh1:16000] regionserver.HRegionServer: stopping server ddh1,16000,1699499942514; all regions closed.
2023-11-09 11:19:38,471 INFO  [master/ddh1:16000] hbase.ChoreService: Chore service for: master/ddh1:16000 had [] on shutdown
2023-11-09 11:19:38,474 WARN  [master/ddh1:16000] master.ActiveMasterManager: Failed get of master address: java.io.IOException: Can't get master address from ZooKeeper; znode data == null
2023-11-09 11:19:38,585 ERROR [main-EventThread] zookeeper.ClientCnxn: Error while calling watcher 
java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@6378ecf4 rejected from java.util.concurrent.ThreadPoolExecutor@65b17a70[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 9]
    at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
    at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
    at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
    at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112)
    at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678)
    at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:602)
    at org.apache.hadoop.hbase.zookeeper.PendingWatcher.process(PendingWatcher.java:38)
    at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535)
    at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510)
2023-11-09 11:19:38,586 ERROR [main-EventThread] zookeeper.ClientCnxn: Error while calling watcher 
java.util.concurrent.RejectedExecutionException: Task java.util.concurrent.FutureTask@34823a9b rejected from java.util.concurrent.ThreadPoolExecutor@65b17a70[Terminated, pool size = 0, active threads = 0, queued tasks = 0, completed tasks = 9]
    at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:2063)
    at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:830)
    at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1379)
    at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112)
    at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:678)
    at org.apache.hadoop.hbase.zookeeper.ZKWatcher.process(ZKWatcher.java:602)
    at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:535)
    at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:510)
2023-11-09 11:19:38,587 INFO  [main-EventThread] zookeeper.ClientCnxn: EventThread shut down for session: 0x2000025effe002a
2023-11-09 11:19:38,587 WARN  [Thread-8] zookeeper.Login: TGT renewal thread has been interrupted and will exit.
2023-11-09 11:19:38,587 INFO  [master/ddh1:16000] zookeeper.ZooKeeper: Session: 0x2000025effe002a closed
2023-11-09 11:19:38,587 INFO  [master/ddh1:16000] regionserver.HRegionServer: Exiting; stopping=ddh1,16000,1699499942514; zookeeper connection closed.
2023-11-09 11:19:38,588 ERROR [main] master.HMasterCommandLine: Master exiting
java.lang.RuntimeException: HMaster Aborted
    at org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:254)
    at org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:145)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
    at org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:140)
    at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:2946)
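For what it's worth, the failing operation is an ordinary HDFS client write to /hbase/.tmp/hbase.version, so it can be probed without HBase at all. A minimal sketch, assuming an hbase service principal and keytab path that are illustrative rather than taken from this cluster:

# As the hbase user, obtain a ticket and retry the same write path directly.
kinit -kt /etc/security/keytabs/hbase.service.keytab hbase/ddh1@HADOOP.COM  # illustrative keytab/principal
echo probe | hdfs dfs -put - /hbase/.tmp/write-probe
hdfs dfs -rm -skipTrash /hbase/.tmp/write-probe
# If the put fails with the same "could only be written to 0 of the 1
# minReplication nodes" error, the fault is in the HDFS/SASL layer, not HBase.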

NameNode's errors:

2023-11-09 11:45:33,888 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Not enough replicas was chosen. Reason: {NO_REQUIRED_STORAGE_TYPE=1}
2023-11-09 11:45:33,888 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Not enough replicas was chosen. Reason: {NO_REQUIRED_STORAGE_TYPE=1}
2023-11-09 11:45:33,889 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073742569_1745, replicas=192.168.0.80:1026, 192.168.0.81:1026, 192.168.0.82:1026 for /hbase/.tmp/hbase.version
2023-11-09 11:45:33,970 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Not enough replicas was chosen. Reason: {NO_REQUIRED_STORAGE_TYPE=1}
2023-11-09 11:45:33,971 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Not enough replicas was chosen. Reason: {NO_REQUIRED_STORAGE_TYPE=1}
2023-11-09 11:45:33,971 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Not enough replicas was chosen. Reason: {NO_REQUIRED_STORAGE_TYPE=1}
2023-11-09 11:45:33,971 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 3 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology
2023-11-09 11:45:33,971 WARN org.apache.hadoop.hdfs.protocol.BlockStoragePolicy: Failed to place enough replicas: expected size is 1 but only 0 storage types can be selected (replication=3, selected=[], unavailable=[DISK], removed=[DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
2023-11-09 11:45:33,971 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 1 to reach 3 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable:  unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
2023-11-09 11:45:33,971 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073742570_1746, replicas=192.168.0.81:1026, 192.168.0.82:1026 for /hbase/.tmp/hbase.version
2023-11-09 11:45:33,992 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Not enough replicas was chosen. Reason: {NO_REQUIRED_STORAGE_TYPE=1}
2023-11-09 11:45:33,992 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Not enough replicas was chosen. Reason: {NO_REQUIRED_STORAGE_TYPE=1}
2023-11-09 11:45:33,992 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 2 to reach 3 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology
2023-11-09 11:45:33,992 WARN org.apache.hadoop.hdfs.protocol.BlockStoragePolicy: Failed to place enough replicas: expected size is 2 but only 0 storage types can be selected (replication=3, selected=[], unavailable=[DISK], removed=[DISK, DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
2023-11-09 11:45:33,992 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 2 to reach 3 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable:  unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
2023-11-09 11:45:33,993 INFO org.apache.hadoop.hdfs.StateChange: BLOCK* allocate blk_1073742571_1747, replicas=192.168.0.82:1026 for /hbase/.tmp/hbase.version
2023-11-09 11:45:34,007 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Not enough replicas was chosen. Reason: {NO_REQUIRED_STORAGE_TYPE=1}
2023-11-09 11:45:34,007 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Not enough replicas was chosen. Reason: {NO_REQUIRED_STORAGE_TYPE=1}
2023-11-09 11:45:34,007 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 3 to reach 3 (unavailableStorages=[], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) For more information, please enable DEBUG log level on org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy and org.apache.hadoop.net.NetworkTopology
2023-11-09 11:45:34,007 WARN org.apache.hadoop.hdfs.protocol.BlockStoragePolicy: Failed to place enough replicas: expected size is 3 but only 0 storage types can be selected (replication=3, selected=[], unavailable=[DISK], removed=[DISK, DISK, DISK], policy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]})
2023-11-09 11:45:34,007 WARN org.apache.hadoop.hdfs.server.blockmanagement.BlockPlacementPolicy: Failed to place enough replicas, still in need of 3 to reach 3 (unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}, newBlock=true) All required storage types are unavailable:  unavailableStorages=[DISK], storagePolicy=BlockStoragePolicy{HOT:7, storageTypes=[DISK], creationFallbacks=[], replicationFallbacks=[ARCHIVE]}
2023-11-09 11:45:34,008 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on default port 8020, call Call#12 Retry#0 org.apache.hadoop.hdfs.protocol.ClientProtocol.addBlock from 192.168.0.80:46654
java.io.IOException: File /hbase/.tmp/hbase.version could only be written to 0 of the 1 minReplication nodes. There are 3 datanode(s) running and 3 node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2315)
    at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2960)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:904)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:593)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:604)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:572)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:556)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1093)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1043)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:971)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2976)
2023-11-09 11:45:38,410 INFO SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for dn/ddh2@HADOOP.COM (auth:KERBEROS) from 192.168.0.81:36755
2023-11-09 11:45:38,425 INFO SecurityLogger.org.apache.hadoop.security.authorize.ServiceAuthorizationManager: Authorization successful for dn/ddh2@HADOOP.COM (auth:KERBEROS) for protocol=interface org.apache.hadoop.ha.HAServiceProtocol
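Note that NO_REQUIRED_STORAGE_TYPE above does not necessarily mean DISK storage is missing: by the final attempt the client had already excluded all three DataNodes, leaving the NameNode no eligible DISK storages to choose from. A quick sanity check with the standard HDFS CLI (run with a valid Kerberos ticket):

# Each of the three DataNodes should report healthy DISK capacity; if so,
# the exclusions stem from the failed block transfer, not from storage.
hdfs dfsadmin -report
# Optionally verify /hbase itself is otherwise intact:
hdfs fsck /hbase -files -blocks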

DataNode's errors:

2023-11-09 11:19:25,839 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: ddh1:1026:DataXceiver error processing unknown operation  src: /192.168.0.80:43990 dst: /192.168.0.80:1026
javax.security.sasl.SaslException: Invalid token in javax.security.sasl.qop: DI
    at com.sun.security.sasl.util.AbstractSaslImpl.parseProp(AbstractSaslImpl.java:242)
    at com.sun.security.sasl.util.AbstractSaslImpl.parseQop(AbstractSaslImpl.java:206)
    at com.sun.security.sasl.util.AbstractSaslImpl.parseQop(AbstractSaslImpl.java:197)
    at com.sun.security.sasl.util.AbstractSaslImpl.<init>(AbstractSaslImpl.java:73)
    at com.sun.security.sasl.digest.DigestMD5Base.<init>(DigestMD5Base.java:174)
    at com.sun.security.sasl.digest.DigestMD5Server.<init>(DigestMD5Server.java:145)
    at com.sun.security.sasl.digest.FactoryImpl.createSaslServer(FactoryImpl.java:109)
    at org.apache.hadoop.security.FastSaslServerFactory.createSaslServer(FastSaslServerFactory.java:64)
    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslParticipant.createServerSaslParticipant(SaslParticipant.java:84)
    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.doSaslHandshake(SaslDataTransferServer.java:387)
    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.getSaslStreams(SaslDataTransferServer.java:308)
    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.receive(SaslDataTransferServer.java:135)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235)
    at java.lang.Thread.run(Thread.java:750)
2023-11-09 11:19:35,942 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: ddh1:1026:DataXceiver error processing unknown operation  src: /192.168.0.80:44002 dst: /192.168.0.80:1026
javax.security.sasl.SaslException: Invalid token in javax.security.sasl.qop: DI
    at com.sun.security.sasl.util.AbstractSaslImpl.parseProp(AbstractSaslImpl.java:242)
    at com.sun.security.sasl.util.AbstractSaslImpl.parseQop(AbstractSaslImpl.java:206)
    at com.sun.security.sasl.util.AbstractSaslImpl.parseQop(AbstractSaslImpl.java:197)
    at com.sun.security.sasl.util.AbstractSaslImpl.<init>(AbstractSaslImpl.java:73)
    at com.sun.security.sasl.digest.DigestMD5Base.<init>(DigestMD5Base.java:174)
    at com.sun.security.sasl.digest.DigestMD5Server.<init>(DigestMD5Server.java:145)
    at com.sun.security.sasl.digest.FactoryImpl.createSaslServer(FactoryImpl.java:109)
    at org.apache.hadoop.security.FastSaslServerFactory.createSaslServer(FastSaslServerFactory.java:64)
    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslParticipant.createServerSaslParticipant(SaslParticipant.java:84)
    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.doSaslHandshake(SaslDataTransferServer.java:387)
    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.getSaslStreams(SaslDataTransferServer.java:308)
    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.receive(SaslDataTransferServer.java:135)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235)
    at java.lang.Thread.run(Thread.java:750)

What you expected to happen

HMaster should start normally; all of the errors are shown under "What happened" above.

How to reproduce

Enable Kerberos (enableKerberos) for both HBase and HDFS in the DataSophon UI. (Screenshots were attached to the original issue.)
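For context, the SASL errors above come from the HDFS data-transfer handshake that Kerberos mode turns on, so it is worth checking what enableKerberos actually rendered into the configs. A hedged sketch; the property names are standard Hadoop/HBase keys, but the Hadoop install path is a guess modeled on the /opt/datasophon/hbase-2.4.16 layout visible in the logs:

# Values for the protection keys should be one of authentication,
# integrity, or privacy, and must agree across DataNodes and clients;
# a mismatched or corrupted negotiation can surface exactly as
# "Invalid token in javax.security.sasl.qop".
grep -A1 dfs.data.transfer.protection  /opt/datasophon/hadoop-*/etc/hadoop/hdfs-site.xml
grep -A1 hadoop.rpc.protection         /opt/datasophon/hadoop-*/etc/hadoop/core-site.xml
grep -A1 hbase.security.authentication /opt/datasophon/hbase-2.4.16/conf/hbase-site.xml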

Anything else

No.

Version

main

Are you willing to submit PR?

Code of Conduct

datasophon commented 10 months ago

It seems you need to use a higher JDK version, such as JDK 11.

XimfengYao commented 10 months ago

DataSophon v1.2.0 runs on JDK 1.8u333, and I haven't changed that. Is it safe for me to change it?

datasophon commented 10 months ago

HBase requires a higher JDK version; you can configure HBase to use a higher JDK than the rest of the platform.
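A sketch of what that looks like in practice, assuming the OpenJDK 11 package path that later shows up in the reporter's logs; only HBase picks up this override, and the rest of the platform keeps its existing JDK:

# Append a JAVA_HOME override to HBase's env script, then restart the master.
echo 'export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-11.0.20.0.8-1.el7_9.x86_64' \
  >> /opt/datasophon/hbase-2.4.16/conf/hbase-env.sh
/opt/datasophon/hbase-2.4.16/bin/hbase-daemon.sh restart master
# HBase 2.4 still passes -XX:+UseConcMarkSweepGC by default; JDK 11 accepts
# it with a deprecation warning, so that flag alone will not block startup.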

XimfengYao commented 10 months ago

Sorry, it also fails under OpenJDK 11. Here is the error:

2023-11-09 14:50:50,542 INFO  [main] http.SecurityHeadersFilter: Added security headers filter
2023-11-09 14:50:50,563 INFO  [main] handler.ContextHandler: Started o.a.h.t.o.e.j.w.WebAppContext@231cdda8{master,/,file:///opt/datasophon/hbase-2.4.16/hbase-webapps/master/,AVAILABLE}{file:/opt/datasophon/hbase-2.4.16/hbase-webapps/master}
2023-11-09 14:50:50,590 INFO  [main] server.AbstractConnector: Started ServerConnector@195113de{HTTP/1.1, (http/1.1)}{0.0.0.0:16010}
2023-11-09 14:50:50,590 INFO  [main] server.Server: Started @4845ms
2023-11-09 14:50:50,594 INFO  [main] master.HMaster: hbase.rootdir=hdfs://nameservice1/hbase, hbase.cluster.distributed=true
2023-11-09 14:50:50,621 INFO  [master/ddh1:16000:becomeActiveMaster] master.HMaster: Adding backup master ZNode /hbase/backup-masters/ddh1,16000,1699512647665
2023-11-09 14:50:50,690 INFO  [master/ddh1:16000:becomeActiveMaster] master.ActiveMasterManager: Another master is the active master, ddh2,16000,1699512645601; waiting to become the next active master
2023-11-09 14:51:06,449 WARN  [prometheus-http-1-2] util.FSUtils: Cluster ID file does not exist at hdfs://nameservice1/hbase/hbase.id
2023-11-09 14:51:19,265 INFO  [master/ddh1:16000:becomeActiveMaster] master.ActiveMasterManager: Deleting ZNode for /hbase/backup-masters/ddh1,16000,1699512647665 from backup master directory
2023-11-09 14:51:19,270 INFO  [master/ddh1:16000:becomeActiveMaster] master.ActiveMasterManager: Registered as active master=ddh1,16000,1699512647665
2023-11-09 14:51:19,275 INFO  [master/ddh1:16000:becomeActiveMaster] regionserver.ChunkCreator: Allocating data MemStoreChunkPool with chunk size 2 MB, max count 1421, initial count 0
2023-11-09 14:51:19,277 INFO  [master/ddh1:16000:becomeActiveMaster] regionserver.ChunkCreator: Allocating index MemStoreChunkPool with chunk size 204.80 KB, max count 1579, initial count 0
2023-11-09 14:51:19,449 INFO  [Thread-20] hdfs.DataStreamer: Exception in createBlockOutputStream
java.io.IOException: Invalid token in javax.security.sasl.qop: DI
    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.readSaslMessage(DataTransferSaslUtil.java:220)
    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.doSaslHandshake(SaslDataTransferClient.java:553)
    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.getSaslStreams(SaslDataTransferClient.java:455)
    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.send(SaslDataTransferClient.java:298)
    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:245)
    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:203)
    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:193)
    at org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(DataStreamer.java:1705)
    at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1655)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:710)
2023-11-09 14:51:19,450 WARN  [Thread-20] hdfs.DataStreamer: Abandoning BP-676574098-192.168.0.81-1699408844223:blk_1073742933_2109
2023-11-09 14:51:19,458 WARN  [Thread-20] hdfs.DataStreamer: Excluding datanode DatanodeInfoWithStorage[192.168.0.80:1026,DS-4a4fd1ea-3c8b-4cf7-b5a4-b4ae232b65fe,DISK]
2023-11-09 14:51:19,476 INFO  [Thread-20] hdfs.DataStreamer: Exception in createBlockOutputStream
java.io.IOException: Invalid token in javax.security.sasl.qop: D
    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.readSaslMessage(DataTransferSaslUtil.java:220)
    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.doSaslHandshake(SaslDataTransferClient.java:553)
    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.getSaslStreams(SaslDataTransferClient.java:455)
    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.send(SaslDataTransferClient.java:298)
    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:245)
    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:203)
    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:193)
    at org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(DataStreamer.java:1705)
    at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1655)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:710)
2023-11-09 14:51:19,476 WARN  [Thread-20] hdfs.DataStreamer: Abandoning BP-676574098-192.168.0.81-1699408844223:blk_1073742934_2110
2023-11-09 14:51:19,480 WARN  [Thread-20] hdfs.DataStreamer: Excluding datanode DatanodeInfoWithStorage[192.168.0.81:1026,DS-d0a7f431-5561-4293-8fd3-db299c7387d9,DISK]
2023-11-09 14:51:19,492 INFO  [Thread-20] hdfs.DataStreamer: Exception in createBlockOutputStream
java.io.IOException: Invalid token in javax.security.sasl.qop: 
    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.readSaslMessage(DataTransferSaslUtil.java:220)
    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.doSaslHandshake(SaslDataTransferClient.java:553)
    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.getSaslStreams(SaslDataTransferClient.java:455)
    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.send(SaslDataTransferClient.java:298)
    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:245)
    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:203)
    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:193)
    at org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(DataStreamer.java:1705)
    at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1655)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:710)
2023-11-09 14:51:19,493 WARN  [Thread-20] hdfs.DataStreamer: Abandoning BP-676574098-192.168.0.81-1699408844223:blk_1073742935_2111
2023-11-09 14:51:19,496 WARN  [Thread-20] hdfs.DataStreamer: Excluding datanode DatanodeInfoWithStorage[192.168.0.82:1026,DS-894ec179-eaeb-4ded-8acc-c14289e5376c,DISK]
2023-11-09 14:51:19,510 WARN  [Thread-20] hdfs.DataStreamer: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/.tmp/hbase.version could only be written to 0 of the 1 minReplication nodes. There are 3 datanode(s) running and 3 node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2315)
    at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2960)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:904)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:593)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:604)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:572)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:556)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1093)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1043)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:971)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2976)

    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540)
    at org.apache.hadoop.ipc.Client.call(Client.java:1486)
    at org.apache.hadoop.ipc.Client.call(Client.java:1385)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
    at com.sun.proxy.$Proxy20.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:448)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
    at com.sun.proxy.$Proxy21.addBlock(Unknown Source)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
    at com.sun.proxy.$Proxy22.addBlock(Unknown Source)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
    at com.sun.proxy.$Proxy22.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1846)
    at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1645)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:710)
2023-11-09 14:51:20,844 INFO  [zk-event-processor-pool-0] zookeeper.ZKLeaderManager: Leader change, but no new leader found
2023-11-09 14:51:20,849 INFO  [zk-event-processor-pool-0] zookeeper.ZKLeaderManager: Found new leader for znode: /hbase/tokenauth/keymaster
2023-11-09 14:51:21,342 WARN  [prometheus-http-1-3] util.FSUtils: Cluster ID file does not exist at hdfs://nameservice1/hbase/hbase.id
2023-11-09 14:51:29,553 INFO  [Thread-22] hdfs.DataStreamer: Exception in createBlockOutputStream
java.io.IOException: Invalid token in javax.security.sasl.qop: DI
    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.readSaslMessage(DataTransferSaslUtil.java:220)
    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.doSaslHandshake(SaslDataTransferClient.java:553)
    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.getSaslStreams(SaslDataTransferClient.java:455)
    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.send(SaslDataTransferClient.java:298)
    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:245)
    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:203)
    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:193)
    at org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(DataStreamer.java:1705)
    at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1655)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:710)
2023-11-09 14:51:29,553 WARN  [Thread-22] hdfs.DataStreamer: Abandoning BP-676574098-192.168.0.81-1699408844223:blk_1073742936_2112
2023-11-09 14:51:29,557 WARN  [Thread-22] hdfs.DataStreamer: Excluding datanode DatanodeInfoWithStorage[192.168.0.80:1026,DS-4a4fd1ea-3c8b-4cf7-b5a4-b4ae232b65fe,DISK]
2023-11-09 14:51:29,568 INFO  [Thread-22] hdfs.DataStreamer: Exception in createBlockOutputStream
java.io.IOException: Invalid token in javax.security.sasl.qop: D
    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.readSaslMessage(DataTransferSaslUtil.java:220)
    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.doSaslHandshake(SaslDataTransferClient.java:553)
    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.getSaslStreams(SaslDataTransferClient.java:455)
    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.send(SaslDataTransferClient.java:298)
    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:245)
    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:203)
    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:193)
    at org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(DataStreamer.java:1705)
    at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1655)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:710)
2023-11-09 14:51:29,568 WARN  [Thread-22] hdfs.DataStreamer: Abandoning BP-676574098-192.168.0.81-1699408844223:blk_1073742937_2113
2023-11-09 14:51:29,573 WARN  [Thread-22] hdfs.DataStreamer: Excluding datanode DatanodeInfoWithStorage[192.168.0.81:1026,DS-d0a7f431-5561-4293-8fd3-db299c7387d9,DISK]
2023-11-09 14:51:29,582 INFO  [Thread-22] hdfs.DataStreamer: Exception in createBlockOutputStream
java.io.IOException: Invalid token in javax.security.sasl.qop: 
    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.readSaslMessage(DataTransferSaslUtil.java:220)
    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.doSaslHandshake(SaslDataTransferClient.java:553)
    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.getSaslStreams(SaslDataTransferClient.java:455)
    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.send(SaslDataTransferClient.java:298)
    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.checkTrustAndSend(SaslDataTransferClient.java:245)
    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:203)
    at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferClient.socketSend(SaslDataTransferClient.java:193)
    at org.apache.hadoop.hdfs.DataStreamer.createBlockOutputStream(DataStreamer.java:1705)
    at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1655)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:710)
2023-11-09 14:51:29,582 WARN  [Thread-22] hdfs.DataStreamer: Abandoning BP-676574098-192.168.0.81-1699408844223:blk_1073742938_2114
2023-11-09 14:51:29,586 WARN  [Thread-22] hdfs.DataStreamer: Excluding datanode DatanodeInfoWithStorage[192.168.0.82:1026,DS-894ec179-eaeb-4ded-8acc-c14289e5376c,DISK]
2023-11-09 14:51:29,589 WARN  [Thread-22] hdfs.DataStreamer: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /hbase/.tmp/hbase.version could only be written to 0 of the 1 minReplication nodes. There are 3 datanode(s) running and 3 node(s) are excluded in this operation.
    at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:2315)
    at org.apache.hadoop.hdfs.server.namenode.FSDirWriteFileOp.chooseTargetForNewBlock(FSDirWriteFileOp.java:294)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:2960)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:904)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:593)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:604)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:572)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:556)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1093)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1043)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:971)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1878)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2976)

    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1540)
    at org.apache.hadoop.ipc.Client.call(Client.java:1486)
    at org.apache.hadoop.ipc.Client.call(Client.java:1385)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:118)
    at com.sun.proxy.$Proxy20.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:448)
    at jdk.internal.reflect.GeneratedMethodAccessor45.invoke(Unknown Source)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
    at com.sun.proxy.$Proxy21.addBlock(Unknown Source)
    at jdk.internal.reflect.GeneratedMethodAccessor45.invoke(Unknown Source)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
    at com.sun.proxy.$Proxy22.addBlock(Unknown Source)
    at jdk.internal.reflect.GeneratedMethodAccessor45.invoke(Unknown Source)
    at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.base/java.lang.reflect.Method.invoke(Method.java:566)
    at org.apache.hadoop.hbase.fs.HFileSystem$1.invoke(HFileSystem.java:361)
    at com.sun.proxy.$Proxy22.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DataStreamer.locateFollowingBlock(DataStreamer.java:1846)
    at org.apache.hadoop.hdfs.DataStreamer.nextBlockOutputStream(DataStreamer.java:1645)
    at org.apache.hadoop.hdfs.DataStreamer.run(DataStreamer.java:710)
2023-11-09 14:51:36,329 WARN  [prometheus-http-1-4] util.FSUtils: Cluster ID file does not exist at hdfs://nameservice1/hbase/hbase.id
XimfengYao commented 10 months ago

Startup log from the master on ddh3, also running under OpenJDK 11:
2023-11-09 14:53:42,787 INFO  [main] master.HMaster: STARTING service HMaster
2023-11-09 14:53:42,788 INFO  [main] util.VersionInfo: HBase 2.4.16
2023-11-09 14:53:42,789 INFO  [main] util.VersionInfo: Source code repository git://17342ca4031d/home/zhangduo/hbase-rm/output/hbase revision=d1714710877653691e2125bd94b68a5b484a3a06
2023-11-09 14:53:42,789 INFO  [main] util.VersionInfo: Compiled by zhangduo on Wed Feb  1 09:46:35 UTC 2023
2023-11-09 14:53:42,789 INFO  [main] util.VersionInfo: From source with checksum 1ca7bcc2d1de1933beaeb5a1c380582712f11ed1bb1863308703335f7e230127010b1836d4b73df8f5a3baf6bbe4b33dbf7fcec2b28512d7acf5055d00d0c06b
2023-11-09 14:53:42,930 INFO  [main] util.ServerCommandLine: hbase.tmp.dir: /tmp/hbase-hbase
2023-11-09 14:53:42,930 INFO  [main] util.ServerCommandLine: hbase.rootdir: /hbase
2023-11-09 14:53:42,930 INFO  [main] util.ServerCommandLine: hbase.cluster.distributed: true
2023-11-09 14:53:42,930 INFO  [main] util.ServerCommandLine: hbase.zookeeper.quorum: ddh1:2181,ddh2:2181,ddh3:2181
2023-11-09 14:53:42,935 INFO  [main] util.ServerCommandLine: env:HBASE_LOGFILE=hbase-hbase-master-ddh3.log
2023-11-09 14:53:42,935 INFO  [main] util.ServerCommandLine: env:PATH=/sbin:/bin:/usr/sbin:/usr/bin
2023-11-09 14:53:42,935 INFO  [main] util.ServerCommandLine: env:HBASE_PID_DIR=/opt/datasophon/hbase-2.4.16/pid
2023-11-09 14:53:42,936 INFO  [main] util.ServerCommandLine: env:HISTSIZE=1000
2023-11-09 14:53:42,936 INFO  [main] util.ServerCommandLine: env:JAVA_HOME=/usr/lib/jvm/java-11-openjdk-11.0.20.0.8-1.el7_9.x86_64
2023-11-09 14:53:42,936 INFO  [main] util.ServerCommandLine: env:TERM=unknown
2023-11-09 14:53:42,936 INFO  [main] util.ServerCommandLine: env:LANG=zh_CN.UTF-8
2023-11-09 14:53:42,936 INFO  [main] util.ServerCommandLine: env:XDG_SESSION_ID=c32
2023-11-09 14:53:42,936 INFO  [main] util.ServerCommandLine: env:SUDO_USER=root
2023-11-09 14:53:42,936 INFO  [main] util.ServerCommandLine: env:SUDO_GID=0
2023-11-09 14:53:42,936 INFO  [main] util.ServerCommandLine: env:MAIL=/var/spool/mail/root
2023-11-09 14:53:42,936 INFO  [main] util.ServerCommandLine: env:USERNAME=hbase
2023-11-09 14:53:42,936 INFO  [main] util.ServerCommandLine: env:LOGNAME=hbase
2023-11-09 14:53:42,936 INFO  [main] util.ServerCommandLine: env:JVM_PID=19777
2023-11-09 14:53:42,936 INFO  [main] util.ServerCommandLine: env:HBASE_REST_OPTS=
2023-11-09 14:53:42,936 INFO  [main] util.ServerCommandLine: env:PWD=/opt/datasophon/hbase-2.4.16
2023-11-09 14:53:42,936 INFO  [main] util.ServerCommandLine: env:SUDO_UID=0
2023-11-09 14:53:42,936 INFO  [main] util.ServerCommandLine: env:HBASE_ROOT_LOGGER=INFO,RFA
2023-11-09 14:53:42,936 INFO  [main] util.ServerCommandLine: env:SHELL=/bin/bash
2023-11-09 14:53:42,936 INFO  [main] util.ServerCommandLine: env:HBASE_ENV_INIT=true
2023-11-09 14:53:42,937 INFO  [main] util.ServerCommandLine: env:HBASE_IDENT_STRING=hbase
2023-11-09 14:53:42,937 INFO  [main] util.ServerCommandLine: env:HBASE_ZNODE_FILE=/opt/datasophon/hbase-2.4.16/pid/hbase-hbase-master.znode
2023-11-09 14:53:42,937 INFO  [main] util.ServerCommandLine: env:HBASE_LOG_PREFIX=hbase-hbase-master-ddh3
2023-11-09 14:53:42,937 INFO  [main] util.ServerCommandLine: env:HBASE_LOG_DIR=/opt/datasophon/hbase-2.4.16/bin/../logs
2023-11-09 14:53:42,937 INFO  [main] util.ServerCommandLine: env:USER=hbase
2023-11-09 14:53:42,937 INFO  [main] util.ServerCommandLine: 1/pfl-asm-4.0.1.jar:/opt/datasophon/hbase-2.4.16/bin/../lib/jdk11/pfl-basic-4.0.1.jar:/opt/datasophon/hbase-2.4.16/bin/../lib/jdk11/pfl-basic-tools-4.0.1.jar:/opt/datasophon/hbase-2.4.16/bin/../lib/jdk11/pfl-dynamic-4.0.1.jar:/opt/datasophon/hbase-2.4.16/bin/../lib/jdk11/pfl-tf-4.0.1.jar:/opt/datasophon/hbase-2.4.16/bin/../lib/jdk11/pfl-tf-tools-4.0.1.jar:/opt/datasophon/hbase-2.4.16/bin/../lib/jdk11/policy-2.7.6.jar:/opt/datasophon/hbase-2.4.16/bin/../lib/jdk11/release-documentation-2.3.2-docbook.zip:/opt/datasophon/hbase-2.4.16/bin/../lib/jdk11/saaj-impl-1.5.1.jar:/opt/datasophon/hbase-2.4.16/bin/../lib/jdk11/samples-2.3.2.zip:/opt/datasophon/hbase-2.4.16/bin/../lib/jdk11/sdo-eclipselink-plugin-2.3.2.jar:/opt/datasophon/hbase-2.4.16/bin/../lib/jdk11/stax-ex-1.8.1.jar:/opt/datasophon/hbase-2.4.16/bin/../lib/jdk11/streambuffer-1.5.7.jar:/opt/datasophon/hbase-2.4.16/bin/../lib/jdk11/txw2-2.3.2.jar:/opt/datasophon/hbase-2.4.16/bin/../lib/client-facing-thirdparty/slf4j-reload4j-1.7.33.jar
2023-11-09 14:53:42,937 INFO  [main] util.ServerCommandLine: env:SUDO_COMMAND=/bin/bash bin/hbase-daemon.sh start master
2023-11-09 14:53:42,937 INFO  [main] util.ServerCommandLine: env:HBASE_AUTOSTART_FILE=/opt/datasophon/hbase-2.4.16/pid/hbase-hbase-master.autostart
2023-11-09 14:53:42,937 INFO  [main] util.ServerCommandLine: env:SED=sed
2023-11-09 14:53:42,937 INFO  [main] util.ServerCommandLine: env:HOSTNAME=ddh3
2023-11-09 14:53:42,937 INFO  [main] util.ServerCommandLine: env:GREP=grep
2023-11-09 14:53:42,937 INFO  [main] util.ServerCommandLine: env:HBASE_NICENESS=0
2023-11-09 14:53:42,938 INFO  [main] util.ServerCommandLine: env:HBASE_OPTS= -XX:+UseConcMarkSweepGC -Djava.security.auth.login.config=/opt/datasophon/hbase-2.4.16/conf/zk-jaas.conf -Djava.util.logging.config.class=org.apache.hadoop.hbase.logging.JulToSlf4jInitializer  -javaagent:/opt/datasophon/hbase-2.4.16/bin/../jmx/jmx_prometheus_javaagent-0.16.1.jar=16100:/opt/datasophon/hbase-2.4.16/bin/../jmx/hbase_jmx_config.yaml  -Dhbase.log.dir=/opt/datasophon/hbase-2.4.16/bin/../logs -Dhbase.log.file=hbase-hbase-master-ddh3.log -Dhbase.home.dir=/opt/datasophon/hbase-2.4.16/bin/.. -Dhbase.id.str=hbase -Dhbase.root.logger=INFO,RFA -Dhbase.security.logger=INFO,RFAS
2023-11-09 14:53:42,938 INFO  [main] util.ServerCommandLine: env:HBASE_SECURITY_LOGGER=INFO,RFAS
2023-11-09 14:53:42,938 INFO  [main] util.ServerCommandLine: env:XDG_RUNTIME_DIR=/run/user/1005
2023-11-09 14:53:42,938 INFO  [main] util.ServerCommandLine: env:HBASE_THRIFT_OPTS=
2023-11-09 14:53:42,938 INFO  [main] util.ServerCommandLine: env:HBASE_HOME=/opt/datasophon/hbase-2.4.16/bin/..
2023-11-09 14:53:42,938 INFO  [main] util.ServerCommandLine: env:HOME=/home/hbase
2023-11-09 14:53:42,938 INFO  [main] util.ServerCommandLine: env:SHLVL=2
2023-11-09 14:53:42,938 INFO  [main] util.ServerCommandLine: env:MALLOC_ARENA_MAX=4
2023-11-09 14:53:42,938 INFO  [main] util.ServerCommandLine: vmName=OpenJDK 64-Bit Server VM, vmVendor=Red Hat, Inc., vmVersion=11.0.20+8-LTS
2023-11-09 14:53:42,938 INFO  [main] util.ServerCommandLine: vmInputArguments=[-Dproc_master, -XX:OnOutOfMemoryError=kill -9 %p, -XX:+UseConcMarkSweepGC, -Djava.security.auth.login.config=/opt/datasophon/hbase-2.4.16/conf/zk-jaas.conf, -Djava.util.logging.config.class=org.apache.hadoop.hbase.logging.JulToSlf4jInitializer, -javaagent:/opt/datasophon/hbase-2.4.16/bin/../jmx/jmx_prometheus_javaagent-0.16.1.jar=16100:/opt/datasophon/hbase-2.4.16/bin/../jmx/hbase_jmx_config.yaml, -Dhbase.log.dir=/opt/datasophon/hbase-2.4.16/bin/../logs, -Dhbase.log.file=hbase-hbase-master-ddh3.log, -Dhbase.home.dir=/opt/datasophon/hbase-2.4.16/bin/.., -Dhbase.id.str=hbase, -Dhbase.root.logger=INFO,RFA, -Dhbase.security.logger=INFO,RFAS]
2023-11-09 14:53:43,327 INFO  [main] metrics.MetricRegistries: Loaded MetricRegistries class org.apache.hadoop.hbase.metrics.impl.MetricRegistriesImpl
2023-11-09 14:53:43,367 WARN  [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2023-11-09 14:53:43,788 INFO  [main] regionserver.RSRpcServices: master/ddh3:16000 server-side Connection retries=45
2023-11-09 14:53:43,814 INFO  [main] ipc.RpcExecutor: Instantiated default.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=3, maxQueueLength=300, handlerCount=30
2023-11-09 14:53:43,816 INFO  [main] ipc.RpcExecutor: Instantiated priority.RWQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=2, maxQueueLength=300, handlerCount=20
2023-11-09 14:53:43,816 INFO  [main] ipc.RWQueueRpcExecutor: priority.RWQ.Fifo writeQueues=1 writeHandlers=2 readQueues=1 readHandlers=18 scanQueues=0 scanHandlers=0
2023-11-09 14:53:43,816 INFO  [main] ipc.RpcExecutor: Instantiated replication.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=300, handlerCount=3
2023-11-09 14:53:43,816 INFO  [main] ipc.RpcExecutor: Instantiated metaPriority.FPBQ.Fifo with queueClass=class java.util.concurrent.LinkedBlockingQueue; numCallQueues=1, maxQueueLength=300, handlerCount=1
datasophon commented 10 months ago

Perhaps you can replace HBase 2.4.16 with version 2.0.2.