Closed: ilevd closed this issue 1 year ago
Please add -e MYSQL_PWD=<yourpassword>
to the docker run command after the password has been changed.
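As a sketch of how that flag would be passed, assuming the quick-start all-in-one image and the default FE ports (the image name and port mappings here are assumptions, adjust them to your deployment):

```shell
# Pass the changed root password to the container via MYSQL_PWD.
# Image name and ports are placeholders for your actual setup.
docker run -d --name starrocks \
  -p 9030:9030 -p 8030:8030 \
  -e MYSQL_PWD=<yourpassword> \
  starrocks/allin1-ubuntu
```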
@kevincai it's the first run, and as far as I understand the password should be empty by default.
OK. Can you please copy the FE logs from the container, at /data/deploy/starrocks/fe/log/fe.log
and /data/deploy/starrocks/fe/log/fe.out?
We need to check why the FE did not start correctly.
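A minimal way to pull both files out of the container, assuming the container is named starrocks (a placeholder, check docker ps -a for the real name or ID):

```shell
# Copy both FE log files from the container to the current directory.
# "starrocks" is a placeholder for your actual container name or ID.
docker cp starrocks:/data/deploy/starrocks/fe/log/fe.log ./fe.log
docker cp starrocks:/data/deploy/starrocks/fe/log/fe.out ./fe.out
```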
Here they are:
fe.log
2023-05-19 07:53:06,953 INFO (main|1) [StarRocksFE.start():124] StarRocks FE starting, version: 3.0.0-48f4d81d7b
2023-05-19 07:53:06,983 INFO (main|1) [FrontendOptions.initAddrUseFqdn():212] Use FQDN init local addr, FQDN: 06f20ff30289, IP: 172.17.0.2
2023-05-19 07:53:07,666 INFO (main|1) [Auth.grantRoleInternal():831] grant operator to 'root'@'%', isReplay = true
2023-05-19 07:53:07,797 INFO (main|1) [AuthorizationManager.initBuiltinRoleUnlocked():286] create built-in role root[-1]
2023-05-19 07:53:07,809 INFO (main|1) [AuthorizationManager.initBuiltinRoleUnlocked():286] create built-in role db_admin[-2]
2023-05-19 07:53:07,810 INFO (main|1) [AuthorizationManager.initBuiltinRoleUnlocked():286] create built-in role cluster_admin[-3]
2023-05-19 07:53:07,811 INFO (main|1) [AuthorizationManager.initBuiltinRoleUnlocked():286] create built-in role user_admin[-4]
2023-05-19 07:53:07,811 INFO (main|1) [AuthorizationManager.initBuiltinRoleUnlocked():286] create built-in role public[-5]
2023-05-19 07:53:07,811 INFO (main|1) [GlobalStateMgr.initAuth():1015] using new privilege framework..
2023-05-19 07:53:08,753 INFO (main|1) [NodeMgr.getHelperNodes():645] get helper nodes: [06f20ff30289:9010]
2023-05-19 07:53:09,044 INFO (main|1) [NodeMgr.getClusterIdAndRoleOnStartup():241] very first time to start this node. role: FOLLOWER, node name: 06f20ff30289_9010_1684482788818
2023-05-19 07:53:09,096 INFO (main|1) [NodeMgr.getClusterIdAndRoleOnStartup():438] Current run_mode is shared_nothing
2023-05-19 07:53:09,096 INFO (main|1) [NodeMgr.getClusterIdAndRoleOnStartup():445] Got cluster id: 1362302411, role: FOLLOWER, node name: 06f20ff30289_9010_1684482788818 and run_mode: shared_nothing
2023-05-19 07:53:09,099 INFO (main|1) [BDBEnvironment.ensureHelperInLocal():340] skip check local environment because helper node and local node are identical.
2023-05-19 07:53:09,188 INFO (main|1) [BDBEnvironment.setupEnvironment():269] start to setup bdb environment for 1 times
2023-05-19 07:53:12,276 WARN (MASTER 06f20ff30289_9010_1684482788818(1)|1) [BDBEnvironment.setupEnvironment():300] failed to setup environment because of UnknowMasterException for the first time, ignore it.
2023-05-19 07:53:17,277 INFO (MASTER 06f20ff30289_9010_1684482788818(1)|1) [BDBEnvironment.setupEnvironment():269] start to setup bdb environment for 2 times
2023-05-19 07:53:18,500 WARN (UNKNOWN 06f20ff30289_9010_1684482788818(-1)|1) [StateChangeExecutor.notifyNewFETypeTransfer():62] notify new FE type transfer: UNKNOWN
2023-05-19 07:53:18,624 INFO (UNKNOWN 06f20ff30289_9010_1684482788818(-1)|1) [BDBEnvironment.setupEnvironment():280] replicated environment is all set, wait for state change...
2023-05-19 07:53:28,626 INFO (UNKNOWN 06f20ff30289_9010_1684482788818(-1)|1) [BDBEnvironment.setupEnvironment():288] state change done, current role UNKNOWN
2023-05-19 07:53:28,729 ERROR (UNKNOWN 06f20ff30289_9010_1684482788818(-1)|1) [BDBEnvironment.setupEnvironment():305] failed to setup environment after retried 2 times
com.sleepycat.je.rep.ReplicaWriteException: (JE 18.3.13) Problem closing transaction 19. The current state is:UNKNOWN. The node transitioned to this state at:Fri May 19 07:53:18 UTC 2023
at com.sleepycat.je.rep.txn.ReadonlyTxn.disallowReplicaWrite(ReadonlyTxn.java:114) ~[starrocks-bdb-je-18.3.13.jar:?]
at com.sleepycat.je.dbi.DbTree.checkReplicaWrite(DbTree.java:880) ~[starrocks-bdb-je-18.3.13.jar:?]
at com.sleepycat.je.dbi.DbTree.doCreateDb(DbTree.java:579) ~[starrocks-bdb-je-18.3.13.jar:?]
at com.sleepycat.je.dbi.DbTree.createDb(DbTree.java:486) ~[starrocks-bdb-je-18.3.13.jar:?]
at com.sleepycat.je.Database.initNew(Database.java:174) ~[starrocks-bdb-je-18.3.13.jar:?]
at com.sleepycat.je.Environment.setupDatabase(Environment.java:864) ~[starrocks-bdb-je-18.3.13.jar:?]
at com.sleepycat.je.Environment.openDatabase(Environment.java:668) ~[starrocks-bdb-je-18.3.13.jar:?]
at com.starrocks.journal.bdbje.BDBEnvironment.setupEnvironment(BDBEnvironment.java:291) [starrocks-fe.jar:?]
at com.starrocks.journal.bdbje.BDBEnvironment.setup(BDBEnvironment.java:175) [starrocks-fe.jar:?]
at com.starrocks.journal.bdbje.BDBEnvironment.initBDBEnvironment(BDBEnvironment.java:153) [starrocks-fe.jar:?]
at com.starrocks.journal.JournalFactory.create(JournalFactory.java:31) [starrocks-fe.jar:?]
at com.starrocks.server.GlobalStateMgr.initJournal(GlobalStateMgr.java:1039) [starrocks-fe.jar:?]
at com.starrocks.server.GlobalStateMgr.initialize(GlobalStateMgr.java:988) [starrocks-fe.jar:?]
at com.starrocks.StarRocksFE.start(StarRocksFE.java:130) [starrocks-fe.jar:?]
at com.starrocks.StarRocksFE.main(StarRocksFE.java:82) [starrocks-fe.jar:?]
2023-05-19 07:53:33,914 INFO (UNKNOWN 06f20ff30289_9010_1684482788818(-1)|1) [BDBEnvironment.setupEnvironment():269] start to setup bdb environment for 3 times
2023-05-19 07:53:33,917 WARN (UNKNOWN 06f20ff30289_9010_1684482788818(-1)|1) [StateChangeExecutor.notifyNewFETypeTransfer():62] notify new FE type transfer: UNKNOWN
2023-05-19 07:53:33,918 INFO (UNKNOWN 06f20ff30289_9010_1684482788818(-1)|1) [BDBEnvironment.setupEnvironment():280] replicated environment is all set, wait for state change...
2023-05-19 07:53:43,919 INFO (UNKNOWN 06f20ff30289_9010_1684482788818(-1)|1) [BDBEnvironment.setupEnvironment():288] state change done, current role UNKNOWN
2023-05-19 07:53:43,920 ERROR (UNKNOWN 06f20ff30289_9010_1684482788818(-1)|1) [BDBEnvironment.setupEnvironment():305] failed to setup environment after retried 3 times
com.sleepycat.je.rep.ReplicaWriteException: (JE 18.3.13) Problem closing transaction 20. The current state is:UNKNOWN. The node transitioned to this state at:Fri May 19 07:53:18 UTC 2023
at com.sleepycat.je.rep.txn.ReadonlyTxn.disallowReplicaWrite(ReadonlyTxn.java:114) ~[starrocks-bdb-je-18.3.13.jar:?]
at com.sleepycat.je.dbi.DbTree.checkReplicaWrite(DbTree.java:880) ~[starrocks-bdb-je-18.3.13.jar:?]
at com.sleepycat.je.dbi.DbTree.doCreateDb(DbTree.java:579) ~[starrocks-bdb-je-18.3.13.jar:?]
at com.sleepycat.je.dbi.DbTree.createDb(DbTree.java:486) ~[starrocks-bdb-je-18.3.13.jar:?]
at com.sleepycat.je.Database.initNew(Database.java:174) ~[starrocks-bdb-je-18.3.13.jar:?]
at com.sleepycat.je.Environment.setupDatabase(Environment.java:864) ~[starrocks-bdb-je-18.3.13.jar:?]
at com.sleepycat.je.Environment.openDatabase(Environment.java:668) ~[starrocks-bdb-je-18.3.13.jar:?]
at com.starrocks.journal.bdbje.BDBEnvironment.setupEnvironment(BDBEnvironment.java:291) [starrocks-fe.jar:?]
at com.starrocks.journal.bdbje.BDBEnvironment.setup(BDBEnvironment.java:175) [starrocks-fe.jar:?]
at com.starrocks.journal.bdbje.BDBEnvironment.initBDBEnvironment(BDBEnvironment.java:153) [starrocks-fe.jar:?]
at com.starrocks.journal.JournalFactory.create(JournalFactory.java:31) [starrocks-fe.jar:?]
at com.starrocks.server.GlobalStateMgr.initJournal(GlobalStateMgr.java:1039) [starrocks-fe.jar:?]
at com.starrocks.server.GlobalStateMgr.initialize(GlobalStateMgr.java:988) [starrocks-fe.jar:?]
at com.starrocks.StarRocksFE.start(StarRocksFE.java:130) [starrocks-fe.jar:?]
at com.starrocks.StarRocksFE.main(StarRocksFE.java:82) [starrocks-fe.jar:?]
2023-05-19 07:53:43,921 ERROR (UNKNOWN 06f20ff30289_9010_1684482788818(-1)|1) [StarRocksFE.start():184] StarRocksFE start failed
com.starrocks.journal.JournalException: failed to setup environment after retried 3 times
at com.starrocks.journal.bdbje.BDBEnvironment.setupEnvironment(BDBEnvironment.java:306) ~[starrocks-fe.jar:?]
at com.starrocks.journal.bdbje.BDBEnvironment.setup(BDBEnvironment.java:175) ~[starrocks-fe.jar:?]
at com.starrocks.journal.bdbje.BDBEnvironment.initBDBEnvironment(BDBEnvironment.java:153) ~[starrocks-fe.jar:?]
at com.starrocks.journal.JournalFactory.create(JournalFactory.java:31) ~[starrocks-fe.jar:?]
at com.starrocks.server.GlobalStateMgr.initJournal(GlobalStateMgr.java:1039) ~[starrocks-fe.jar:?]
at com.starrocks.server.GlobalStateMgr.initialize(GlobalStateMgr.java:988) ~[starrocks-fe.jar:?]
at com.starrocks.StarRocksFE.start(StarRocksFE.java:130) [starrocks-fe.jar:?]
at com.starrocks.StarRocksFE.main(StarRocksFE.java:82) [starrocks-fe.jar:?]
Caused by: com.sleepycat.je.rep.ReplicaWriteException: (JE 18.3.13) Problem closing transaction 20. The current state is:UNKNOWN. The node transitioned to this state at:Fri May 19 07:53:18 UTC 2023
at com.sleepycat.je.rep.txn.ReadonlyTxn.disallowReplicaWrite(ReadonlyTxn.java:114) ~[starrocks-bdb-je-18.3.13.jar:?]
at com.sleepycat.je.dbi.DbTree.checkReplicaWrite(DbTree.java:880) ~[starrocks-bdb-je-18.3.13.jar:?]
at com.sleepycat.je.dbi.DbTree.doCreateDb(DbTree.java:579) ~[starrocks-bdb-je-18.3.13.jar:?]
at com.sleepycat.je.dbi.DbTree.createDb(DbTree.java:486) ~[starrocks-bdb-je-18.3.13.jar:?]
at com.sleepycat.je.Database.initNew(Database.java:174) ~[starrocks-bdb-je-18.3.13.jar:?]
at com.sleepycat.je.Environment.setupDatabase(Environment.java:864) ~[starrocks-bdb-je-18.3.13.jar:?]
at com.sleepycat.je.Environment.openDatabase(Environment.java:668) ~[starrocks-bdb-je-18.3.13.jar:?]
at com.starrocks.journal.bdbje.BDBEnvironment.setupEnvironment(BDBEnvironment.java:291) ~[starrocks-fe.jar:?]
... 7 more
fe.out
using java version 11
-Dlog4j2.formatMsgNoLookups=true -Xmx8192m -XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=7 -XX:+CMSClassUnloadingEnabled -XX:-CMSParallelRemarkEnabled -XX:CMSInitiatingOccupancyFraction=80 -XX:SoftRefLRUPolicyMSPerMB=0 -Xlog:gc*:/data/deploy/starrocks/fe/log/fe.gc.log.20230519-075300:time
Fri May 19 07:53:01 UTC 2023
=====
<?xml version="1.0" encoding="utf-8"?>
<Configuration status="info" packages="com.starrocks.common">
<Appenders>
<Console name="ConsoleErr" target="SYSTEM_ERR" follow="true">
<PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss,SSS} %p (%t|%tid) [%C{1}.%M():%L] %m%n"/>
</Console>
<RollingFile name="Sys" fileName="/data/deploy/starrocks/fe/log/fe.log" filePattern="/data/deploy/starrocks/fe/log/fe.log.%d{yyyyMMdd}-%i">
<PatternLayout charset="UTF-8">
<Pattern>%d{yyyy-MM-dd HH:mm:ss,SSS} %p (%t|%tid) [%C{1}.%M():%L] %m%n</Pattern>
</PatternLayout>
<Policies>
<TimeBasedTriggeringPolicy/>
<SizeBasedTriggeringPolicy size="1024MB"/>
</Policies>
<DefaultRolloverStrategy max="10" fileIndex="min">
<Delete basePath="/data/deploy/starrocks/fe/log/" maxDepth="1" followLinks="true">
<IfFileName glob="fe.log.*" />
<IfLastModified age="7d" />
</Delete>
</DefaultRolloverStrategy>
</RollingFile>
<RollingFile name="SysWF" fileName="/data/deploy/starrocks/fe/log/fe.warn.log" filePattern="/data/deploy/starrocks/fe/log/fe.warn.log.%d{yyyyMMdd}-%i">
<PatternLayout charset="UTF-8">
<Pattern>%d{yyyy-MM-dd HH:mm:ss,SSS} %p (%t|%tid) [%C{1}.%M():%L] %m%n</Pattern>
</PatternLayout>
<Policies>
<TimeBasedTriggeringPolicy/>
<SizeBasedTriggeringPolicy size="1024MB"/>
</Policies>
<DefaultRolloverStrategy max="10" fileIndex="min">
<Delete basePath="/data/deploy/starrocks/fe/log/" maxDepth="1" followLinks="true">
<IfFileName glob="fe.warn.log.*" />
<IfLastModified age="7d" />
</Delete>
</DefaultRolloverStrategy>
</RollingFile>
<RollingFile name="Auditfile" fileName="/data/deploy/starrocks/fe/log/fe.audit.log" filePattern="/data/deploy/starrocks/fe/log/fe.audit.log.%d{yyyyMMdd}-%i">
<PatternLayout charset="UTF-8">
<Pattern>%d{yyyy-MM-dd HH:mm:ss,SSS} [%c{1}] %m%n</Pattern>
</PatternLayout>
<Policies>
<TimeBasedTriggeringPolicy/>
<SizeBasedTriggeringPolicy size="1024MB"/>
</Policies>
<DefaultRolloverStrategy max="10" fileIndex="min">
<Delete basePath="/data/deploy/starrocks/fe/log/" maxDepth="1" followLinks="true">
<IfFileName glob="fe.audit.log.*" />
<IfLastModified age="30d" />
</Delete>
</DefaultRolloverStrategy>
</RollingFile>
<RollingFile name="dumpFile" fileName="/data/deploy/starrocks/fe/log/fe.dump.log" filePattern="/data/deploy/starrocks/fe/log/fe.dump.log.%d{yyyyMMdd}-%i">
<PatternLayout charset="UTF-8">
<Pattern>%d{yyyy-MM-dd HH:mm:ss,SSS} [%c{1}] %m%n</Pattern>
</PatternLayout>
<Policies>
<TimeBasedTriggeringPolicy/>
<SizeBasedTriggeringPolicy size="1024MB"/>
</Policies>
<DefaultRolloverStrategy max="10" fileIndex="min">
<Delete basePath="/data/deploy/starrocks/fe/log/" maxDepth="1" followLinks="true">
<IfFileName glob="fe.dump.log.*" />
<IfLastModified age="7d" />
</Delete>
</DefaultRolloverStrategy>
</RollingFile>
<RollingFile name="BigQueryFile" fileName="/data/deploy/starrocks/fe/log/fe.big_query.log" filePattern="/data/deploy/starrocks/fe/log/fe.big_query.log.%d{yyyyMMdd}-%i">
<PatternLayout charset="UTF-8">
<Pattern>%d{yyyy-MM-dd HH:mm:ss,SSS} [%c{1}] %m%n</Pattern>
</PatternLayout>
<Policies>
<TimeBasedTriggeringPolicy/>
<SizeBasedTriggeringPolicy size="1024MB"/>
</Policies>
<DefaultRolloverStrategy max="10" fileIndex="min">
<Delete basePath="/data/deploy/starrocks/fe/log/" maxDepth="1" followLinks="true">
<IfFileName glob="fe.big_query.log.*" />
<IfLastModified age="7d" />
</Delete>
</DefaultRolloverStrategy>
</RollingFile>
</Appenders>
<Loggers>
<Root level="INFO">
<AppenderRef ref="Sys"/>
<AppenderRef ref="SysWF" level="WARN"/>
</Root>
<Logger name="audit" level="ERROR" additivity="false">
<AppenderRef ref="Auditfile"/>
</Logger>
<Logger name="dump" level="ERROR" additivity="false">
<AppenderRef ref="dumpFile"/>
</Logger>
<Logger name="big_query" level="ERROR" additivity="false">
<AppenderRef ref="BigQueryFile"/>
</Logger>
<Logger name="org.apache.thrift" level="DEBUG">
<AppenderRef ref="Sys"/>
</Logger>
<Logger name="org.apache.thrift.transport" level="DEBUG">
<AppenderRef ref="Sys"/>
</Logger>
<Logger name="com.starrocks.thrift" level="DEBUG">
<AppenderRef ref="Sys"/>
</Logger>
<Logger name="org.apache.kafka" level="WARN">
<AppenderRef ref="SysWF"/>
</Logger>
<Logger name='audit.slow_query' level='INFO'/><Logger name='audit.query' level='INFO'/><Logger name='dump.query' level='INFO'/><Logger name='big_query.query' level='INFO'/>
</Loggers>
</Configuration>
=====
WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance.
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/data/deploy/starrocks/fe/lib/spark-dpp-1.0.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/data/deploy/starrocks/fe/lib/log4j-slf4j-impl-2.17.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
[2023-05-19 07:53:18] notify new FE type transfer: UNKNOWN
[2023-05-19 07:53:33] notify new FE type transfer: UNKNOWN
Can you show us the meta/bdb/je.info.0
from that time? Note that the time zone in je.info.0
is UTC.
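One way to read that file in place, again assuming a container named starrocks (hypothetical) and the default deploy path from the logs above:

```shell
# Print the BDB JE info log from inside the running (or stopped-then-started) container.
# "starrocks" is a placeholder for the actual container name or ID.
docker exec starrocks cat /data/deploy/starrocks/fe/meta/bdb/je.info.0
```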
2023-05-19 07:53:10.922 UTC SEVERE [06f20ff30289_9010_1684482788818] Disk usage is not within je.maxDisk or je.freeDisk limits and write operations are prohibited: maxDiskLimit=0 freeDiskLimit=5,368,709,120 adjustedMaxDiskLimit=0 maxDiskOverage=0 freeDiskShortage=2,428,547,072 diskFreeSpace=2,940,162,048 availableLogSize=-2,428,547,072 totalLogSize=1,412 activeLogSize=1,412 reservedLogSize=0 protectedLogSize=0 protectedLogSizeMap={}
2023-05-19 07:53:11.008 UTC INFO [06f20ff30289_9010_1684482788818] No server auth method
2023-05-19 07:53:11.011 UTC INFO [06f20ff30289_9010_1684482788818] Started ServiceDispatcher. HostPort=06f20ff30289:9010
2023-05-19 07:53:11.013 UTC INFO [06f20ff30289_9010_1684482788818] DataChannel factory: com.sleepycat.je.rep.utilint.net.SimpleChannelFactory
2023-05-19 07:53:11.085 UTC INFO [06f20ff30289_9010_1684482788818] DTVLSN at start:0
2023-05-19 07:53:11.087 UTC INFO [06f20ff30289_9010_1684482788818] node:NullNode(-1) state change from DETACHED to UNKNOWN
2023-05-19 07:53:11.117 UTC INFO [06f20ff30289_9010_1684482788818] Current group size: 0
2023-05-19 07:53:11.117 UTC INFO [06f20ff30289_9010_1684482788818] New node 06f20ff30289_9010_1684482788818(-1) unknown to rep group
2023-05-19 07:53:11.121 UTC INFO [06f20ff30289_9010_1684482788818] Master changed to 06f20ff30289_9010_1684482788818
2023-05-19 07:53:11.121 UTC INFO [06f20ff30289_9010_1684482788818] Nascent group. 06f20ff30289_9010_1684482788818 is master by virtue of being the first node.
2023-05-19 07:53:11.121 UTC INFO [06f20ff30289_9010_1684482788818] node:06f20ff30289_9010_1684482788818(1) state change from UNKNOWN to MASTER
2023-05-19 07:53:11.215 UTC INFO [06f20ff30289_9010_1684482788818] Successfully added node:06f20ff30289_9010_1684482788818(1) HostPort = 06f20ff30289: 9010 [ELECTABLE]
2023-05-19 07:53:11.245 UTC INFO [06f20ff30289_9010_1684482788818] Updated minimum JE group version to 7.5.0
2023-05-19 07:53:11.247 UTC INFO [06f20ff30289_9010_1684482788818] Node 06f20ff30289_9010_1684482788818 started
2023-05-19 07:53:11.248 UTC INFO [06f20ff30289_9010_1684482788818] JEErasure soft shutdown initiated.
2023-05-19 07:53:11.250 UTC INFO [06f20ff30289_9010_1684482788818] RepairReservedFiles-06f20ff30289_9010_1684482788818(-1):/data/deploy/starrocks/fe/meta/bdb soft shutdown initiated.
2023-05-19 07:53:11.250 UTC INFO [06f20ff30289_9010_1684482788818] node:NullNode(-1) state change from MASTER to UNKNOWN
2023-05-19 07:53:11.251 UTC INFO [06f20ff30289_9010_1684482788818] Node waiting for disk space to become available. Disk limit violation:Disk usage is not within je.maxDisk or je.freeDisk limits and write operations are prohibited: maxDiskLimit=0 freeDiskLimit=5,368,709,120 adjustedMaxDiskLimit=0 maxDiskOverage=0 freeDiskShortage=2,431,860,736 diskFreeSpace=2,936,848,384 availableLogSize=-2,431,860,736 totalLogSize=1,412 activeLogSize=1,412 reservedLogSize=0 protectedLogSize=0 protectedLogSizeMap={}
2023-05-19 07:53:11.254 UTC INFO [06f20ff30289_9010_1684482788818] Shutting down node 06f20ff30289_9010_1684482788818(1) DTVLSN:11
2023-05-19 07:53:11.255 UTC INFO [06f20ff30289_9010_1684482788818] Elections shutdown initiated
2023-05-19 07:53:11.256 UTC INFO [06f20ff30289_9010_1684482788818] Acceptor Thread 06f20ff30289_9010_1684482788818 soft shutdown initiated.
2023-05-19 07:53:11.257 UTC INFO [06f20ff30289_9010_1684482788818] Learner Thread 06f20ff30289_9010_1684482788818 soft shutdown initiated.
2023-05-19 07:53:11.257 UTC INFO [06f20ff30289_9010_1684482788818] Elections shutdown completed
2023-05-19 07:53:11.258 UTC INFO [06f20ff30289_9010_1684482788818] UNKNOWN 06f20ff30289_9010_1684482788818(1) soft shutdown initiated.
2023-05-19 07:53:12.252 UTC INFO [06f20ff30289_9010_1684482788818] node:06f20ff30289_9010_1684482788818(1) state change from UNKNOWN to MASTER
2023-05-19 07:53:12.253 UTC INFO [06f20ff30289_9010_1684482788818] Feeder manager accepting requests.
2023-05-19 07:53:12.255 UTC INFO [06f20ff30289_9010_1684482788818] Feeder manager soft shutdown.
2023-05-19 07:53:12.255 UTC INFO [06f20ff30289_9010_1684482788818] Feeder Manager shutting down feeders. Active and nascent feeders:0 Managed feeders:0
2023-05-19 07:53:12.256 UTC INFO [06f20ff30289_9010_1684482788818] Feeder Manager completed shutting down feeders.
2023-05-19 07:53:12.257 UTC INFO [06f20ff30289_9010_1684482788818] Feeder manager exited. CurrentTxnEnd VLSN: 11
2023-05-19 07:53:12.257 UTC INFO [06f20ff30289_9010_1684482788818] node:NullNode(-1) state change from MASTER to UNKNOWN
2023-05-19 07:53:12.257 UTC INFO [06f20ff30289_9010_1684482788818] RepNode main thread shutting down.
2023-05-19 07:53:12.258 UTC INFO [06f20ff30289_9010_1684482788818] node:NullNode(-1) state change from UNKNOWN to DETACHED
2023-05-19 07:53:12.259 UTC INFO [06f20ff30289_9010_1684482788818] RepNode main thread: DETACHED 06f20ff30289_9010_1684482788818(1) exited.
2023-05-19 07:53:12.259 UTC INFO [06f20ff30289_9010_1684482788818] Feeder Manager node: 06f20ff30289_9010_1684482788818 soft shutdown initiated.
2023-05-19 07:53:12.260 UTC INFO [06f20ff30289_9010_1684482788818] ServiceDispatcher shutdown starting. HostPort=06f20ff30289:9010 Registered services: []
2023-05-19 07:53:12.261 UTC INFO [06f20ff30289_9010_1684482788818] ServiceDispatcher-06f20ff30289:9010 soft shutdown initiated.
2023-05-19 07:53:12.263 UTC INFO [06f20ff30289_9010_1684482788818] ServiceDispatcher shutdown completed. HostPort=06f20ff30289:9010
2023-05-19 07:53:12.264 UTC INFO [06f20ff30289_9010_1684482788818] 06f20ff30289_9010_1684482788818(1) shutdown completed.
2023-05-19 07:53:12.274 UTC INFO [06f20ff30289_9010_1684482788818] JEErasure soft shutdown initiated.
2023-05-19 07:53:12.274 UTC INFO [06f20ff30289_9010_1684482788818] RepairReservedFiles-06f20ff30289_9010_1684482788818(-1):/data/deploy/starrocks/fe/meta/bdb soft shutdown initiated.
2023-05-19 07:53:18.405 UTC SEVERE [06f20ff30289_9010_1684482788818] Disk usage is not within je.maxDisk or je.freeDisk limits and write operations are prohibited: maxDiskLimit=0 freeDiskLimit=5,368,709,120 adjustedMaxDiskLimit=0 maxDiskOverage=0 freeDiskShortage=2,528,485,376 diskFreeSpace=2,840,223,744 availableLogSize=-2,528,485,376 totalLogSize=3,801 activeLogSize=3,801 reservedLogSize=0 protectedLogSize=0 protectedLogSizeMap={}
2023-05-19 07:53:18.414 UTC INFO [06f20ff30289_9010_1684482788818] No server auth method
2023-05-19 07:53:18.420 UTC INFO [06f20ff30289_9010_1684482788818] Started ServiceDispatcher. HostPort=06f20ff30289:9010
2023-05-19 07:53:18.423 UTC INFO [06f20ff30289_9010_1684482788818] DataChannel factory: com.sleepycat.je.rep.utilint.net.SimpleChannelFactory
2023-05-19 07:53:18.435 UTC INFO [06f20ff30289_9010_1684482788818] DTVLSN at start:11
2023-05-19 07:53:18.436 UTC INFO [06f20ff30289_9010_1684482788818] node:NullNode(-1) state change from DETACHED to UNKNOWN
2023-05-19 07:53:18.442 UTC INFO [06f20ff30289_9010_1684482788818] Current group size: 1
2023-05-19 07:53:18.443 UTC INFO [06f20ff30289_9010_1684482788818] Existing node 06f20ff30289_9010_1684482788818 querying for a current master.
2023-05-19 07:53:18.493 UTC INFO [06f20ff30289_9010_1684482788818] Node 06f20ff30289_9010_1684482788818 started
2023-05-19 07:53:18.494 UTC INFO [06f20ff30289_9010_1684482788818] Node waiting for disk space to become available. Disk limit violation:Disk usage is not within je.maxDisk or je.freeDisk limits and write operations are prohibited: maxDiskLimit=0 freeDiskLimit=5,368,709,120 adjustedMaxDiskLimit=0 maxDiskOverage=0 freeDiskShortage=2,529,271,808 diskFreeSpace=2,839,437,312 availableLogSize=-2,529,271,808 totalLogSize=3,801 activeLogSize=3,801 reservedLogSize=0 protectedLogSize=0 protectedLogSizeMap={}
It seems I didn't have enough disk space.
Yes, at least 5 GB of free disk space is required; the freeDiskLimit of 5,368,709,120 bytes in the log above is exactly 5 GiB.
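That limit can be checked against the Docker data directory before starting the container. A minimal sketch, assuming Linux with GNU coreutils and the default Docker root of /var/lib/docker (adjust the path if your Docker root differs):

```shell
# Verify there is at least 5 GiB free where Docker stores container data.
# /var/lib/docker is the default Docker root on Linux; change it if yours differs.
REQUIRED=$((5 * 1024 * 1024 * 1024))             # 5368709120 bytes, BDB JE's freeDiskLimit
AVAIL=$(df --output=avail -B1 /var/lib/docker | tail -n 1)
if [ "$AVAIL" -lt "$REQUIRED" ]; then
  echo "Not enough free space: $AVAIL bytes available, $REQUIRED required"
fi
```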
The container still exits. Any ideas?
fe.log
2023-05-19 09:37:22,459 INFO (main|1) [StarRocksFE.start():124] StarRocks FE starting, version: 3.0.0-48f4d81d7b
2023-05-19 09:37:22,480 INFO (main|1) [FrontendOptions.initAddrUseFqdn():212] Use FQDN init local addr, FQDN: 49b25ba119ba, IP: 172.17.0.2
2023-05-19 09:37:23,172 INFO (main|1) [Auth.grantRoleInternal():831] grant operator to 'root'@'%', isReplay = true
2023-05-19 09:37:23,295 INFO (main|1) [AuthorizationManager.initBuiltinRoleUnlocked():286] create built-in role root[-1]
2023-05-19 09:37:23,304 INFO (main|1) [AuthorizationManager.initBuiltinRoleUnlocked():286] create built-in role db_admin[-2]
2023-05-19 09:37:23,306 INFO (main|1) [AuthorizationManager.initBuiltinRoleUnlocked():286] create built-in role cluster_admin[-3]
2023-05-19 09:37:23,306 INFO (main|1) [AuthorizationManager.initBuiltinRoleUnlocked():286] create built-in role user_admin[-4]
2023-05-19 09:37:23,306 INFO (main|1) [AuthorizationManager.initBuiltinRoleUnlocked():286] create built-in role public[-5]
2023-05-19 09:37:23,307 INFO (main|1) [GlobalStateMgr.initAuth():1015] using new privilege framework..
2023-05-19 09:37:23,969 INFO (main|1) [NodeMgr.getHelperNodes():645] get helper nodes: [49b25ba119ba:9010]
2023-05-19 09:37:24,220 INFO (main|1) [NodeMgr.getClusterIdAndRoleOnStartup():241] very first time to start this node. role: FOLLOWER, node name: 49b25ba119ba_9010_1684489044008
2023-05-19 09:37:24,395 INFO (main|1) [NodeMgr.getClusterIdAndRoleOnStartup():438] Current run_mode is shared_nothing
2023-05-19 09:37:24,395 INFO (main|1) [NodeMgr.getClusterIdAndRoleOnStartup():445] Got cluster id: 27254151, role: FOLLOWER, node name: 49b25ba119ba_9010_1684489044008 and run_mode: shared_nothing
2023-05-19 09:37:24,398 INFO (main|1) [BDBEnvironment.ensureHelperInLocal():340] skip check local environment because helper node and local node are identical.
2023-05-19 09:37:24,808 INFO (main|1) [BDBEnvironment.setupEnvironment():269] start to setup bdb environment for 1 times
2023-05-19 09:37:28,626 WARN (MASTER 49b25ba119ba_9010_1684489044008(1)|1) [BDBEnvironment.setupEnvironment():300] failed to setup environment because of UnknowMasterException for the first time, ignore it.
2023-05-19 09:37:33,627 INFO (MASTER 49b25ba119ba_9010_1684489044008(1)|1) [BDBEnvironment.setupEnvironment():269] start to setup bdb environment for 2 times
2023-05-19 09:37:33,981 WARN (UNKNOWN 49b25ba119ba_9010_1684489044008(-1)|1) [StateChangeExecutor.notifyNewFETypeTransfer():62] notify new FE type transfer: UNKNOWN
2023-05-19 09:37:34,045 WARN (RepNode 49b25ba119ba_9010_1684489044008(-1)|76) [StateChangeExecutor.notifyNewFETypeTransfer():62] notify new FE type transfer: LEADER
2023-05-19 09:37:34,045 INFO (UNKNOWN 49b25ba119ba_9010_1684489044008(-1)|1) [BDBEnvironment.setupEnvironment():280] replicated environment is all set, wait for state change...
2023-05-19 09:37:34,051 INFO (UNKNOWN 49b25ba119ba_9010_1684489044008(-1)|1) [BDBEnvironment.setupEnvironment():288] state change done, current role LEADER
2023-05-19 09:37:34,067 INFO (UNKNOWN 49b25ba119ba_9010_1684489044008(-1)|1) [BDBEnvironment.setupEnvironment():292] end setup bdb environment after 2 times
2023-05-19 09:37:34,079 INFO (UNKNOWN 49b25ba119ba_9010_1684489044008(-1)|1) [GlobalStateMgr.loadImage():1333] image does not exist: /data/deploy/starrocks/fe/meta/image/image.0
2023-05-19 09:37:34,081 INFO (stateChangeExecutor|84) [StateChangeExecutor.runOneCycle():85] begin to transfer FE type from INIT to UNKNOWN
2023-05-19 09:37:34,082 INFO (stateChangeExecutor|84) [StateChangeExecutor.runOneCycle():179] finished to transfer FE type to INIT
2023-05-19 09:37:34,082 INFO (stateChangeExecutor|84) [StateChangeExecutor.runOneCycle():85] begin to transfer FE type from INIT to LEADER
2023-05-19 09:37:34,085 INFO (stateChangeExecutor|84) [BDBJEJournal.open():207] the very first time to open bdb, dbname is 1
2023-05-19 09:37:34,096 INFO (stateChangeExecutor|84) [BDBHA.fencing():94] start fencing, epoch number is 1
2023-05-19 09:37:34,114 INFO (stateChangeExecutor|84) [GlobalStateMgr.replayJournal():1942] skip replay journal because 0 <= 0
2023-05-19 09:37:35,270 INFO (stateChangeExecutor|84) [GlobalStateMgr.reInitializeNewPrivilegeOnUpgrade():3679] reinitialize privilege info before upgrade
2023-05-19 09:37:35,270 WARN (stateChangeExecutor|84) [AuthenticationProviderFactory.installPlugin():32] Plugin MYSQL_NATIVE_PASSWORD has already been installed!
2023-05-19 09:37:35,271 WARN (stateChangeExecutor|84) [AuthenticationProviderFactory.installPlugin():32] Plugin AUTHENTICATION_LDAP_SIMPLE has already been installed!
2023-05-19 09:37:35,271 WARN (stateChangeExecutor|84) [AuthenticationProviderFactory.installPlugin():32] Plugin AUTHENTICATION_KERBEROS has already been installed!
2023-05-19 09:37:35,272 INFO (stateChangeExecutor|84) [AuthorizationManager.initBuiltinRoleUnlocked():286] create built-in role root[-1]
2023-05-19 09:37:35,272 INFO (stateChangeExecutor|84) [AuthorizationManager.initBuiltinRoleUnlocked():286] create built-in role db_admin[-2]
2023-05-19 09:37:35,273 INFO (stateChangeExecutor|84) [AuthorizationManager.initBuiltinRoleUnlocked():286] create built-in role cluster_admin[-3]
2023-05-19 09:37:35,273 INFO (stateChangeExecutor|84) [AuthorizationManager.initBuiltinRoleUnlocked():286] create built-in role user_admin[-4]
2023-05-19 09:37:35,273 INFO (stateChangeExecutor|84) [AuthorizationManager.initBuiltinRoleUnlocked():286] create built-in role public[-5]
2023-05-19 09:37:35,277 INFO (stateChangeExecutor|84) [AuthUpgrader.upgradeAsLeader():79] start to upgrade as leader.
2023-05-19 09:37:35,279 INFO (stateChangeExecutor|84) [AuthUpgrader.upgradeUser():177] ignore root entry : global priv. host: %, user: root, priv: Node_priv Admin_priv , set by resolver: false, domain user ident: null
2023-05-19 09:37:35,368 INFO (stateChangeExecutor|84) [AuthUpgrader.upgradeAsLeader():83] upgraded as leader successfully.
2023-05-19 09:37:35,368 INFO (stateChangeExecutor|84) [GlobalStateMgr.transferToLeader():1161] set usingNewPrivilege to true after transfer to leader
2023-05-19 09:37:35,370 INFO (stateChangeExecutor|84) [GlobalStateMgr.startLeaderOnlyDaemonThreads():1220] checkpointer thread started. thread id is 91
2023-05-19 09:37:35,459 INFO (stateChangeExecutor|84) [GlobalStateMgr.transferToLeader():1178] leader finished to replay journal, can write now.
2023-05-19 09:37:35,472 INFO (leaderCheckpointer|91) [BDBJEJournal.getFinalizedJournalId():275] database names: 1
2023-05-19 09:37:35,473 INFO (leaderCheckpointer|91) [Checkpoint.runAfterCatalogReady():95] checkpoint imageVersion 0, checkPointVersion 0
2023-05-19 09:37:35,519 INFO (tablet checker|34) [TabletChecker.doCheck():409] finished to check tablets. checkInPrios: true, unhealthy/total/added/in_sched/not_ready: 0/0/0/0/0, cost: 0 ms, in lock time: 0 ms
2023-05-19 09:37:35,519 INFO (tablet checker|34) [TabletChecker.doCheck():409] finished to check tablets. checkInPrios: false, unhealthy/total/added/in_sched/not_ready: 0/0/0/0/0, cost: 0 ms, in lock time: 0 ms
2023-05-19 09:37:35,528 INFO (tablet scheduler|33) [TabletScheduler.updateClusterLoadStatistic():396] update cluster load statistic:
2023-05-19 09:37:35,533 INFO (colocate group clone checker|99) [ColocateTableBalancer.matchGroup():856] finished to match colocate group. cost: 0 ms, in lock time: 0 ms
2023-05-19 09:37:35,537 INFO (tablet checker|34) [TabletChecker.runAfterCatalogReady():200] TStat :
TStat num of tablet check round: 1 (+1)
TStat cost of tablet check(ms): 0 (+0)
TStat num of tablet checked in tablet checker: 0 (+0)
TStat num of unhealthy tablet checked in tablet checker: 0 (+0)
TStat num of tablet being added to tablet scheduler: 0 (+0)
TStat num of tablet schedule round: 0 (+0)
TStat cost of tablet schedule(ms): 0 (+0)
TStat num of tablet being scheduled: 0 (+0)
TStat num of tablet being scheduled succeeded: 0 (+0)
TStat num of tablet being scheduled failed: 0 (+0)
TStat num of tablet being scheduled discard: 0 (+0)
TStat num of tablet priority upgraded: 0 (+0)
TStat num of clone task: 0 (+0)
TStat num of clone task succeeded: 0 (+0)
TStat num of clone task failed: 0 (+0)
TStat num of clone task timeout: 0 (+0)
TStat num of replica missing error: 0 (+0)
TStat num of replica version missing error: 0 (+0)
TStat num of replica unavailable error: 0 (+0)
TStat num of replica redundant error: 0 (+0)
TStat num of replica missing in cluster error: 0 (+0)
TStat num of balance scheduled: 0 (+0)
TStat num of colocate replica mismatch: 0 (+0)
TStat num of colocate replica redundant: 0 (+0)
TStat num of colocate balancer running round: 0 (+0)
2023-05-19 09:37:35,606 INFO (tablet stat mgr|26) [TabletStatMgr.updateLocalTabletStat():158] finished to get local tablet stat of all backends. cost: 0 ms
2023-05-19 09:37:35,607 INFO (tablet stat mgr|26) [TabletStatMgr.runAfterCatalogReady():126] finished to update index row num of all databases. cost: 0 ms
2023-05-19 09:37:35,607 INFO (TaskCleaner|89) [TaskManager.dropTasks():350] drop tasks:[]
2023-05-19 09:37:35,608 INFO (TaskCleaner|89) [TaskManager.removeExpiredTaskRuns():760] remove run history:[]
2023-05-19 09:37:35,576 WARN (Routine load task scheduler|40) [RoutineLoadTaskScheduler.process():117] no available be slot to scheduler tasks, wait for 10 seconds to scheduler again, you can set max_routine_load_task_num_per_be bigger in fe.conf, current value is 5
2023-05-19 09:37:35,610 INFO (stateChangeExecutor|84) [StateChangeExecutor.runOneCycle():179] finished to transfer FE type to INIT
2023-05-19 09:37:36,080 INFO (UNKNOWN 49b25ba119ba_9010_1684489044008(-1)|1) [GlobalStateMgr.waitForReady():1066] wait globalStateMgr to be ready. FE type: LEADER. is ready: true
2023-05-19 09:37:36,081 INFO (UNKNOWN 49b25ba119ba_9010_1684489044008(-1)|1) [GlobalStateMgr.waitForReady():1052] globalStateMgr is ready. FE type: LEADER
2023-05-19 09:37:37,279 INFO (UNKNOWN 49b25ba119ba_9010_1684489044008(-1)|1) [FeServer.start():65] thrift server started with port 9020.
2023-05-19 09:37:37,320 INFO (UNKNOWN 49b25ba119ba_9010_1684489044008(-1)|1) [NMysqlServer.start():90] Open mysql server success on 9030
2023-05-19 09:37:37,325 INFO (UNKNOWN 49b25ba119ba_9010_1684489044008(-1)|1) [QeService.start():78] QE service start.
2023-05-19 09:37:37,507 INFO (FE Http Server|115) [HttpServer$HttpServerThread.run():262] HttpServer started with port 8030
2023-05-19 09:37:45,611 WARN (Routine load task scheduler|40) [RoutineLoadTaskScheduler.process():117] no available be slot to scheduler tasks, wait for 10 seconds to scheduler again, you can set max_routine_load_task_num_per_be bigger in fe.conf, current value is 5
2023-05-19 09:37:55,533 INFO (colocate group clone checker|99) [ColocateTableBalancer.matchGroup():856] finished to match colocate group. cost: 0 ms, in lock time: 0 ms
2023-05-19 09:37:55,606 INFO (tablet checker|34) [TabletChecker.doCheck():409] finished to check tablets. checkInPrios: true, unhealthy/total/added/in_sched/not_ready: 0/0/0/0/0, cost: 0 ms, in lock time: 0 ms
2023-05-19 09:37:55,607 INFO (tablet checker|34) [TabletChecker.doCheck():409] finished to check tablets. checkInPrios: false, unhealthy/total/added/in_sched/not_ready: 0/0/0/0/0, cost: 0 ms, in lock time: 0 ms
2023-05-19 09:37:55,607 INFO (tablet checker|34) [TabletChecker.runAfterCatalogReady():200] TStat :
TStat num of tablet check round: 2 (+1)
TStat cost of tablet check(ms): 0 (+0)
TStat num of tablet checked in tablet checker: 0 (+0)
TStat num of unhealthy tablet checked in tablet checker: 0 (+0)
TStat num of tablet being added to tablet scheduler: 0 (+0)
TStat num of tablet schedule round: 20 (+20)
TStat cost of tablet schedule(ms): 6 (+1)
TStat num of tablet being scheduled: 0 (+0)
TStat num of tablet being scheduled succeeded: 0 (+0)
TStat num of tablet being scheduled failed: 0 (+0)
TStat num of tablet being scheduled discard: 0 (+0)
TStat num of tablet priority upgraded: 0 (+0)
TStat num of clone task: 0 (+0)
TStat num of clone task succeeded: 0 (+0)
TStat num of clone task failed: 0 (+0)
TStat num of clone task timeout: 0 (+0)
TStat num of replica missing error: 0 (+0)
TStat num of replica version missing error: 0 (+0)
TStat num of replica unavailable error: 0 (+0)
TStat num of replica redundant error: 0 (+0)
TStat num of replica missing in cluster error: 0 (+0)
TStat num of balance scheduled: 0 (+0)
TStat num of colocate replica mismatch: 0 (+0)
TStat num of colocate replica redundant: 0 (+0)
TStat num of colocate balancer running round: 0 (+0)
2023-05-19 09:37:55,612 WARN (Routine load task scheduler|40) [RoutineLoadTaskScheduler.process():117] no available be slot to scheduler tasks, wait for 10 seconds to scheduler again, you can set max_routine_load_task_num_per_be bigger in fe.conf, current value is 5
2023-05-19 09:38:00,385 INFO (starrocks-mysql-nio I/O-1|108) [AcceptListener.handleEvent():70] Connection established. remote=/172.17.0.2:51930
2023-05-19 09:38:01,581 INFO (starrocks-mysql-nio-pool-0|122) [QeProcessorImpl.registerQuery():95] register query id = d82b8a39-f628-11ed-94b4-0242ac110002
2023-05-19 09:38:01,603 WARN (starrocks-mysql-nio-pool-0|122) [CoordinatorPreprocessor.computeFragmentHosts():523] DataPartition UNPARTITIONED. Backend node not found. Check if any backend node is down.
2023-05-19 09:38:01,623 INFO (starrocks-mysql-nio-pool-0|122) [QeProcessorImpl.unregisterQuery():105] deregister query id d82b8a39-f628-11ed-94b4-0242ac110002
2023-05-19 09:38:01,624 INFO (starrocks-mysql-nio-pool-0|122) [StmtExecutor.execute():553] execute Exception, sql: select @@version_comment limit 1, error: Backend node not found. Check if any backend node is down.
2023-05-19 09:38:01,626 WARN (starrocks-mysql-nio-pool-0|122) [Coordinator.cancel():1361] cancel execution of query, this is outside invoke
2023-05-19 09:38:01,630 INFO (starrocks-mysql-nio-pool-0|122) [ResourceGroupMetricMgr.createQeuryResourceGroupMetrics():95] Add query_resource_group metric, resource group name is default_wg
2023-05-19 09:38:01,630 INFO (starrocks-mysql-nio-pool-0|122) [ResourceGroupMetricMgr.createQeuryResourceGroupMetrics():95] Add query_resource_group_err metric, resource group name is default_wg
2023-05-19 09:38:01,677 INFO (starrocks-mysql-nio-pool-0|122) [SystemInfoService.addBackend():247] finished to add Backend [id=10006, host=49b25ba119ba, heartbeatPort=9050, alive=false]
2023-05-19 09:38:02,666 INFO (tablet scheduler|33) [TabletScheduler.updateWorkingSlots():250] add new backend 10006 with slots num: 0
2023-05-19 09:38:05,613 WARN (Routine load task scheduler|40) [RoutineLoadTaskScheduler.process():117] no available be slot to scheduler tasks, wait for 10 seconds to scheduler again, you can set max_routine_load_task_num_per_be bigger in fe.conf, current value is 5
2023-05-19 09:38:06,596 WARN (heartbeat-mgr-pool-6|125) [HeartbeatMgr$BackendHeartbeatHandler.call():337] backend heartbeat got exception, addr: 49b25ba119ba:9050
org.apache.thrift.transport.TTransportException: java.net.ConnectException: Connection refused (Connection refused)
at org.apache.thrift.transport.TSocket.open(TSocket.java:226) ~[libthrift-0.13.0.jar:0.13.0]
at com.starrocks.common.GenericPool$ThriftClientFactory.create(GenericPool.java:144) ~[starrocks-fe.jar:?]
at com.starrocks.common.GenericPool$ThriftClientFactory.create(GenericPool.java:129) ~[starrocks-fe.jar:?]
at org.apache.commons.pool2.BaseKeyedPooledObjectFactory.makeObject(BaseKeyedPooledObjectFactory.java:62) ~[commons-pool2-2.3.jar:2.3]
at org.apache.commons.pool2.impl.GenericKeyedObjectPool.create(GenericKeyedObjectPool.java:1036) ~[commons-pool2-2.3.jar:2.3]
at org.apache.commons.pool2.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:356) ~[commons-pool2-2.3.jar:2.3]
at org.apache.commons.pool2.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:278) ~[commons-pool2-2.3.jar:2.3]
at com.starrocks.common.GenericPool.borrowObject(GenericPool.java:101) ~[starrocks-fe.jar:?]
at com.starrocks.system.HeartbeatMgr$BackendHeartbeatHandler.call(HeartbeatMgr.java:289) [starrocks-fe.jar:?]
at com.starrocks.system.HeartbeatMgr$BackendHeartbeatHandler.call(HeartbeatMgr.java:275) [starrocks-fe.jar:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method) ~[?:?]
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:412) ~[?:?]
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:255) ~[?:?]
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:237) ~[?:?]
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) ~[?:?]
at java.net.Socket.connect(Socket.java:609) ~[?:?]
at org.apache.thrift.transport.TSocket.open(TSocket.java:221) ~[libthrift-0.13.0.jar:0.13.0]
... 13 more
2023-05-19 09:38:06,614 WARN (heartbeat mgr|32) [HeartbeatMgr.runAfterCatalogReady():177] get bad heartbeat response: type: BACKEND, status: BAD, msg: java.net.ConnectException: Connection refused (Connection refused)
2023-05-19 09:38:11,632 WARN (heartbeat-mgr-pool-0|105) [HeartbeatMgr$BackendHeartbeatHandler.call():337] backend heartbeat got exception, addr: 49b25ba119ba:9050
org.apache.thrift.transport.TTransportException: java.net.ConnectException: Connection refused (Connection refused)
at org.apache.thrift.transport.TSocket.open(TSocket.java:226) ~[libthrift-0.13.0.jar:0.13.0]
at com.starrocks.common.GenericPool$ThriftClientFactory.create(GenericPool.java:144) ~[starrocks-fe.jar:?]
at com.starrocks.common.GenericPool$ThriftClientFactory.create(GenericPool.java:129) ~[starrocks-fe.jar:?]
at org.apache.commons.pool2.BaseKeyedPooledObjectFactory.makeObject(BaseKeyedPooledObjectFactory.java:62) ~[commons-pool2-2.3.jar:2.3]
at org.apache.commons.pool2.impl.GenericKeyedObjectPool.create(GenericKeyedObjectPool.java:1036) ~[commons-pool2-2.3.jar:2.3]
at org.apache.commons.pool2.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:356) ~[commons-pool2-2.3.jar:2.3]
at org.apache.commons.pool2.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:278) ~[commons-pool2-2.3.jar:2.3]
at com.starrocks.common.GenericPool.borrowObject(GenericPool.java:101) ~[starrocks-fe.jar:?]
at com.starrocks.system.HeartbeatMgr$BackendHeartbeatHandler.call(HeartbeatMgr.java:289) [starrocks-fe.jar:?]
at com.starrocks.system.HeartbeatMgr$BackendHeartbeatHandler.call(HeartbeatMgr.java:275) [starrocks-fe.jar:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method) ~[?:?]
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:412) ~[?:?]
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:255) ~[?:?]
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:237) ~[?:?]
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) ~[?:?]
at java.net.Socket.connect(Socket.java:609) ~[?:?]
at org.apache.thrift.transport.TSocket.open(TSocket.java:221) ~[libthrift-0.13.0.jar:0.13.0]
... 13 more
2023-05-19 09:38:11,634 WARN (heartbeat mgr|32) [HeartbeatMgr.runAfterCatalogReady():177] get bad heartbeat response: type: BACKEND, status: BAD, msg: java.net.ConnectException: Connection refused (Connection refused)
2023-05-19 09:38:11,694 INFO (starrocks-mysql-nio I/O-2|109) [AcceptListener.handleEvent():70] Connection established. remote=/127.0.0.1:57056
2023-05-19 09:38:11,705 INFO (starrocks-mysql-nio-pool-0|122) [QeProcessorImpl.registerQuery():95] register query id = dee41efb-f628-11ed-94b4-0242ac110002
2023-05-19 09:38:11,706 WARN (starrocks-mysql-nio-pool-0|122) [CoordinatorPreprocessor.computeFragmentHosts():523] DataPartition UNPARTITIONED. Backend node not found. Check if any backend node is down.
2023-05-19 09:38:11,706 INFO (starrocks-mysql-nio-pool-0|122) [QeProcessorImpl.unregisterQuery():105] deregister query id dee41efb-f628-11ed-94b4-0242ac110002
2023-05-19 09:38:11,707 INFO (starrocks-mysql-nio-pool-0|122) [StmtExecutor.execute():553] execute Exception, sql: select @@version_comment limit 1, error: Backend node not found. Check if any backend node is down.backend: [49b25ba119ba alive: false inBlacklist: false]
2023-05-19 09:38:11,707 WARN (starrocks-mysql-nio-pool-0|122) [Coordinator.cancel():1361] cancel execution of query, this is outside invoke
2023-05-19 09:38:11,728 INFO (starrocks-mysql-nio-pool-0|122) [BackendsProcDir.getClusterBackendInfos():207] backends proc get tablet num cost: 1, total cost: 5
fe.out
using java version 11
-Dlog4j2.formatMsgNoLookups=true -Xmx8192m -XX:SurvivorRatio=8 -XX:MaxTenuringThreshold=7 -XX:+CMSClassUnloadingEnabled -XX:-CMSParallelRemarkEnabled -XX:CMSInitiatingOccupancyFraction=80 -XX:SoftRefLRUPolicyMSPerMB=0 -Xlog:gc*:/data/deploy/starrocks/fe/log/fe.gc.log.20230519-093718:time
Fri May 19 09:37:19 UTC 2023
=====
<?xml version="1.0" encoding="utf-8"?>
<Configuration status="info" packages="com.starrocks.common">
<Appenders>
<Console name="ConsoleErr" target="SYSTEM_ERR" follow="true">
<PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss,SSS} %p (%t|%tid) [%C{1}.%M():%L] %m%n"/>
</Console>
<RollingFile name="Sys" fileName="/data/deploy/starrocks/fe/log/fe.log" filePattern="/data/deploy/starrocks/fe/log/fe.log.%d{yyyyMMdd}-%i">
<PatternLayout charset="UTF-8">
<Pattern>%d{yyyy-MM-dd HH:mm:ss,SSS} %p (%t|%tid) [%C{1}.%M():%L] %m%n</Pattern>
</PatternLayout>
<Policies>
<TimeBasedTriggeringPolicy/>
<SizeBasedTriggeringPolicy size="1024MB"/>
</Policies>
<DefaultRolloverStrategy max="10" fileIndex="min">
<Delete basePath="/data/deploy/starrocks/fe/log/" maxDepth="1" followLinks="true">
<IfFileName glob="fe.log.*" />
<IfLastModified age="7d" />
</Delete>
</DefaultRolloverStrategy>
</RollingFile>
<RollingFile name="SysWF" fileName="/data/deploy/starrocks/fe/log/fe.warn.log" filePattern="/data/deploy/starrocks/fe/log/fe.warn.log.%d{yyyyMMdd}-%i">
<PatternLayout charset="UTF-8">
<Pattern>%d{yyyy-MM-dd HH:mm:ss,SSS} %p (%t|%tid) [%C{1}.%M():%L] %m%n</Pattern>
</PatternLayout>
<Policies>
<TimeBasedTriggeringPolicy/>
<SizeBasedTriggeringPolicy size="1024MB"/>
</Policies>
<DefaultRolloverStrategy max="10" fileIndex="min">
<Delete basePath="/data/deploy/starrocks/fe/log/" maxDepth="1" followLinks="true">
<IfFileName glob="fe.warn.log.*" />
<IfLastModified age="7d" />
</Delete>
</DefaultRolloverStrategy>
</RollingFile>
<RollingFile name="Auditfile" fileName="/data/deploy/starrocks/fe/log/fe.audit.log" filePattern="/data/deploy/starrocks/fe/log/fe.audit.log.%d{yyyyMMdd}-%i">
<PatternLayout charset="UTF-8">
<Pattern>%d{yyyy-MM-dd HH:mm:ss,SSS} [%c{1}] %m%n</Pattern>
</PatternLayout>
<Policies>
<TimeBasedTriggeringPolicy/>
<SizeBasedTriggeringPolicy size="1024MB"/>
</Policies>
<DefaultRolloverStrategy max="10" fileIndex="min">
<Delete basePath="/data/deploy/starrocks/fe/log/" maxDepth="1" followLinks="true">
<IfFileName glob="fe.audit.log.*" />
<IfLastModified age="30d" />
</Delete>
</DefaultRolloverStrategy>
</RollingFile>
<RollingFile name="dumpFile" fileName="/data/deploy/starrocks/fe/log/fe.dump.log" filePattern="/data/deploy/starrocks/fe/log/fe.dump.log.%d{yyyyMMdd}-%i">
<PatternLayout charset="UTF-8">
<Pattern>%d{yyyy-MM-dd HH:mm:ss,SSS} [%c{1}] %m%n</Pattern>
</PatternLayout>
<Policies>
<TimeBasedTriggeringPolicy/>
<SizeBasedTriggeringPolicy size="1024MB"/>
</Policies>
<DefaultRolloverStrategy max="10" fileIndex="min">
<Delete basePath="/data/deploy/starrocks/fe/log/" maxDepth="1" followLinks="true">
<IfFileName glob="fe.dump.log.*" />
<IfLastModified age="7d" />
</Delete>
</DefaultRolloverStrategy>
</RollingFile>
<RollingFile name="BigQueryFile" fileName="/data/deploy/starrocks/fe/log/fe.big_query.log" filePattern="/data/deploy/starrocks/fe/log/fe.big_query.log.%d{yyyyMMdd}-%i">
<PatternLayout charset="UTF-8">
<Pattern>%d{yyyy-MM-dd HH:mm:ss,SSS} [%c{1}] %m%n</Pattern>
</PatternLayout>
<Policies>
<TimeBasedTriggeringPolicy/>
<SizeBasedTriggeringPolicy size="1024MB"/>
</Policies>
<DefaultRolloverStrategy max="10" fileIndex="min">
<Delete basePath="/data/deploy/starrocks/fe/log/" maxDepth="1" followLinks="true">
<IfFileName glob="fe.big_query.log.*" />
<IfLastModified age="7d" />
</Delete>
</DefaultRolloverStrategy>
</RollingFile>
</Appenders>
<Loggers>
<Root level="INFO">
<AppenderRef ref="Sys"/>
<AppenderRef ref="SysWF" level="WARN"/>
</Root>
<Logger name="audit" level="ERROR" additivity="false">
<AppenderRef ref="Auditfile"/>
</Logger>
<Logger name="dump" level="ERROR" additivity="false">
<AppenderRef ref="dumpFile"/>
</Logger>
<Logger name="big_query" level="ERROR" additivity="false">
<AppenderRef ref="BigQueryFile"/>
</Logger>
<Logger name="org.apache.thrift" level="DEBUG">
<AppenderRef ref="Sys"/>
</Logger>
<Logger name="org.apache.thrift.transport" level="DEBUG">
<AppenderRef ref="Sys"/>
</Logger>
<Logger name="com.starrocks.thrift" level="DEBUG">
<AppenderRef ref="Sys"/>
</Logger>
<Logger name="org.apache.kafka" level="WARN">
<AppenderRef ref="SysWF"/>
</Logger>
<Logger name='audit.slow_query' level='INFO'/><Logger name='audit.query' level='INFO'/><Logger name='dump.query' level='INFO'/><Logger name='big_query.query' level='INFO'/>
</Loggers>
</Configuration>
=====
WARNING: sun.reflect.Reflection.getCallerClass is not supported. This will impact performance.
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/data/deploy/starrocks/fe/lib/spark-dpp-1.0.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/data/deploy/starrocks/fe/lib/log4j-slf4j-impl-2.17.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
[2023-05-19 09:37:33] notify new FE type transfer: UNKNOWN
[2023-05-19 09:37:34] notify new FE type transfer: LEADER
[2023-05-19 09:37:35] leader finished to replay journal, can write now.
je.info.0
2023-05-19 09:37:25.773 UTC INFO [49b25ba119ba_9010_1684489044008] No server auth method
2023-05-19 09:37:25.780 UTC INFO [49b25ba119ba_9010_1684489044008] Started ServiceDispatcher. HostPort=49b25ba119ba:9010
2023-05-19 09:37:25.793 UTC INFO [49b25ba119ba_9010_1684489044008] DataChannel factory: com.sleepycat.je.rep.utilint.net.SimpleChannelFactory
2023-05-19 09:37:25.850 UTC INFO [49b25ba119ba_9010_1684489044008] DTVLSN at start:0
2023-05-19 09:37:25.852 UTC INFO [49b25ba119ba_9010_1684489044008] node:NullNode(-1) state change from DETACHED to UNKNOWN
2023-05-19 09:37:25.887 UTC INFO [49b25ba119ba_9010_1684489044008] Current group size: 0
2023-05-19 09:37:25.887 UTC INFO [49b25ba119ba_9010_1684489044008] New node 49b25ba119ba_9010_1684489044008(-1) unknown to rep group
2023-05-19 09:37:25.889 UTC INFO [49b25ba119ba_9010_1684489044008] Master changed to 49b25ba119ba_9010_1684489044008
2023-05-19 09:37:25.890 UTC INFO [49b25ba119ba_9010_1684489044008] Nascent group. 49b25ba119ba_9010_1684489044008 is master by virtue of being the first node.
2023-05-19 09:37:25.890 UTC INFO [49b25ba119ba_9010_1684489044008] node:49b25ba119ba_9010_1684489044008(1) state change from UNKNOWN to MASTER
2023-05-19 09:37:28.562 UTC INFO [49b25ba119ba_9010_1684489044008] Successfully added node:49b25ba119ba_9010_1684489044008(1) HostPort = 49b25ba119ba: 9010 [ELECTABLE]
2023-05-19 09:37:28.578 UTC INFO [49b25ba119ba_9010_1684489044008] Updated minimum JE group version to 7.5.0
2023-05-19 09:37:28.580 UTC INFO [49b25ba119ba_9010_1684489044008] Node 49b25ba119ba_9010_1684489044008 started
2023-05-19 09:37:28.581 UTC INFO [49b25ba119ba_9010_1684489044008] node:NullNode(-1) state change from MASTER to UNKNOWN
2023-05-19 09:37:28.582 UTC INFO [49b25ba119ba_9010_1684489044008] node:49b25ba119ba_9010_1684489044008(1) state change from UNKNOWN to MASTER
2023-05-19 09:37:28.583 UTC INFO [49b25ba119ba_9010_1684489044008] JEErasure soft shutdown initiated.
2023-05-19 09:37:28.587 UTC INFO [49b25ba119ba_9010_1684489044008] Feeder manager accepting requests.
2023-05-19 09:37:28.593 UTC INFO [49b25ba119ba_9010_1684489044008] RepairReservedFiles-49b25ba119ba_9010_1684489044008(-1):/data/deploy/starrocks/fe/meta/bdb soft shutdown initiated.
2023-05-19 09:37:28.598 UTC INFO [49b25ba119ba_9010_1684489044008] Shutting down node 49b25ba119ba_9010_1684489044008(1) DTVLSN:11
2023-05-19 09:37:28.599 UTC INFO [49b25ba119ba_9010_1684489044008] Elections shutdown initiated
2023-05-19 09:37:28.599 UTC INFO [49b25ba119ba_9010_1684489044008] Acceptor Thread 49b25ba119ba_9010_1684489044008 soft shutdown initiated.
2023-05-19 09:37:28.600 UTC INFO [49b25ba119ba_9010_1684489044008] Learner Thread 49b25ba119ba_9010_1684489044008 soft shutdown initiated.
2023-05-19 09:37:28.602 UTC INFO [49b25ba119ba_9010_1684489044008] Elections shutdown completed
2023-05-19 09:37:28.603 UTC INFO [49b25ba119ba_9010_1684489044008] Feeder manager soft shutdown.
2023-05-19 09:37:28.604 UTC INFO [49b25ba119ba_9010_1684489044008] MASTER 49b25ba119ba_9010_1684489044008(1) soft shutdown initiated.
2023-05-19 09:37:28.604 UTC INFO [49b25ba119ba_9010_1684489044008] Feeder Manager shutting down feeders. Active and nascent feeders:0 Managed feeders:0
2023-05-19 09:37:28.605 UTC INFO [49b25ba119ba_9010_1684489044008] Feeder Manager completed shutting down feeders.
2023-05-19 09:37:28.606 UTC INFO [49b25ba119ba_9010_1684489044008] Feeder manager exited. CurrentTxnEnd VLSN: 11
2023-05-19 09:37:28.606 UTC INFO [49b25ba119ba_9010_1684489044008] node:NullNode(-1) state change from MASTER to UNKNOWN
2023-05-19 09:37:28.607 UTC INFO [49b25ba119ba_9010_1684489044008] RepNode main thread shutting down.
2023-05-19 09:37:28.609 UTC INFO [49b25ba119ba_9010_1684489044008] node:NullNode(-1) state change from UNKNOWN to DETACHED
2023-05-19 09:37:28.611 UTC INFO [49b25ba119ba_9010_1684489044008] RepNode main thread: DETACHED 49b25ba119ba_9010_1684489044008(1) exited.
2023-05-19 09:37:28.612 UTC INFO [49b25ba119ba_9010_1684489044008] Feeder Manager node: 49b25ba119ba_9010_1684489044008 soft shutdown initiated.
2023-05-19 09:37:28.613 UTC INFO [49b25ba119ba_9010_1684489044008] ServiceDispatcher shutdown starting. HostPort=49b25ba119ba:9010 Registered services: []
2023-05-19 09:37:28.613 UTC INFO [49b25ba119ba_9010_1684489044008] ServiceDispatcher-49b25ba119ba:9010 soft shutdown initiated.
2023-05-19 09:37:28.615 UTC INFO [49b25ba119ba_9010_1684489044008] ServiceDispatcher shutdown completed. HostPort=49b25ba119ba:9010
2023-05-19 09:37:28.615 UTC INFO [49b25ba119ba_9010_1684489044008] 49b25ba119ba_9010_1684489044008(1) shutdown completed.
2023-05-19 09:37:28.623 UTC INFO [49b25ba119ba_9010_1684489044008] JEErasure soft shutdown initiated.
2023-05-19 09:37:28.624 UTC INFO [49b25ba119ba_9010_1684489044008] RepairReservedFiles-49b25ba119ba_9010_1684489044008(-1):/data/deploy/starrocks/fe/meta/bdb soft shutdown initiated.
2023-05-19 09:37:33.890 UTC INFO [49b25ba119ba_9010_1684489044008] No server auth method
2023-05-19 09:37:33.898 UTC INFO [49b25ba119ba_9010_1684489044008] Started ServiceDispatcher. HostPort=49b25ba119ba:9010
2023-05-19 09:37:33.913 UTC INFO [49b25ba119ba_9010_1684489044008] DataChannel factory: com.sleepycat.je.rep.utilint.net.SimpleChannelFactory
2023-05-19 09:37:33.919 UTC INFO [49b25ba119ba_9010_1684489044008] DTVLSN at start:11
2023-05-19 09:37:33.919 UTC INFO [49b25ba119ba_9010_1684489044008] node:NullNode(-1) state change from DETACHED to UNKNOWN
2023-05-19 09:37:33.928 UTC INFO [49b25ba119ba_9010_1684489044008] Current group size: 1
2023-05-19 09:37:33.928 UTC INFO [49b25ba119ba_9010_1684489044008] Existing node 49b25ba119ba_9010_1684489044008 querying for a current master.
2023-05-19 09:37:33.969 UTC INFO [49b25ba119ba_9010_1684489044008] Node 49b25ba119ba_9010_1684489044008 started
2023-05-19 09:37:33.977 UTC INFO [49b25ba119ba_9010_1684489044008] Election initiated; election #1
2023-05-19 09:37:33.984 UTC INFO [49b25ba119ba_9010_1684489044008] Started election thread Fri May 19 09:37:33 UTC 2023
2023-05-19 09:37:34.024 UTC INFO [49b25ba119ba_9010_1684489044008] Winning proposal: Proposal(00000188335f2721:9f51e566140770885889196249bcf0b1:00000003) Value: Value:49b25ba119ba$$$9010$$$49b25ba119ba_9010_1684489044008$$$1
2023-05-19 09:37:34.033 UTC INFO [49b25ba119ba_9010_1684489044008] Master changed to 49b25ba119ba_9010_1684489044008
2023-05-19 09:37:34.034 UTC INFO [49b25ba119ba_9010_1684489044008] Election finished. Elapsed time: 57ms
2023-05-19 09:37:34.035 UTC INFO [49b25ba119ba_9010_1684489044008] node:49b25ba119ba_9010_1684489044008(1) state change from UNKNOWN to MASTER
2023-05-19 09:37:34.035 UTC INFO [49b25ba119ba_9010_1684489044008] Election thread exited. Group master: 49b25ba119ba_9010_1684489044008(1)
2023-05-19 09:37:34.050 UTC INFO [49b25ba119ba_9010_1684489044008] Feeder manager accepting requests.
This time it's the backend that didn't start correctly. Can you provide /data/starrocks/be/log/be.out
and /data/starrocks/be/log/be.INFO?
be.out
start time: Fri May 19 09:37:45 UTC 2023
and there is no be.INFO
Ah, the right path is /data/deploy/starrocks/be/log/be.out
and /data/deploy/starrocks/be/log/be.INFO.
Can you check again?
By the way, how many resources are allocated to the container?
Yes, I checked /data/deploy/starrocks/be/log
and there is no be.INFO.
Not too much. I use a PC with 8 GB of RAM and 4 cores, so maybe that's the problem.
be.out contains only one line:
start time: Fri May 19 10:16:22 UTC 2023
So maybe I need to increase some timeout?
Interesting. It looks like the BE does not come up in time, so the start script stops waiting and exits.
@ilevd
So, to confirm whether this is caused by the BE process starting too slowly, could you please follow the steps below for validation?
sudo docker run -p 9030:9030 -p 8030:8030 -p 8040:8040 -it registry.starrocks.io/starrocks/allin1-ubuntu bash
following the below diff content, edit /data/deploy/starrocks/start_fe_be.sh
--- start_fe_be.sh.origin 2023-05-19 12:42:30.928263692 +0000
+++ start_fe_be.sh 2023-05-19 12:42:09.022432530 +0000
@@ -17,7 +17,7 @@
# Sleep until the cluster starts.
-sleep 15;
+sleep 60;
# Fetch fqdn with the command suggested by AWS official doc: https://docs.aws.amazon.com/managedservices/latest/userguide/find-FQDN.html
MYFQDN=`hostname --fqdn`
You can change the sleep time to whatever value is appropriate, so that in the next step the script actually waits long enough for the BE to come alive.
./start_fe_be.sh
If this is the case, we will improve the script to allow sleeping longer than the current fixed 15 seconds.
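For illustration, such an improvement could replace the fixed sleep with a bounded poll. A minimal sketch, not the actual script's logic (the mysql probe shown in the trailing comment is an assumption):

```shell
#!/bin/bash
# Sketch: a bounded poll instead of a fixed `sleep 15`.
# wait_for retries the given probe command every 5 seconds until it
# succeeds or timeout_s seconds have elapsed.
wait_for() {
    local timeout_s=$1; shift
    local waited=0
    until "$@"; do
        sleep 5
        waited=$((waited + 5))
        if [ "$waited" -ge "$timeout_s" ]; then
            return 1   # gave up waiting
        fi
    done
    return 0
}

# In start_fe_be.sh this could replace the fixed sleep, e.g. waiting up
# to 120 s for the FE MySQL port to answer (probe command is an assumption):
# wait_for 120 mysql -h127.0.0.1 -P9030 -uroot -e 'SELECT 1'
```

With a loop like this, a slow BE on a small machine just extends the wait instead of tripping a hard-coded deadline.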
Just tried to manually start StarRocks with https://docs.starrocks.io/en-us/latest/deployment/deploy_manually#step-5-set-up-the-cluster and I get:
sudo ./be/bin/start_be.sh
./be/bin/start_be.sh: line 24: 2138186 Illegal instruction (core dumped) bash ${STARROCKS_HOME}/bin/start_backend.sh --be $@
and again I get only one line in the be.out log:
start time: Fri May 19 15:47:26 MSK 2023
So it seems my PC doesn't support something.
Usually Illegal instruction
means the CPU lacks support for a required instruction.
Can you provide the CPU info via cat /proc/cpuinfo
and also uname -a
to check the kernel version?
Increasing sleep time didn't help.
$ cat /proc/cpuinfo
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 23
model name : Intel(R) Core(TM)2 Quad CPU Q8200 @ 2.33GHz
stepping : 10
microcode : 0xa0b
cpu MHz : 2238.481
cache size : 2048 KB
physical id : 0
siblings : 4
core id : 0
cpu cores : 4
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good nopl cpuid aperfmperf pni dtes64 monitor ds_cpl est tm2 ssse3 cx16 xtpr pdcm sse4_1 xsave lahf_lm pti dtherm
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit mmio_unknown
bogomips : 4662.21
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
processor : 1
vendor_id : GenuineIntel
cpu family : 6
model : 23
model name : Intel(R) Core(TM)2 Quad CPU Q8200 @ 2.33GHz
stepping : 10
microcode : 0xa0b
cpu MHz : 2229.613
cache size : 2048 KB
physical id : 0
siblings : 4
core id : 1
cpu cores : 4
apicid : 1
initial apicid : 1
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good nopl cpuid aperfmperf pni dtes64 monitor ds_cpl est tm2 ssse3 cx16 xtpr pdcm sse4_1 xsave lahf_lm pti dtherm
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit mmio_unknown
bogomips : 4662.21
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
processor : 2
vendor_id : GenuineIntel
cpu family : 6
model : 23
model name : Intel(R) Core(TM)2 Quad CPU Q8200 @ 2.33GHz
stepping : 10
microcode : 0xa0b
cpu MHz : 2283.370
cache size : 2048 KB
physical id : 0
siblings : 4
core id : 2
cpu cores : 4
apicid : 2
initial apicid : 2
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good nopl cpuid aperfmperf pni dtes64 monitor ds_cpl est tm2 ssse3 cx16 xtpr pdcm sse4_1 xsave lahf_lm pti dtherm
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit mmio_unknown
bogomips : 4662.21
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
processor : 3
vendor_id : GenuineIntel
cpu family : 6
model : 23
model name : Intel(R) Core(TM)2 Quad CPU Q8200 @ 2.33GHz
stepping : 10
microcode : 0xa0b
cpu MHz : 2245.660
cache size : 2048 KB
physical id : 0
siblings : 4
core id : 3
cpu cores : 4
apicid : 3
initial apicid : 3
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good nopl cpuid aperfmperf pni dtes64 monitor ds_cpl est tm2 ssse3 cx16 xtpr pdcm sse4_1 xsave lahf_lm pti dtherm
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs itlb_multihit mmio_unknown
bogomips : 4662.21
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:
$ uname -a
Linux igor-PC 5.4.0-146-generic #163-Ubuntu SMP Fri Mar 17 18:26:02 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
The official image is built with the avx2
instruction set required, and it seems your CPU doesn't support it. You would need to rebuild from source code with AVX2 disabled.
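As a quick way to confirm this on any Linux host, you can check whether the avx2 flag appears in /proc/cpuinfo (a generic Linux check, not StarRocks-specific):

```shell
#!/bin/bash
# Report whether the host CPU advertises AVX2 support (Linux only).
# The flags line of /proc/cpuinfo lists every feature the CPU exposes;
# -w matches avx2 only as a complete flag token.
if grep -qw avx2 /proc/cpuinfo; then
    echo "AVX2 supported"
else
    echo "AVX2 not supported"
fi
```

In the cpuinfo output above, the flags line stops at sse4_1 with no avx or avx2 entry, which matches the Core 2 Quad Q8200's feature set.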
I built a private image lvlouisaslia/allin1-ubuntu:3.0.0-no-avx2
with AVX2 disabled; you can give it a try and see if it works for you.
@ilevd
Thank you, now it's much better, but after trying several times, the connection eventually got lost:
sudo docker run -p 9030:9030 -p 8030:8030 -p 8040:8040 -it lvlouisaslia/allin1-ubuntu:3.0.0-no-avx2
[Sat May 20 08:03:55 UTC 2023] Start FE
[Sat May 20 08:03:56 UTC 2023] Start BE
[Sat May 20 08:04:17 UTC 2023] Register BE 690bdb80db59 to FE
[Sat May 20 08:04:37 UTC 2023] Alive: true
[Sat May 20 08:04:47 UTC 2023] Alive: true
[Sat May 20 08:04:57 UTC 2023] Alive: true
[Sat May 20 08:05:07 UTC 2023] Alive: true
[Sat May 20 08:05:17 UTC 2023] Alive: true
[Sat May 20 08:05:27 UTC 2023] service has exited
StarRocks > SHOW PROC '/frontends';
+---------------------------------+--------------+-------------+----------+-----------+---------+--------+-----------+------+-------+-------------------+---------------------+----------+--------+---------------------+------------------+
| Name | IP | EditLogPort | HttpPort | QueryPort | RpcPort | Role | ClusterId | Join | Alive | ReplayedJournalId | LastHeartbeat | IsHelper | ErrMsg | StartTime | Version |
+---------------------------------+--------------+-------------+----------+-----------+---------+--------+-----------+------+-------+-------------------+---------------------+----------+--------+---------------------+------------------+
| 690bdb80db59_9010_1684569844850 | 690bdb80db59 | 9010 | 8030 | 9030 | 9020 | LEADER | 336538723 | true | true | 24 | 2023-05-20 08:04:56 | true | | 2023-05-20 08:04:12 | 3.0.0-48f4d81d7b |
+---------------------------------+--------------+-------------+----------+-----------+---------+--------+-----------+------+-------+-------------------+---------------------+----------+--------+---------------------+------------------+
1 row in set (0,07 sec)
StarRocks > SHOW PROC '/backends';
ERROR 2013 (HY000): Lost connection to MySQL server during query
No connection. Trying to reconnect...
ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1:9030' (111)
ERROR:
Can't connect to the server
be.out
start time: Sat May 20 08:04:02 UTC 2023
3.0.0 RELEASE (build 48f4d81d7b)
query_id:00000000-0000-0000-0000-000000000000, fragment_instance:00000000-0000-0000-0000-000000000000
tracker:process consumption: 5693944
tracker:query_pool consumption: 0
tracker:load consumption: 0
tracker:metadata consumption: 0
tracker:tablet_metadata consumption: 0
tracker:rowset_metadata consumption: 0
tracker:segment_metadata consumption: 0
tracker:column_metadata consumption: 0
tracker:tablet_schema consumption: 0
tracker:segment_zonemap consumption: 0
tracker:short_key_index consumption: 0
tracker:column_zonemap_index consumption: 0
tracker:ordinal_index consumption: 0
tracker:bitmap_index consumption: 0
tracker:bloom_filter_index consumption: 0
tracker:compaction consumption: 0
tracker:schema_change consumption: 0
tracker:column_pool consumption: 0
tracker:page_cache consumption: 0
tracker:update consumption: 0
tracker:chunk_allocator consumption: 0
tracker:clone consumption: 0
tracker:consistency consumption: 0
*** Aborted at 1684569901 (unix time) try "date -d @1684569901" if you are using GNU date ***
PC: @ 0x301e574 bvar::detail::WindowBase<>::SeriesSampler::take_sample()
*** SIGILL (@0x301e574) received by PID 1436 (TID 0x7f06d2bff640) from PID 50455924; stack trace: ***
@ 0x62f2fea (unknown)
@ 0x7f06d461a520 (unknown)
@ 0x301e574 bvar::detail::WindowBase<>::SeriesSampler::take_sample()
@ 0x643ec49 bvar::detail::SamplerCollector::run()
@ 0x644097d bvar::detail::SamplerCollector::sampling_thread()
@ 0x7f06d466cb43 (unknown)
@ 0x7f06d46fdbb4 clone
@ 0x0 (unknown)
be.warning.log.*
W0520 08:04:21.345201 2023 heartbeat_server.cpp:161] 690bdb80db59 not equal to to backend localhost 172.17.0.2
W0520 08:04:47.269220 1648 exec_state_reporter.cpp:142] Retrying ReportExecStatus: No more data to read.
W0520 08:04:57.299612 1648 exec_state_reporter.cpp:142] Retrying ReportExecStatus: No more data to read.
be.info.log.*
I0520 08:04:02.611377 1436 daemon.cpp:290] version 3.0.0 RELEASE (build 48f4d81d7b)
Built on 2023-05-19 13:36:25 by caixiaohua@sandbox-cloudnative
I0520 08:04:02.612676 1436 mem_info.cpp:151] Init mem info by container's cgroup config, physical_mem=9223372036854771712
I0520 08:04:02.612746 1436 mem_info.cpp:104] Physical Memory: 7.77 GB
I0520 08:04:02.612763 1436 daemon.cpp:296] Cpu Info:
Model: Intel(R) Core(TM)2 Quad CPU Q8200 @ 2.33GHz
Cores: 4
Max Possible Cores: 4
L1 Cache: 32.00 KB (Line: 64.00 B)
L2 Cache: 2.00 MB (Line: 64.00 B)
L3 Cache: 0 (Line: 0)
Hardware Supports:
ssse3
sse4_1
Numa Nodes: 1
Numa Nodes of Cores: 0->0 | 1->0 | 2->0 | 3->0 |
I0520 08:04:02.612802 1436 daemon.cpp:297] Disk Info:
Num disks 4: loop, sda, sdb, sdc
I0520 08:04:02.612809 1436 daemon.cpp:298] Mem Info: 7.77 GB
I0520 08:04:02.918056 1436 daemon.cpp:273] Minidump is disabled
I0520 08:04:02.918462 1436 backend_options.cpp:77] localhost 172.17.0.2
I0520 08:04:02.918576 1436 exec_env.cpp:488] Set storage page cache size 1351617601
I0520 08:04:02.923027 1446 daemon.cpp:201] Current memory statistics: process(0), query_pool(0), load(0), metadata(0), compaction(0), schema_change(0), column_pool(0), page_cache(0), update(0), chunk_allocator(0), clone(0), consistency(0)
I0520 08:04:02.926563 1448 data_dir.cpp:126] path: /data/deploy/starrocks/be/storage, hash: 5987063680716520938
I0520 08:04:02.981719 1486 data_dir.cpp:250] start to load tablets from /data/deploy/starrocks/be/storage
I0520 08:04:02.981748 1486 data_dir.cpp:256] begin loading rowset from meta
I0520 08:04:02.982455 1486 data_dir.cpp:274] load rowset from meta finished, data dir: /data/deploy/starrocks/be/storage
I0520 08:04:02.982463 1486 data_dir.cpp:279] begin loading tablet from meta
I0520 08:04:02.982475 1486 data_dir.cpp:330] load tablet from meta finished, loaded tablet: 0, error tablet: 0, path: /data/deploy/starrocks/be/storage
I0520 08:04:02.999365 1436 exec_env.cpp:222] [PIPELINE] Exec thread pool: thread_num=4
I0520 08:04:03.000453 1571 fragment_mgr.cpp:529] FragmentMgr cancel worker start working.
I0520 08:04:03.132320 1740 result_buffer_mgr.cpp:145] result buffer manager cancel thread begin.
I0520 08:04:03.132479 1739 profile_report_worker.cpp:111] ProfileReportWorker start working.
I0520 08:04:03.132700 1737 runtime_filter_worker.cpp:776] RuntimeFilterWorker start working.
I0520 08:04:03.133044 1436 load_path_mgr.cpp:68] Load path configured to [/data/deploy/starrocks/be/storage/mini_download]
I0520 08:04:03.143473 1833 olap_server.cpp:748] begin to do tablet meta checkpoint:/data/deploy/starrocks/be/storage
I0520 08:04:03.143688 1836 olap_server.cpp:698] try to perform path gc by tablet!
I0520 08:04:03.143784 1436 olap_server.cpp:225] All backgroud threads of storage engine have started.
I0520 08:04:03.144086 1830 storage_engine.cpp:649] start to check compaction
I0520 08:04:03.144160 1829 compaction_manager.cpp:69] start compaction scheduler
I0520 08:04:03.144194 1436 thrift_server.cpp:388] heartbeat has started listening port on 9050
I0520 08:04:03.147325 1436 starlet_server.cc:58] Starlet grpc server started on 0.0.0.0:9070
I0520 08:04:03.147456 1436 backend_base.cpp:79] StarRocksInternalService has started listening port on 9060
I0520 08:04:03.147498 1847 starlet.cc:83] Empty starmanager address, skip reporting!
I0520 08:04:03.147863 1436 thrift_server.cpp:388] BackendService has started listening port on 9060
I0520 08:04:03.158800 1436 server.cpp:1070] Server[starrocks::BackendInternalServiceImpl<starrocks::PInternalService>+starrocks::LakeServiceImpl+starrocks::BackendInternalServiceImpl<doris::PBackendService>] is serving on port=8060.
I0520 08:04:03.158835 1436 server.cpp:1073] Check out http://690bdb80db59:8060 in web browser.
I0520 08:04:13.157253 1847 starlet.cc:83] Empty starmanager address, skip reporting!
I0520 08:04:17.926084 1446 daemon.cpp:201] Current memory statistics: process(4758552), query_pool(0), load(0), metadata(0), compaction(0), schema_change(0), column_pool(0), page_cache(0), update(0), chunk_allocator(0), clone(0), consistency(0)
I0520 08:04:21.345088 2023 heartbeat_server.cpp:76] get heartbeat from FE.host:690bdb80db59, port:9020, cluster id:336538723, counter:1
W0520 08:04:21.345201 2023 heartbeat_server.cpp:161] 690bdb80db59 not equal to to backend localhost 172.17.0.2
I0520 08:04:21.345881 2023 heartbeat_server.cpp:201] update localhost done, the new localhost is 690bdb80db59
I0520 08:04:21.345893 2023 heartbeat_server.cpp:207] Received first heartbeat. updating cluster id
I0520 08:04:21.346112 2023 heartbeat_server.cpp:98] Updating master info: TMasterInfo(network_address=TNetworkAddress(hostname=690bdb80db59, port=9020), cluster_id=336538723, epoch=1, token=0cc20c4a-4f56-4c3a-9dd5-848ab02d01c9, backend_ip=690bdb80db59, http_port=8030, heartbeat_flags=0, backend_id=10006, min_active_txn_id=2)
I0520 08:04:21.346143 2023 heartbeat_server.cpp:103] Master FE is changed or restarted. report tablet and disk info immediately
I0520 08:04:22.144431 1818 tablet_manager.cpp:834] Report all 0 tablets info
I0520 08:04:22.183506 1818 tablet_manager.cpp:834] Report all 0 tablets info
I0520 08:04:23.168243 1847 starlet.cc:83] Empty starmanager address, skip reporting!
I0520 08:04:32.929318 1446 daemon.cpp:201] Current memory statistics: process(4758552), query_pool(0), load(0), metadata(0), compaction(0), schema_change(0), column_pool(0), page_cache(0), update(0), chunk_allocator(0), clone(0), consistency(0)
I0520 08:04:33.177757 1847 starlet.cc:83] Empty starmanager address, skip reporting!
I0520 08:04:34.146157 1830 storage_engine.cpp:652] 0 tablets checked. time elapse:31 seconds. compaction checker will be scheduled again in 1800 seconds
I0520 08:04:36.768929 1639 fragment_executor.cpp:165] Prepare(): query_id=f1d5a43a-f6e4-11ed-8cd2-0242ac110002 fragment_instance_id=f1d5a43a-f6e4-11ed-8cd2-0242ac110003 is_stream_pipeline=0 backend_num=0
I0520 08:04:36.913977 1648 pipeline_driver_executor.cpp:268] [Driver] Succeed to report exec state: fragment_instance_id=f1d5a43a-f6e4-11ed-8cd2-0242ac110003
I0520 08:04:43.187829 1847 starlet.cc:83] Empty starmanager address, skip reporting!
I0520 08:04:47.254209 1638 fragment_executor.cpp:165] Prepare(): query_id=fcc9e6dc-f6e4-11ed-8cd2-0242ac110002 fragment_instance_id=fcc9e6dc-f6e4-11ed-8cd2-0242ac110003 is_stream_pipeline=0 backend_num=0
W0520 08:04:47.269220 1648 exec_state_reporter.cpp:142] Retrying ReportExecStatus: No more data to read.
I0520 08:04:47.272375 1648 pipeline_driver_executor.cpp:261] [Driver] Fail to report exec state due to query not found: fragment_instance_id=fcc9e6dc-f6e4-11ed-8cd2-0242ac110003
I0520 08:04:47.531219 1636 fragment_executor.cpp:165] Prepare(): query_id=fcf4523e-f6e4-11ed-8cd2-0242ac110002 fragment_instance_id=fcf4523e-f6e4-11ed-8cd2-0242ac110003 is_stream_pipeline=0 backend_num=0
I0520 08:04:47.533965 1648 pipeline_driver_executor.cpp:268] [Driver] Succeed to report exec state: fragment_instance_id=fcf4523e-f6e4-11ed-8cd2-0242ac110003
I0520 08:04:47.932324 1446 daemon.cpp:201] Current memory statistics: process(5626064), query_pool(0), load(0), metadata(0), compaction(0), schema_change(0), column_pool(0), page_cache(0), update(0), chunk_allocator(0), clone(0), consistency(0)
I0520 08:04:53.194363 1847 starlet.cc:83] Empty starmanager address, skip reporting!
I0520 08:04:57.298460 1637 fragment_executor.cpp:165] Prepare(): query_id=02c6a5af-f6e5-11ed-8cd2-0242ac110002 fragment_instance_id=02c6a5af-f6e5-11ed-8cd2-0242ac110003 is_stream_pipeline=0 backend_num=0
W0520 08:04:57.299612 1648 exec_state_reporter.cpp:142] Retrying ReportExecStatus: No more data to read.
I0520 08:04:57.302456 1648 pipeline_driver_executor.cpp:268] [Driver] Succeed to report exec state: fragment_instance_id=02c6a5af-f6e5-11ed-8cd2-0242ac110003
fe.log
2023-05-20 08:04:03,766 INFO (main|1) [StarRocksFE.start():124] StarRocks FE starting, version: 3.0.0-48f4d81d7b
2023-05-20 08:04:03,781 INFO (main|1) [FrontendOptions.initAddrUseFqdn():212] Use FQDN init local addr, FQDN: 690bdb80db59, IP: 172.17.0.2
2023-05-20 08:04:04,330 INFO (main|1) [Auth.grantRoleInternal():831] grant operator to 'root'@'%', isReplay = true
2023-05-20 08:04:04,380 INFO (main|1) [AuthorizationManager.initBuiltinRoleUnlocked():286] create built-in role root[-1]
2023-05-20 08:04:04,390 INFO (main|1) [AuthorizationManager.initBuiltinRoleUnlocked():286] create built-in role db_admin[-2]
2023-05-20 08:04:04,392 INFO (main|1) [AuthorizationManager.initBuiltinRoleUnlocked():286] create built-in role cluster_admin[-3]
2023-05-20 08:04:04,392 INFO (main|1) [AuthorizationManager.initBuiltinRoleUnlocked():286] create built-in role user_admin[-4]
2023-05-20 08:04:04,393 INFO (main|1) [AuthorizationManager.initBuiltinRoleUnlocked():286] create built-in role public[-5]
2023-05-20 08:04:04,393 INFO (main|1) [GlobalStateMgr.initAuth():1015] using new privilege framework..
2023-05-20 08:04:04,821 INFO (main|1) [NodeMgr.getHelperNodes():645] get helper nodes: [690bdb80db59:9010]
2023-05-20 08:04:04,870 INFO (main|1) [NodeMgr.getClusterIdAndRoleOnStartup():241] very first time to start this node. role: FOLLOWER, node name: 690bdb80db59_9010_1684569844850
2023-05-20 08:04:04,891 INFO (main|1) [NodeMgr.getClusterIdAndRoleOnStartup():438] Current run_mode is shared_nothing
2023-05-20 08:04:04,892 INFO (main|1) [NodeMgr.getClusterIdAndRoleOnStartup():445] Got cluster id: 336538723, role: FOLLOWER, node name: 690bdb80db59_9010_1684569844850 and run_mode: shared_nothing
2023-05-20 08:04:04,894 INFO (main|1) [BDBEnvironment.ensureHelperInLocal():340] skip check local environment because helper node and local node are identical.
2023-05-20 08:04:04,974 INFO (main|1) [BDBEnvironment.setupEnvironment():269] start to setup bdb environment for 1 times
2023-05-20 08:04:05,649 WARN (MASTER 690bdb80db59_9010_1684569844850(1)|1) [BDBEnvironment.setupEnvironment():300] failed to setup environment because of UnknowMasterException for the first time, ignore it.
2023-05-20 08:04:10,650 INFO (MASTER 690bdb80db59_9010_1684569844850(1)|1) [BDBEnvironment.setupEnvironment():269] start to setup bdb environment for 2 times
2023-05-20 08:04:10,850 WARN (UNKNOWN 690bdb80db59_9010_1684569844850(-1)|1) [StateChangeExecutor.notifyNewFETypeTransfer():62] notify new FE type transfer: UNKNOWN
2023-05-20 08:04:10,892 INFO (UNKNOWN 690bdb80db59_9010_1684569844850(-1)|1) [BDBEnvironment.setupEnvironment():280] replicated environment is all set, wait for state change...
2023-05-20 08:04:10,892 WARN (RepNode 690bdb80db59_9010_1684569844850(-1)|76) [StateChangeExecutor.notifyNewFETypeTransfer():62] notify new FE type transfer: LEADER
2023-05-20 08:04:10,894 INFO (UNKNOWN 690bdb80db59_9010_1684569844850(-1)|1) [BDBEnvironment.setupEnvironment():288] state change done, current role LEADER
2023-05-20 08:04:10,905 INFO (UNKNOWN 690bdb80db59_9010_1684569844850(-1)|1) [BDBEnvironment.setupEnvironment():292] end setup bdb environment after 2 times
2023-05-20 08:04:10,916 INFO (UNKNOWN 690bdb80db59_9010_1684569844850(-1)|1) [GlobalStateMgr.loadImage():1333] image does not exist: /data/deploy/starrocks/fe/meta/image/image.0
2023-05-20 08:04:10,917 INFO (stateChangeExecutor|86) [StateChangeExecutor.runOneCycle():85] begin to transfer FE type from INIT to UNKNOWN
2023-05-20 08:04:10,918 INFO (stateChangeExecutor|86) [StateChangeExecutor.runOneCycle():179] finished to transfer FE type to INIT
2023-05-20 08:04:10,918 INFO (stateChangeExecutor|86) [StateChangeExecutor.runOneCycle():85] begin to transfer FE type from INIT to LEADER
2023-05-20 08:04:10,920 INFO (stateChangeExecutor|86) [BDBJEJournal.open():207] the very first time to open bdb, dbname is 1
2023-05-20 08:04:10,926 INFO (stateChangeExecutor|86) [BDBHA.fencing():94] start fencing, epoch number is 1
2023-05-20 08:04:10,937 INFO (stateChangeExecutor|86) [GlobalStateMgr.replayJournal():1942] skip replay journal because 0 <= 0
2023-05-20 08:04:11,034 INFO (stateChangeExecutor|86) [GlobalStateMgr.reInitializeNewPrivilegeOnUpgrade():3679] reinitialize privilege info before upgrade
2023-05-20 08:04:11,034 WARN (stateChangeExecutor|86) [AuthenticationProviderFactory.installPlugin():32] Plugin MYSQL_NATIVE_PASSWORD has already been installed!
2023-05-20 08:04:11,035 WARN (stateChangeExecutor|86) [AuthenticationProviderFactory.installPlugin():32] Plugin AUTHENTICATION_LDAP_SIMPLE has already been installed!
2023-05-20 08:04:11,035 WARN (stateChangeExecutor|86) [AuthenticationProviderFactory.installPlugin():32] Plugin AUTHENTICATION_KERBEROS has already been installed!
2023-05-20 08:04:11,035 INFO (stateChangeExecutor|86) [AuthorizationManager.initBuiltinRoleUnlocked():286] create built-in role root[-1]
2023-05-20 08:04:11,036 INFO (stateChangeExecutor|86) [AuthorizationManager.initBuiltinRoleUnlocked():286] create built-in role db_admin[-2]
2023-05-20 08:04:11,036 INFO (stateChangeExecutor|86) [AuthorizationManager.initBuiltinRoleUnlocked():286] create built-in role cluster_admin[-3]
2023-05-20 08:04:11,037 INFO (stateChangeExecutor|86) [AuthorizationManager.initBuiltinRoleUnlocked():286] create built-in role user_admin[-4]
2023-05-20 08:04:11,037 INFO (stateChangeExecutor|86) [AuthorizationManager.initBuiltinRoleUnlocked():286] create built-in role public[-5]
2023-05-20 08:04:11,041 INFO (stateChangeExecutor|86) [AuthUpgrader.upgradeAsLeader():79] start to upgrade as leader.
2023-05-20 08:04:11,042 INFO (stateChangeExecutor|86) [AuthUpgrader.upgradeUser():177] ignore root entry : global priv. host: %, user: root, priv: Node_priv Admin_priv , set by resolver: false, domain user ident: null
2023-05-20 08:04:11,119 INFO (stateChangeExecutor|86) [AuthUpgrader.upgradeAsLeader():83] upgraded as leader successfully.
2023-05-20 08:04:11,120 INFO (stateChangeExecutor|86) [GlobalStateMgr.transferToLeader():1161] set usingNewPrivilege to true after transfer to leader
2023-05-20 08:04:11,122 INFO (stateChangeExecutor|86) [GlobalStateMgr.startLeaderOnlyDaemonThreads():1220] checkpointer thread started. thread id is 91
2023-05-20 08:04:11,212 INFO (stateChangeExecutor|86) [GlobalStateMgr.transferToLeader():1178] leader finished to replay journal, can write now.
2023-05-20 08:04:11,224 INFO (leaderCheckpointer|91) [BDBJEJournal.getFinalizedJournalId():275] database names: 1
2023-05-20 08:04:11,224 INFO (leaderCheckpointer|91) [Checkpoint.runAfterCatalogReady():95] checkpoint imageVersion 0, checkPointVersion 0
2023-05-20 08:04:11,233 INFO (stateChangeExecutor|86) [StateChangeExecutor.runOneCycle():179] finished to transfer FE type to INIT
2023-05-20 08:04:11,258 INFO (tablet checker|34) [TabletChecker.doCheck():409] finished to check tablets. checkInPrios: true, unhealthy/total/added/in_sched/not_ready: 0/0/0/0/0, cost: 0 ms, in lock time: 0 ms
2023-05-20 08:04:11,258 INFO (tablet checker|34) [TabletChecker.doCheck():409] finished to check tablets. checkInPrios: false, unhealthy/total/added/in_sched/not_ready: 0/0/0/0/0, cost: 0 ms, in lock time: 0 ms
2023-05-20 08:04:11,265 INFO (tablet scheduler|33) [TabletScheduler.updateClusterLoadStatistic():396] update cluster load statistic:
2023-05-20 08:04:11,269 INFO (tablet checker|34) [TabletChecker.runAfterCatalogReady():200] TStat :
TStat num of tablet check round: 1 (+1)
TStat cost of tablet check(ms): 0 (+0)
TStat num of tablet checked in tablet checker: 0 (+0)
TStat num of unhealthy tablet checked in tablet checker: 0 (+0)
TStat num of tablet being added to tablet scheduler: 0 (+0)
TStat num of tablet schedule round: 0 (+0)
TStat cost of tablet schedule(ms): 0 (+0)
TStat num of tablet being scheduled: 0 (+0)
TStat num of tablet being scheduled succeeded: 0 (+0)
TStat num of tablet being scheduled failed: 0 (+0)
TStat num of tablet being scheduled discard: 0 (+0)
TStat num of tablet priority upgraded: 0 (+0)
TStat num of clone task: 0 (+0)
TStat num of clone task succeeded: 0 (+0)
TStat num of clone task failed: 0 (+0)
TStat num of clone task timeout: 0 (+0)
TStat num of replica missing error: 0 (+0)
TStat num of replica version missing error: 0 (+0)
TStat num of replica unavailable error: 0 (+0)
TStat num of replica redundant error: 0 (+0)
TStat num of replica missing in cluster error: 0 (+0)
TStat num of balance scheduled: 0 (+0)
TStat num of colocate replica mismatch: 0 (+0)
TStat num of colocate replica redundant: 0 (+0)
TStat num of colocate balancer running round: 0 (+0)
2023-05-20 08:04:11,289 WARN (Routine load task scheduler|40) [RoutineLoadTaskScheduler.process():117] no available be slot to scheduler tasks, wait for 10 seconds to scheduler again, you can set max_routine_load_task_num_per_be bigger in fe.conf, current value is 5
2023-05-20 08:04:11,289 INFO (tablet stat mgr|26) [TabletStatMgr.updateLocalTabletStat():158] finished to get local tablet stat of all backends. cost: 0 ms
2023-05-20 08:04:11,290 INFO (colocate group clone checker|99) [ColocateTableBalancer.matchGroup():856] finished to match colocate group. cost: 0 ms, in lock time: 0 ms
2023-05-20 08:04:11,291 INFO (TaskCleaner|89) [TaskManager.dropTasks():350] drop tasks:[]
2023-05-20 08:04:11,291 INFO (TaskCleaner|89) [TaskManager.removeExpiredTaskRuns():760] remove run history:[]
2023-05-20 08:04:11,292 INFO (tablet stat mgr|26) [TabletStatMgr.runAfterCatalogReady():126] finished to update index row num of all databases. cost: 2 ms
2023-05-20 08:04:12,917 INFO (UNKNOWN 690bdb80db59_9010_1684569844850(-1)|1) [GlobalStateMgr.waitForReady():1066] wait globalStateMgr to be ready. FE type: LEADER. is ready: true
2023-05-20 08:04:12,917 INFO (UNKNOWN 690bdb80db59_9010_1684569844850(-1)|1) [GlobalStateMgr.waitForReady():1052] globalStateMgr is ready. FE type: LEADER
2023-05-20 08:04:13,306 INFO (UNKNOWN 690bdb80db59_9010_1684569844850(-1)|1) [FeServer.start():65] thrift server started with port 9020.
2023-05-20 08:04:13,330 INFO (UNKNOWN 690bdb80db59_9010_1684569844850(-1)|1) [NMysqlServer.start():90] Open mysql server success on 9030
2023-05-20 08:04:13,332 INFO (UNKNOWN 690bdb80db59_9010_1684569844850(-1)|1) [QeService.start():78] QE service start.
2023-05-20 08:04:13,545 INFO (FE Http Server|115) [HttpServer$HttpServerThread.run():262] HttpServer started with port 8030
2023-05-20 08:04:17,483 INFO (starrocks-mysql-nio I/O-3|110) [AcceptListener.handleEvent():70] Connection established. remote=/172.17.0.2:41032
2023-05-20 08:04:18,748 INFO (starrocks-mysql-nio-pool-0|119) [QeProcessorImpl.registerQuery():95] register query id = eb12d648-f6e4-11ed-8cd2-0242ac110002
2023-05-20 08:04:18,772 WARN (starrocks-mysql-nio-pool-0|119) [CoordinatorPreprocessor.computeFragmentHosts():523] DataPartition UNPARTITIONED. Backend node not found. Check if any backend node is down.
2023-05-20 08:04:18,791 INFO (starrocks-mysql-nio-pool-0|119) [QeProcessorImpl.unregisterQuery():105] deregister query id eb12d648-f6e4-11ed-8cd2-0242ac110002
2023-05-20 08:04:18,791 INFO (starrocks-mysql-nio-pool-0|119) [StmtExecutor.execute():553] execute Exception, sql: select @@version_comment limit 1, error: Backend node not found. Check if any backend node is down.
2023-05-20 08:04:18,792 WARN (starrocks-mysql-nio-pool-0|119) [Coordinator.cancel():1361] cancel execution of query, this is outside invoke
2023-05-20 08:04:18,796 INFO (starrocks-mysql-nio-pool-0|119) [ResourceGroupMetricMgr.createQeuryResourceGroupMetrics():95] Add query_resource_group metric, resource group name is default_wg
2023-05-20 08:04:18,796 INFO (starrocks-mysql-nio-pool-0|119) [ResourceGroupMetricMgr.createQeuryResourceGroupMetrics():95] Add query_resource_group_err metric, resource group name is default_wg
2023-05-20 08:04:18,846 INFO (starrocks-mysql-nio-pool-0|119) [SystemInfoService.addBackend():247] finished to add Backend [id=10006, host=690bdb80db59, heartbeatPort=9050, alive=false]
2023-05-20 08:04:19,295 INFO (tablet scheduler|33) [TabletScheduler.updateWorkingSlots():250] add new backend 10006 with slots num: 0
2023-05-20 08:04:21,290 WARN (Routine load task scheduler|40) [RoutineLoadTaskScheduler.process():117] no available be slot to scheduler tasks, wait for 10 seconds to scheduler again, you can set max_routine_load_task_num_per_be bigger in fe.conf, current value is 5
2023-05-20 08:04:21,365 INFO (heartbeat mgr|32) [ComputeNode.handleHbResponse():476] Backend [id=10006, host=690bdb80db59, heartbeatPort=9050, alive=false] is alive, last start time: 1684569861363
2023-05-20 08:04:22,183 INFO (ReportHandler|113) [ReportHandler.tabletReport():364] backend[10006] reports 0 tablet(s). report version: 16845698420000
2023-05-20 08:04:22,183 INFO (thrift-server-pool-2|125) [ReportHandler.putToQueue():298] update be 10006 report task, type: DISK_REPORT
2023-05-20 08:04:22,186 INFO (ReportHandler|113) [TabletInvertedIndex.tabletReport():301] finished to do tablet diff with backend[10006]. sync: 0. metaDel: 0. foundValid: 0. foundInvalid: 0. migration: 0. found invalid transactions 0. found republish transactions 0 cost: 2 ms
2023-05-20 08:04:22,188 INFO (ReportHandler|113) [Backend.updateDisks():245] add new disk info. backendId: 10006, rootPath: /data/deploy/starrocks/be/storage
2023-05-20 08:04:22,192 INFO (ReportHandler|113) [ReportHandler.tabletReport():364] backend[10006] reports 0 tablet(s). report version: 16845698420000
2023-05-20 08:04:22,193 INFO (ReportHandler|113) [TabletInvertedIndex.tabletReport():301] finished to do tablet diff with backend[10006]. sync: 0. metaDel: 0. foundValid: 0. foundInvalid: 0. migration: 0. found invalid transactions 0. found republish transactions 0 cost: 0 ms
2023-05-20 08:04:28,862 INFO (starrocks-mysql-nio I/O-4|111) [AcceptListener.handleEvent():70] Connection established. remote=/127.0.0.1:33236
2023-05-20 08:04:28,874 INFO (starrocks-mysql-nio-pool-0|119) [QeProcessorImpl.registerQuery():95] register query id = f1d5a43a-f6e4-11ed-8cd2-0242ac110002
2023-05-20 08:04:31,280 INFO (tablet checker|34) [TabletChecker.doCheck():409] finished to check tablets. checkInPrios: true, unhealthy/total/added/in_sched/not_ready: 0/0/0/0/0, cost: 0 ms, in lock time: 0 ms
2023-05-20 08:04:31,280 INFO (tablet checker|34) [TabletChecker.doCheck():409] finished to check tablets. checkInPrios: false, unhealthy/total/added/in_sched/not_ready: 0/0/0/0/0, cost: 0 ms, in lock time: 0 ms
2023-05-20 08:04:31,282 INFO (tablet checker|34) [TabletChecker.runAfterCatalogReady():200] TStat :
TStat num of tablet check round: 2 (+1)
TStat cost of tablet check(ms): 0 (+0)
TStat num of tablet checked in tablet checker: 0 (+0)
TStat num of unhealthy tablet checked in tablet checker: 0 (+0)
TStat num of tablet being added to tablet scheduler: 0 (+0)
TStat num of tablet schedule round: 20 (+20)
TStat cost of tablet schedule(ms): 5 (+5)
TStat num of tablet being scheduled: 0 (+0)
TStat num of tablet being scheduled succeeded: 0 (+0)
TStat num of tablet being scheduled failed: 0 (+0)
TStat num of tablet being scheduled discard: 0 (+0)
TStat num of tablet priority upgraded: 0 (+0)
TStat num of clone task: 0 (+0)
TStat num of clone task succeeded: 0 (+0)
TStat num of clone task failed: 0 (+0)
TStat num of clone task timeout: 0 (+0)
TStat num of replica missing error: 0 (+0)
TStat num of replica version missing error: 0 (+0)
TStat num of replica unavailable error: 0 (+0)
TStat num of replica redundant error: 0 (+0)
TStat num of replica missing in cluster error: 0 (+0)
TStat num of balance scheduled: 0 (+0)
TStat num of colocate replica mismatch: 0 (+0)
TStat num of colocate replica redundant: 0 (+0)
TStat num of colocate balancer running round: 0 (+0)
2023-05-20 08:04:31,291 INFO (colocate group clone checker|99) [ColocateTableBalancer.matchGroup():856] finished to match colocate group. cost: 0 ms, in lock time: 0 ms
2023-05-20 08:04:31,305 INFO (tablet scheduler|33) [ClusterLoadStatistic.classifyBackendByLoad():165] classify backend by load. medium: HDD, avg load score: 0.0, low/mid/high: 0/1/0
2023-05-20 08:04:37,170 INFO (starrocks-mysql-nio-pool-0|119) [QeProcessorImpl.unregisterQuery():105] deregister query id f1d5a43a-f6e4-11ed-8cd2-0242ac110002
2023-05-20 08:04:37,172 INFO (starrocks-mysql-nio-pool-0|119) [ResourceGroupMetricMgr$QueryResourceGroupLatencyMetrics.<init>():150] Add query_resource_group_latency metric, resource group name is default_wg
2023-05-20 08:04:37,172 INFO (starrocks-mysql-nio-pool-0|119) [ResourceGroupMetricMgr$QueryResourceGroupLatencyMetrics.<init>():150] Add query_resource_group_latency metric, resource group name is default_wg
2023-05-20 08:04:37,172 INFO (starrocks-mysql-nio-pool-0|119) [ResourceGroupMetricMgr$QueryResourceGroupLatencyMetrics.<init>():150] Add query_resource_group_latency metric, resource group name is default_wg
2023-05-20 08:04:37,172 INFO (starrocks-mysql-nio-pool-0|119) [ResourceGroupMetricMgr$QueryResourceGroupLatencyMetrics.<init>():150] Add query_resource_group_latency metric, resource group name is default_wg
2023-05-20 08:04:37,172 INFO (starrocks-mysql-nio-pool-0|119) [ResourceGroupMetricMgr$QueryResourceGroupLatencyMetrics.<init>():150] Add query_resource_group_latency metric, resource group name is default_wg
2023-05-20 08:04:37,172 INFO (starrocks-mysql-nio-pool-0|119) [ResourceGroupMetricMgr$QueryResourceGroupLatencyMetrics.<init>():150] Add query_resource_group_latency metric, resource group name is default_wg
2023-05-20 08:04:37,207 INFO (starrocks-mysql-nio-pool-0|119) [BackendsProcDir.getClusterBackendInfos():207] backends proc get tablet num cost: 0, total cost: 5
2023-05-20 08:04:47,239 INFO (starrocks-mysql-nio I/O-4|111) [AcceptListener.handleEvent():70] Connection established. remote=/127.0.0.1:35194
2023-05-20 08:04:47,250 INFO (starrocks-mysql-nio-pool-0|119) [QeProcessorImpl.registerQuery():95] register query id = fcc9e6dc-f6e4-11ed-8cd2-0242ac110002
2023-05-20 08:04:47,260 INFO (starrocks-mysql-nio-pool-0|119) [QeProcessorImpl.unregisterQuery():105] deregister query id fcc9e6dc-f6e4-11ed-8cd2-0242ac110002
2023-05-20 08:04:47,265 INFO (starrocks-mysql-nio-pool-0|119) [BackendsProcDir.getClusterBackendInfos():207] backends proc get tablet num cost: 0, total cost: 2
2023-05-20 08:04:47,271 INFO (thrift-server-pool-6|138) [QeProcessorImpl.reportExecStatus():145] ReportExecStatus() failed, query does not exist, fragment_instance_id=fcc9e6dc-f6e4-11ed-8cd2-0242ac110003, query_id=fcc9e6dc-f6e4-11ed-8cd2-0242ac110002,
2023-05-20 08:04:47,518 INFO (starrocks-mysql-nio I/O-4|111) [AcceptListener.handleEvent():70] Connection established. remote=/172.17.0.1:47246
2023-05-20 08:04:47,527 INFO (starrocks-mysql-nio-pool-0|119) [QeProcessorImpl.registerQuery():95] register query id = fcf4523e-f6e4-11ed-8cd2-0242ac110002
2023-05-20 08:04:47,537 INFO (starrocks-mysql-nio-pool-0|119) [QeProcessorImpl.unregisterQuery():105] deregister query id fcf4523e-f6e4-11ed-8cd2-0242ac110002
2023-05-20 08:04:51,283 INFO (tablet checker|34) [TabletChecker.doCheck():409] finished to check tablets. checkInPrios: true, unhealthy/total/added/in_sched/not_ready: 0/0/0/0/0, cost: 0 ms, in lock time: 0 ms
2023-05-20 08:04:51,283 INFO (tablet checker|34) [TabletChecker.doCheck():409] finished to check tablets. checkInPrios: false, unhealthy/total/added/in_sched/not_ready: 0/0/0/0/0, cost: 0 ms, in lock time: 0 ms
2023-05-20 08:04:51,285 INFO (tablet checker|34) [TabletChecker.runAfterCatalogReady():200] TStat :
TStat num of tablet check round: 3 (+1)
TStat cost of tablet check(ms): 0 (+0)
TStat num of tablet checked in tablet checker: 0 (+0)
TStat num of unhealthy tablet checked in tablet checker: 0 (+0)
TStat num of tablet being added to tablet scheduler: 0 (+0)
TStat num of tablet schedule round: 40 (+20)
TStat cost of tablet schedule(ms): 8 (+3)
TStat num of tablet being scheduled: 0 (+0)
TStat num of tablet being scheduled succeeded: 0 (+0)
TStat num of tablet being scheduled failed: 0 (+0)
TStat num of tablet being scheduled discard: 0 (+0)
TStat num of tablet priority upgraded: 0 (+0)
TStat num of clone task: 0 (+0)
TStat num of clone task succeeded: 0 (+0)
TStat num of clone task failed: 0 (+0)
TStat num of clone task timeout: 0 (+0)
TStat num of replica missing error: 0 (+0)
TStat num of replica version missing error: 0 (+0)
TStat num of replica unavailable error: 0 (+0)
TStat num of replica redundant error: 0 (+0)
TStat num of replica missing in cluster error: 0 (+0)
TStat num of balance scheduled: 0 (+0)
TStat num of colocate replica mismatch: 0 (+0)
TStat num of colocate replica redundant: 0 (+0)
TStat num of colocate balancer running round: 0 (+0)
2023-05-20 08:04:51,291 INFO (colocate group clone checker|99) [ColocateTableBalancer.matchGroup():856] finished to match colocate group. cost: 0 ms, in lock time: 0 ms
2023-05-20 08:04:51,337 INFO (tablet scheduler|33) [ClusterLoadStatistic.classifyBackendByLoad():165] classify backend by load. medium: HDD, avg load score: 0.0, low/mid/high: 0/1/0
2023-05-20 08:04:57,284 INFO (starrocks-mysql-nio I/O-3|110) [AcceptListener.handleEvent():70] Connection established. remote=/127.0.0.1:38198
2023-05-20 08:04:57,294 INFO (starrocks-mysql-nio-pool-0|119) [QeProcessorImpl.registerQuery():95] register query id = 02c6a5af-f6e5-11ed-8cd2-0242ac110002
2023-05-20 08:04:57,304 INFO (starrocks-mysql-nio-pool-0|119) [QeProcessorImpl.unregisterQuery():105] deregister query id 02c6a5af-f6e5-11ed-8cd2-0242ac110002
2023-05-20 08:04:57,308 INFO (starrocks-mysql-nio-pool-0|119) [BackendsProcDir.getClusterBackendInfos():207] backends proc get tablet num cost: 0, total cost: 2
2023-05-20 08:05:02,074 ERROR (thrift-server-pool-7|139) [SRTThreadPoolServer$WorkerProcess.run():319] Thrift Error occurred during processing of message.
org.apache.thrift.transport.TTransportException: java.net.SocketException: Connection reset
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127) ~[libthrift-0.13.0.jar:0.13.0]
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86) ~[libthrift-0.13.0.jar:0.13.0]
at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:455) ~[libthrift-0.13.0.jar:0.13.0]
at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:354) ~[libthrift-0.13.0.jar:0.13.0]
at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:243) ~[libthrift-0.13.0.jar:0.13.0]
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:27) ~[libthrift-0.13.0.jar:0.13.0]
at com.starrocks.common.SRTThreadPoolServer$WorkerProcess.run(SRTThreadPoolServer.java:311) [starrocks-fe.jar:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: java.net.SocketException: Connection reset
at java.net.SocketInputStream.read(SocketInputStream.java:186) ~[?:?]
at java.net.SocketInputStream.read(SocketInputStream.java:140) ~[?:?]
at java.io.BufferedInputStream.fill(BufferedInputStream.java:252) ~[?:?]
at java.io.BufferedInputStream.read1(BufferedInputStream.java:292) ~[?:?]
at java.io.BufferedInputStream.read(BufferedInputStream.java:351) ~[?:?]
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:125) ~[libthrift-0.13.0.jar:0.13.0]
... 9 more
2023-05-20 08:05:02,074-075 ERROR (thrift-server-pool-2|125, -0|123, -4|127, -3|126, -1|124) [SRTThreadPoolServer$WorkerProcess.run():319] Thrift Error occurred during processing of message.
    [identical "java.net.SocketException: Connection reset" stack trace as above, repeated 5 times — omitted for brevity]
2023-05-20 08:05:06,472 WARN (heartbeat-mgr-pool-4|131) [HeartbeatMgr$BackendHeartbeatHandler.call():337] backend heartbeat got exception, addr: 690bdb80db59:9050
org.apache.thrift.transport.TTransportException: Socket is closed by peer.
at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:130) ~[libthrift-0.13.0.jar:0.13.0]
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86) ~[libthrift-0.13.0.jar:0.13.0]
at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:455) ~[libthrift-0.13.0.jar:0.13.0]
at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:354) ~[libthrift-0.13.0.jar:0.13.0]
at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:243) ~[libthrift-0.13.0.jar:0.13.0]
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:77) ~[libthrift-0.13.0.jar:0.13.0]
at com.starrocks.thrift.HeartbeatService$Client.recv_heartbeat(HeartbeatService.java:61) ~[starrocks-fe.jar:?]
at com.starrocks.thrift.HeartbeatService$Client.heartbeat(HeartbeatService.java:48) ~[starrocks-fe.jar:?]
at com.starrocks.system.HeartbeatMgr$BackendHeartbeatHandler.call(HeartbeatMgr.java:297) [starrocks-fe.jar:?]
at com.starrocks.system.HeartbeatMgr$BackendHeartbeatHandler.call(HeartbeatMgr.java:275) [starrocks-fe.jar:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:829) [?:?]
2023-05-20 08:05:06,474 WARN (heartbeat mgr|32) [HeartbeatMgr.runAfterCatalogReady():177] get bad heartbeat response: type: BACKEND, status: BAD, msg: Socket is closed by peer.
2023-05-20 08:05:07,326 INFO (starrocks-mysql-nio I/O-3|110) [AcceptListener.handleEvent():70] Connection established. remote=/127.0.0.1:36096
2023-05-20 08:05:07,335 INFO (starrocks-mysql-nio-pool-0|119) [QeProcessorImpl.registerQuery():95] register query id = 08c2c842-f6e5-11ed-8cd2-0242ac110002
2023-05-20 08:05:07,346 WARN (starrocks-mysql-nio-pool-0|119) [BackendServiceClient.execBatchPlanFragmentsAsync():131] Execute batch plan fragments catch a exception, address=690bdb80db59:8060
java.lang.RuntimeException: Unable to validate object
at com.baidu.jprotobuf.pbrpc.transport.ChannelPool.getChannel(ChannelPool.java:86) ~[jprotobuf-rpc-core-4.2.1.jar:?]
at com.baidu.jprotobuf.pbrpc.transport.RpcChannel.getConnection(RpcChannel.java:73) ~[jprotobuf-rpc-core-4.2.1.jar:?]
at com.baidu.jprotobuf.pbrpc.client.ProtobufRpcProxy.invoke(ProtobufRpcProxy.java:499) ~[jprotobuf-rpc-core-4.2.1.jar:?]
at com.sun.proxy.$Proxy32.execBatchPlanFragmentsAsync(Unknown Source) ~[?:?]
at com.starrocks.rpc.BackendServiceClient.execBatchPlanFragmentsAsync(BackendServiceClient.java:117) ~[starrocks-fe.jar:?]
at com.starrocks.qe.Coordinator$BackendExecState.execRemoteBatchFragmentsAsync(Coordinator.java:1962) ~[starrocks-fe.jar:?]
at com.starrocks.qe.Coordinator.deliverExecBatchFragmentsRequests(Coordinator.java:995) ~[starrocks-fe.jar:?]
at com.starrocks.qe.Coordinator.deliverExecFragments(Coordinator.java:536) ~[starrocks-fe.jar:?]
at com.starrocks.qe.Coordinator.exec(Coordinator.java:464) ~[starrocks-fe.jar:?]
at com.starrocks.qe.StmtExecutor.handleQueryStmt(StmtExecutor.java:766) ~[starrocks-fe.jar:?]
at com.starrocks.qe.StmtExecutor.execute(StmtExecutor.java:458) ~[starrocks-fe.jar:?]
at com.starrocks.qe.ConnectProcessor.handleQuery(ConnectProcessor.java:348) ~[starrocks-fe.jar:?]
at com.starrocks.qe.ConnectProcessor.dispatch(ConnectProcessor.java:462) ~[starrocks-fe.jar:?]
at com.starrocks.qe.ConnectProcessor.processOnce(ConnectProcessor.java:728) ~[starrocks-fe.jar:?]
at com.starrocks.mysql.nio.ReadListener.lambda$handleEvent$0(ReadListener.java:69) ~[starrocks-fe.jar:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: java.util.NoSuchElementException: Unable to validate object
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:506) ~[commons-pool2-2.3.jar:2.3]
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:363) ~[commons-pool2-2.3.jar:2.3]
at com.baidu.jprotobuf.pbrpc.transport.ChannelPool.getChannel(ChannelPool.java:80) ~[jprotobuf-rpc-core-4.2.1.jar:?]
... 17 more
2023-05-20 08:05:07,352 WARN (starrocks-mysql-nio-pool-0|119) [Coordinator.deliverExecBatchFragmentsRequests():1027] exec plan fragment failed, errmsg=Unable to validate object, host: 690bdb80db59, code: THRIFT_RPC_ERROR, fragmentId=F00, backend=690bdb80db59:9060
2023-05-20 08:05:07,361 WARN (starrocks-mysql-nio-pool-0|119) [BackendServiceClient.cancelPlanFragmentAsync():174] Cancel plan fragment catch a exception, address=690bdb80db59:8060
java.lang.RuntimeException: Unable to validate object
at com.baidu.jprotobuf.pbrpc.transport.ChannelPool.getChannel(ChannelPool.java:86) ~[jprotobuf-rpc-core-4.2.1.jar:?]
at com.baidu.jprotobuf.pbrpc.transport.RpcChannel.getConnection(RpcChannel.java:73) ~[jprotobuf-rpc-core-4.2.1.jar:?]
at com.baidu.jprotobuf.pbrpc.client.ProtobufRpcProxy.invoke(ProtobufRpcProxy.java:499) ~[jprotobuf-rpc-core-4.2.1.jar:?]
at com.sun.proxy.$Proxy32.cancelPlanFragmentAsync(Unknown Source) ~[?:?]
at com.starrocks.rpc.BackendServiceClient.cancelPlanFragmentAsync(BackendServiceClient.java:157) ~[starrocks-fe.jar:?]
at com.starrocks.qe.Coordinator$BackendExecState.cancelFragmentInstance(Coordinator.java:1870) ~[starrocks-fe.jar:?]
at com.starrocks.qe.Coordinator.cancelRemoteFragmentsAsync(Coordinator.java:1392) ~[starrocks-fe.jar:?]
at com.starrocks.qe.Coordinator.cancelInternal(Coordinator.java:1379) ~[starrocks-fe.jar:?]
at com.starrocks.qe.Coordinator.handleErrorBackendExecState(Coordinator.java:545) ~[starrocks-fe.jar:?]
at com.starrocks.qe.Coordinator.deliverExecBatchFragmentsRequests(Coordinator.java:1046) ~[starrocks-fe.jar:?]
at com.starrocks.qe.Coordinator.deliverExecFragments(Coordinator.java:536) ~[starrocks-fe.jar:?]
at com.starrocks.qe.Coordinator.exec(Coordinator.java:464) ~[starrocks-fe.jar:?]
at com.starrocks.qe.StmtExecutor.handleQueryStmt(StmtExecutor.java:766) ~[starrocks-fe.jar:?]
at com.starrocks.qe.StmtExecutor.execute(StmtExecutor.java:458) ~[starrocks-fe.jar:?]
at com.starrocks.qe.ConnectProcessor.handleQuery(ConnectProcessor.java:348) ~[starrocks-fe.jar:?]
at com.starrocks.qe.ConnectProcessor.dispatch(ConnectProcessor.java:462) ~[starrocks-fe.jar:?]
at com.starrocks.qe.ConnectProcessor.processOnce(ConnectProcessor.java:728) ~[starrocks-fe.jar:?]
at com.starrocks.mysql.nio.ReadListener.lambda$handleEvent$0(ReadListener.java:69) ~[starrocks-fe.jar:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: java.util.NoSuchElementException: Unable to validate object
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:506) ~[commons-pool2-2.3.jar:2.3]
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:363) ~[commons-pool2-2.3.jar:2.3]
at com.baidu.jprotobuf.pbrpc.transport.ChannelPool.getChannel(ChannelPool.java:80) ~[jprotobuf-rpc-core-4.2.1.jar:?]
... 20 more
2023-05-20 08:05:07,362 WARN (starrocks-mysql-nio-pool-0|119) [Coordinator$BackendExecState.cancelFragmentInstance():1873] cancel plan fragment get a exception, address=690bdb80db59:8060
2023-05-20 08:05:07,367 INFO (starrocks-mysql-nio-pool-0|119) [Coordinator.cancelInternal():1384] unfinished instance: {}
2023-05-20 08:05:07,368 WARN (starrocks-mysql-nio-pool-0|119) [SimpleScheduler.addToBlacklist():171] add black list 10006
2023-05-20 08:05:07,368 WARN (starrocks-mysql-nio-pool-0|119) [StmtExecutor.execute():472] retry 1 times. stmt: select @@version_comment limit 1
2023-05-20 08:05:07,368 INFO (starrocks-mysql-nio-pool-0|119) [QeProcessorImpl.unregisterQuery():105] deregister query id 08c2c842-f6e5-11ed-8cd2-0242ac110002
2023-05-20 08:05:07,369 INFO (starrocks-mysql-nio-pool-0|119) [StmtExecutor.execute():450] transfer QueryId: 08c2c842-f6e5-11ed-8cd2-0242ac110002 to 1efd4a5a-6b55-4349-bbbc-3e3156696d2a
2023-05-20 08:05:07,369 INFO (starrocks-mysql-nio-pool-0|119) [QeProcessorImpl.registerQuery():95] register query id = 1efd4a5a-6b55-4349-bbbc-3e3156696d2a
2023-05-20 08:05:07,370 WARN (starrocks-mysql-nio-pool-0|119) [CoordinatorPreprocessor.computeFragmentHosts():523] DataPartition UNPARTITIONED. Backend node not found. Check if any backend node is down.
2023-05-20 08:05:07,370 INFO (starrocks-mysql-nio-pool-0|119) [QeProcessorImpl.unregisterQuery():105] deregister query id 1efd4a5a-6b55-4349-bbbc-3e3156696d2a
2023-05-20 08:05:07,371 INFO (starrocks-mysql-nio-pool-0|119) [StmtExecutor.execute():553] execute Exception, sql: select @@version_comment limit 1, error: Backend node not found. Check if any backend node is down.backend: [690bdb80db59 alive: true inBlacklist: true]
2023-05-20 08:05:07,371 WARN (starrocks-mysql-nio-pool-0|119) [Coordinator.cancel():1361] cancel execution of query, this is outside invoke
2023-05-20 08:05:07,376 INFO (starrocks-mysql-nio-pool-1|149) [BackendsProcDir.getClusterBackendInfos():207] backends proc get tablet num cost: 0, total cost: 2
2023-05-20 08:05:11,226 INFO (leaderCheckpointer|91) [BDBJEJournal.getFinalizedJournalId():275] database names: 1
2023-05-20 08:05:11,226 INFO (leaderCheckpointer|91) [Checkpoint.runAfterCatalogReady():95] checkpoint imageVersion 0, checkPointVersion 0
2023-05-20 08:05:11,281 INFO (statistics meta manager|23) [StatisticsMetaManager.createDatabase():66] create statistics db start
2023-05-20 08:05:11,285 INFO (statistics meta manager|23) [LocalMetastore.createDb():398] createDb dbName = _statistics_, id = 10007
2023-05-20 08:05:11,286 INFO (statistics meta manager|23) [StatisticsMetaManager.createDatabase():74] create statistics db down
2023-05-20 08:05:11,286 INFO (statistics meta manager|23) [StatisticsMetaManager.createSampleStatisticsTable():133] create sample statistics table start
2023-05-20 08:05:11,288 INFO (tablet checker|34) [TabletChecker.doCheck():409] finished to check tablets. checkInPrios: true, unhealthy/total/added/in_sched/not_ready: 0/0/0/0/0, cost: 2 ms, in lock time: 2 ms
2023-05-20 08:05:11,288 INFO (tablet checker|34) [TabletChecker.doCheck():409] finished to check tablets. checkInPrios: false, unhealthy/total/added/in_sched/not_ready: 0/0/0/0/0, cost: 0 ms, in lock time: 0 ms
2023-05-20 08:05:11,289 INFO (tablet checker|34) [TabletChecker.runAfterCatalogReady():200] TStat :
TStat num of tablet check round: 4 (+1)
TStat cost of tablet check(ms): 2 (+2)
TStat num of tablet checked in tablet checker: 0 (+0)
TStat num of unhealthy tablet checked in tablet checker: 0 (+0)
TStat num of tablet being added to tablet scheduler: 0 (+0)
TStat num of tablet schedule round: 60 (+20)
TStat cost of tablet schedule(ms): 9 (+1)
TStat num of tablet being scheduled: 0 (+0)
TStat num of tablet being scheduled succeeded: 0 (+0)
TStat num of tablet being scheduled failed: 0 (+0)
TStat num of tablet being scheduled discard: 0 (+0)
TStat num of tablet priority upgraded: 0 (+0)
TStat num of clone task: 0 (+0)
TStat num of clone task succeeded: 0 (+0)
TStat num of clone task failed: 0 (+0)
TStat num of clone task timeout: 0 (+0)
TStat num of replica missing error: 0 (+0)
TStat num of replica version missing error: 0 (+0)
TStat num of replica unavailable error: 0 (+0)
TStat num of replica redundant error: 0 (+0)
TStat num of replica missing in cluster error: 0 (+0)
TStat num of balance scheduled: 0 (+0)
TStat num of colocate replica mismatch: 0 (+0)
TStat num of colocate replica redundant: 0 (+0)
TStat num of colocate balancer running round: 0 (+0)
2023-05-20 08:05:11,292 INFO (colocate group clone checker|99) [ColocateTableBalancer.matchGroup():856] finished to match colocate group. cost: 0 ms, in lock time: 0 ms
2023-05-20 08:05:11,302 INFO (statistics meta manager|23) [Database.checkDataSizeQuota():414] database[_statistics_] data quota: left bytes: 8388608.000 TB / total: 8388608.000 TB
2023-05-20 08:05:11,302 INFO (statistics meta manager|23) [Database.checkReplicaQuota():425] database[_statistics_] replica quota: left number: 9223372036854775807 / total: 9223372036854775807
2023-05-20 08:05:11,338 WARN (agent-task-pool-3|130) [AgentBatchTask.run():195] task exec error. backend[10006]
org.apache.thrift.transport.TTransportException: java.net.ConnectException: Connection refused (Connection refused)
at org.apache.thrift.transport.TSocket.open(TSocket.java:226) ~[libthrift-0.13.0.jar:0.13.0]
at com.starrocks.common.GenericPool$ThriftClientFactory.create(GenericPool.java:144) ~[starrocks-fe.jar:?]
at com.starrocks.common.GenericPool$ThriftClientFactory.create(GenericPool.java:129) ~[starrocks-fe.jar:?]
at org.apache.commons.pool2.BaseKeyedPooledObjectFactory.makeObject(BaseKeyedPooledObjectFactory.java:62) ~[commons-pool2-2.3.jar:2.3]
at org.apache.commons.pool2.impl.GenericKeyedObjectPool.create(GenericKeyedObjectPool.java:1036) ~[commons-pool2-2.3.jar:2.3]
at org.apache.commons.pool2.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:356) ~[commons-pool2-2.3.jar:2.3]
at org.apache.commons.pool2.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:278) ~[commons-pool2-2.3.jar:2.3]
at com.starrocks.common.GenericPool.borrowObject(GenericPool.java:101) ~[starrocks-fe.jar:?]
at com.starrocks.task.AgentBatchTask.run(AgentBatchTask.java:180) [starrocks-fe.jar:?]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) [?:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method) ~[?:?]
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:412) ~[?:?]
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:255) ~[?:?]
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:237) ~[?:?]
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) ~[?:?]
at java.net.Socket.connect(Socket.java:609) ~[?:?]
at org.apache.thrift.transport.TSocket.open(TSocket.java:221) ~[libthrift-0.13.0.jar:0.13.0]
... 13 more
2023-05-20 08:05:11,345 INFO (tablet scheduler|33) [ClusterLoadStatistic.classifyBackendByLoad():165] classify backend by load. medium: HDD, avg load score: 0.5, low/mid/high: 0/1/0
2023-05-20 08:05:11,350 INFO (tablet scheduler|33) [TabletScheduler.updateClusterLoadStatistic():396] update cluster load statistic:
be id: 10006, is available: true, mediums: [{medium: HDD, replica: 10, used: 0, total: 68.9GB, score: 0.5},{medium: SSD, replica: 0, used: 0, total: 0B, score: NaN},], paths: [{path: /data/deploy/starrocks/be/storage, path hash: 5987063680716520938, be: 10006, medium: HDD, used: 0, total: 74085449728},]
2023-05-20 08:05:11,490 WARN (heartbeat-mgr-pool-7|135) [HeartbeatMgr$BackendHeartbeatHandler.call():337] backend heartbeat got exception, addr: 690bdb80db59:9050
org.apache.thrift.transport.TTransportException: java.net.ConnectException: Connection refused (Connection refused)
at org.apache.thrift.transport.TSocket.open(TSocket.java:226) ~[libthrift-0.13.0.jar:0.13.0]
at com.starrocks.common.GenericPool$ThriftClientFactory.create(GenericPool.java:144) ~[starrocks-fe.jar:?]
at com.starrocks.common.GenericPool$ThriftClientFactory.create(GenericPool.java:129) ~[starrocks-fe.jar:?]
at org.apache.commons.pool2.BaseKeyedPooledObjectFactory.makeObject(BaseKeyedPooledObjectFactory.java:62) ~[commons-pool2-2.3.jar:2.3]
at org.apache.commons.pool2.impl.GenericKeyedObjectPool.create(GenericKeyedObjectPool.java:1036) ~[commons-pool2-2.3.jar:2.3]
at org.apache.commons.pool2.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:356) ~[commons-pool2-2.3.jar:2.3]
at org.apache.commons.pool2.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:278) ~[commons-pool2-2.3.jar:2.3]
at com.starrocks.common.GenericPool.borrowObject(GenericPool.java:101) ~[starrocks-fe.jar:?]
at com.starrocks.system.HeartbeatMgr$BackendHeartbeatHandler.call(HeartbeatMgr.java:289) [starrocks-fe.jar:?]
at com.starrocks.system.HeartbeatMgr$BackendHeartbeatHandler.call(HeartbeatMgr.java:275) [starrocks-fe.jar:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method) ~[?:?]
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:412) ~[?:?]
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:255) ~[?:?]
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:237) ~[?:?]
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) ~[?:?]
at java.net.Socket.connect(Socket.java:609) ~[?:?]
at org.apache.thrift.transport.TSocket.open(TSocket.java:221) ~[libthrift-0.13.0.jar:0.13.0]
... 13 more
2023-05-20 08:05:11,491 WARN (heartbeat mgr|32) [HeartbeatMgr.runAfterCatalogReady():177] get bad heartbeat response: type: BACKEND, status: BAD, msg: java.net.ConnectException: Connection refused (Connection refused)
2023-05-20 08:05:16,502 WARN (heartbeat-mgr-pool-0|105) [HeartbeatMgr$BackendHeartbeatHandler.call():337] backend heartbeat got exception, addr: 690bdb80db59:9050
org.apache.thrift.transport.TTransportException: java.net.ConnectException: Connection refused (Connection refused)
    [identical "Connection refused" stack trace as the heartbeat failure at 08:05:11,490 above — omitted for brevity]
2023-05-20 08:05:16,503 WARN (heartbeat mgr|32) [HeartbeatMgr.runAfterCatalogReady():177] get bad heartbeat response: type: BACKEND, status: BAD, msg: java.net.ConnectException: Connection refused (Connection refused)
2023-05-20 08:05:17,396 INFO (starrocks-mysql-nio I/O-3|110) [AcceptListener.handleEvent():70] Connection established. remote=/127.0.0.1:59202
2023-05-20 08:05:17,405 INFO (starrocks-mysql-nio-pool-1|149) [QeProcessorImpl.registerQuery():95] register query id = 0ec357a5-f6e5-11ed-8cd2-0242ac110002
2023-05-20 08:05:17,419 WARN (starrocks-mysql-nio-pool-1|149) [BackendServiceClient.execBatchPlanFragmentsAsync():131] Execute batch plan fragments catch a exception, address=690bdb80db59:8060
java.lang.RuntimeException: Unable to validate object
    [identical "Unable to validate object" stack trace as the execBatchPlanFragmentsAsync failure at 08:05:07,346 above — omitted for brevity]
2023-05-20 08:05:17,420 WARN (starrocks-mysql-nio-pool-1|149) [Coordinator.deliverExecBatchFragmentsRequests():1027] exec plan fragment failed, errmsg=Unable to validate object, host: 690bdb80db59, code: THRIFT_RPC_ERROR, fragmentId=F00, backend=690bdb80db59:9060
2023-05-20 08:05:17,434 WARN (starrocks-mysql-nio-pool-1|149) [BackendServiceClient.cancelPlanFragmentAsync():174] Cancel plan fragment catch a exception, address=690bdb80db59:8060
java.lang.RuntimeException: Unable to validate object
at com.baidu.jprotobuf.pbrpc.transport.ChannelPool.getChannel(ChannelPool.java:86) ~[jprotobuf-rpc-core-4.2.1.jar:?]
at com.baidu.jprotobuf.pbrpc.transport.RpcChannel.getConnection(RpcChannel.java:73) ~[jprotobuf-rpc-core-4.2.1.jar:?]
at com.baidu.jprotobuf.pbrpc.client.ProtobufRpcProxy.invoke(ProtobufRpcProxy.java:499) ~[jprotobuf-rpc-core-4.2.1.jar:?]
at com.sun.proxy.$Proxy32.cancelPlanFragmentAsync(Unknown Source) ~[?:?]
at com.starrocks.rpc.BackendServiceClient.cancelPlanFragmentAsync(BackendServiceClient.java:157) ~[starrocks-fe.jar:?]
at com.starrocks.qe.Coordinator$BackendExecState.cancelFragmentInstance(Coordinator.java:1870) ~[starrocks-fe.jar:?]
at com.starrocks.qe.Coordinator.cancelRemoteFragmentsAsync(Coordinator.java:1392) ~[starrocks-fe.jar:?]
at com.starrocks.qe.Coordinator.cancelInternal(Coordinator.java:1379) ~[starrocks-fe.jar:?]
at com.starrocks.qe.Coordinator.handleErrorBackendExecState(Coordinator.java:545) ~[starrocks-fe.jar:?]
at com.starrocks.qe.Coordinator.deliverExecBatchFragmentsRequests(Coordinator.java:1046) ~[starrocks-fe.jar:?]
at com.starrocks.qe.Coordinator.deliverExecFragments(Coordinator.java:536) ~[starrocks-fe.jar:?]
at com.starrocks.qe.Coordinator.exec(Coordinator.java:464) ~[starrocks-fe.jar:?]
at com.starrocks.qe.StmtExecutor.handleQueryStmt(StmtExecutor.java:766) ~[starrocks-fe.jar:?]
at com.starrocks.qe.StmtExecutor.execute(StmtExecutor.java:458) ~[starrocks-fe.jar:?]
at com.starrocks.qe.ConnectProcessor.handleQuery(ConnectProcessor.java:348) ~[starrocks-fe.jar:?]
at com.starrocks.qe.ConnectProcessor.dispatch(ConnectProcessor.java:462) ~[starrocks-fe.jar:?]
at com.starrocks.qe.ConnectProcessor.processOnce(ConnectProcessor.java:728) ~[starrocks-fe.jar:?]
at com.starrocks.mysql.nio.ReadListener.lambda$handleEvent$0(ReadListener.java:69) ~[starrocks-fe.jar:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: java.util.NoSuchElementException: Unable to validate object
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:506) ~[commons-pool2-2.3.jar:2.3]
at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:363) ~[commons-pool2-2.3.jar:2.3]
at com.baidu.jprotobuf.pbrpc.transport.ChannelPool.getChannel(ChannelPool.java:80) ~[jprotobuf-rpc-core-4.2.1.jar:?]
... 20 more
2023-05-20 08:05:17,435 WARN (starrocks-mysql-nio-pool-1|149) [Coordinator$BackendExecState.cancelFragmentInstance():1873] cancel plan fragment get a exception, address=690bdb80db59:8060
2023-05-20 08:05:17,436 INFO (starrocks-mysql-nio-pool-1|149) [Coordinator.cancelInternal():1384] unfinished instance: {}
2023-05-20 08:05:17,436 WARN (starrocks-mysql-nio-pool-1|149) [SimpleScheduler.addToBlacklist():171] add black list 10006
2023-05-20 08:05:17,436 WARN (starrocks-mysql-nio-pool-1|149) [StmtExecutor.execute():472] retry 1 times. stmt: select @@version_comment limit 1
2023-05-20 08:05:17,436 INFO (starrocks-mysql-nio-pool-1|149) [QeProcessorImpl.unregisterQuery():105] deregister query id 0ec357a5-f6e5-11ed-8cd2-0242ac110002
2023-05-20 08:05:17,437 INFO (starrocks-mysql-nio-pool-1|149) [StmtExecutor.execute():450] transfer QueryId: 0ec357a5-f6e5-11ed-8cd2-0242ac110002 to f625712b-ddaa-41cb-a2aa-5223ca900f15
2023-05-20 08:05:17,437 INFO (starrocks-mysql-nio-pool-1|149) [QeProcessorImpl.registerQuery():95] register query id = f625712b-ddaa-41cb-a2aa-5223ca900f15
2023-05-20 08:05:17,438 WARN (starrocks-mysql-nio-pool-1|149) [CoordinatorPreprocessor.computeFragmentHosts():523] DataPartition UNPARTITIONED. Backend node not found. Check if any backend node is down.
2023-05-20 08:05:17,438 INFO (starrocks-mysql-nio-pool-1|149) [QeProcessorImpl.unregisterQuery():105] deregister query id f625712b-ddaa-41cb-a2aa-5223ca900f15
2023-05-20 08:05:17,438 INFO (starrocks-mysql-nio-pool-1|149) [StmtExecutor.execute():553] execute Exception, sql: select @@version_comment limit 1, error: Backend node not found. Check if any backend node is down.backend: [690bdb80db59 alive: true inBlacklist: true]
2023-05-20 08:05:17,439 WARN (starrocks-mysql-nio-pool-1|149) [Coordinator.cancel():1361] cancel execution of query, this is outside invoke
2023-05-20 08:05:17,442 INFO (starrocks-mysql-nio-pool-1|149) [BackendsProcDir.getClusterBackendInfos():207] backends proc get tablet num cost: 0, total cost: 2
2023-05-20 08:05:21,339 WARN (statistics meta manager|23) [LocalMetastore.waitForFinished():1938] fail to create tablet: timed out. unfinished replicas(3/10): 10017(690bdb80db59) 10019(690bdb80db59) 10021(690bdb80db59) timeout=10s
2023-05-20 08:05:21,340 WARN (statistics meta manager|23) [StatisticsMetaManager.createSampleStatisticsTable():155] Failed to create sample statisticsfail to create tablet: timed out. unfinished replicas(3/10): 10017(690bdb80db59) 10019(690bdb80db59) 10021(690bdb80db59) timeout=10s
2023-05-20 08:05:21,340 WARN (statistics meta manager|23) [StatisticsMetaManager.refreshStatisticsTable():298] create statistics table table_statistic_v1 failed
2023-05-20 08:05:21,510 WARN (heartbeat-mgr-pool-3|122) [HeartbeatMgr$BackendHeartbeatHandler.call():337] backend heartbeat got exception, addr: 690bdb80db59:9050
org.apache.thrift.transport.TTransportException: java.net.ConnectException: Connection refused (Connection refused)
at org.apache.thrift.transport.TSocket.open(TSocket.java:226) ~[libthrift-0.13.0.jar:0.13.0]
at com.starrocks.common.GenericPool$ThriftClientFactory.create(GenericPool.java:144) ~[starrocks-fe.jar:?]
at com.starrocks.common.GenericPool$ThriftClientFactory.create(GenericPool.java:129) ~[starrocks-fe.jar:?]
at org.apache.commons.pool2.BaseKeyedPooledObjectFactory.makeObject(BaseKeyedPooledObjectFactory.java:62) ~[commons-pool2-2.3.jar:2.3]
at org.apache.commons.pool2.impl.GenericKeyedObjectPool.create(GenericKeyedObjectPool.java:1036) ~[commons-pool2-2.3.jar:2.3]
at org.apache.commons.pool2.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:356) ~[commons-pool2-2.3.jar:2.3]
at org.apache.commons.pool2.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:278) ~[commons-pool2-2.3.jar:2.3]
at com.starrocks.common.GenericPool.borrowObject(GenericPool.java:101) ~[starrocks-fe.jar:?]
at com.starrocks.system.HeartbeatMgr$BackendHeartbeatHandler.call(HeartbeatMgr.java:289) [starrocks-fe.jar:?]
at com.starrocks.system.HeartbeatMgr$BackendHeartbeatHandler.call(HeartbeatMgr.java:275) [starrocks-fe.jar:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method) ~[?:?]
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:412) ~[?:?]
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:255) ~[?:?]
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:237) ~[?:?]
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) ~[?:?]
at java.net.Socket.connect(Socket.java:609) ~[?:?]
at org.apache.thrift.transport.TSocket.open(TSocket.java:221) ~[libthrift-0.13.0.jar:0.13.0]
... 13 more
2023-05-20 08:05:21,512 WARN (heartbeat mgr|32) [HeartbeatMgr.runAfterCatalogReady():177] get bad heartbeat response: type: BACKEND, status: BAD, msg: java.net.ConnectException: Connection refused (Connection refused)
2023-05-20 08:05:21,512 INFO (heartbeat mgr|32) [ComputeNode.handleHbResponse():501] Backend [id=10006, host=690bdb80db59, heartbeatPort=9050, alive=false] is dead due to exceed heartbeatRetryTimes
2023-05-20 08:05:21,512 INFO (heartbeat mgr|32) [CoordinatorMonitor.addDeadBackend():60] add backend 10006 to dead backend queue
2023-05-20 08:05:26,525 WARN (heartbeat-mgr-pool-5|132) [HeartbeatMgr$BackendHeartbeatHandler.call():337] backend heartbeat got exception, addr: 690bdb80db59:9050
org.apache.thrift.transport.TTransportException: java.net.ConnectException: Connection refused (Connection refused)
at org.apache.thrift.transport.TSocket.open(TSocket.java:226) ~[libthrift-0.13.0.jar:0.13.0]
at com.starrocks.common.GenericPool$ThriftClientFactory.create(GenericPool.java:144) ~[starrocks-fe.jar:?]
at com.starrocks.common.GenericPool$ThriftClientFactory.create(GenericPool.java:129) ~[starrocks-fe.jar:?]
at org.apache.commons.pool2.BaseKeyedPooledObjectFactory.makeObject(BaseKeyedPooledObjectFactory.java:62) ~[commons-pool2-2.3.jar:2.3]
at org.apache.commons.pool2.impl.GenericKeyedObjectPool.create(GenericKeyedObjectPool.java:1036) ~[commons-pool2-2.3.jar:2.3]
at org.apache.commons.pool2.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:356) ~[commons-pool2-2.3.jar:2.3]
at org.apache.commons.pool2.impl.GenericKeyedObjectPool.borrowObject(GenericKeyedObjectPool.java:278) ~[commons-pool2-2.3.jar:2.3]
at com.starrocks.common.GenericPool.borrowObject(GenericPool.java:101) ~[starrocks-fe.jar:?]
at com.starrocks.system.HeartbeatMgr$BackendHeartbeatHandler.call(HeartbeatMgr.java:289) [starrocks-fe.jar:?]
at com.starrocks.system.HeartbeatMgr$BackendHeartbeatHandler.call(HeartbeatMgr.java:275) [starrocks-fe.jar:?]
at java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
at java.lang.Thread.run(Thread.java:829) [?:?]
Caused by: java.net.ConnectException: Connection refused (Connection refused)
at java.net.PlainSocketImpl.socketConnect(Native Method) ~[?:?]
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:412) ~[?:?]
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:255) ~[?:?]
at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:237) ~[?:?]
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) ~[?:?]
at java.net.Socket.connect(Socket.java:609) ~[?:?]
at org.apache.thrift.transport.TSocket.open(TSocket.java:221) ~[libthrift-0.13.0.jar:0.13.0]
... 13 more
2023-05-20 08:05:26,527 WARN (heartbeat mgr|32) [HeartbeatMgr.runAfterCatalogReady():177] get bad heartbeat response: type: BACKEND, status: BAD, msg: java.net.ConnectException: Connection refused (Connection refused)
2023-05-20 08:05:27,464 INFO (starrocks-mysql-nio I/O-1|108) [AcceptListener.handleEvent():70] Connection established. remote=/127.0.0.1:50820
2023-05-20 08:05:27,473 INFO (starrocks-mysql-nio-pool-1|149) [QeProcessorImpl.registerQuery():95] register query id = 14c398e7-f6e5-11ed-8cd2-0242ac110002
2023-05-20 08:05:27,474 WARN (starrocks-mysql-nio-pool-1|149) [CoordinatorPreprocessor.computeFragmentHosts():523] DataPartition UNPARTITIONED. Backend node not found. Check if any backend node is down.
2023-05-20 08:05:27,474 INFO (starrocks-mysql-nio-pool-1|149) [QeProcessorImpl.unregisterQuery():105] deregister query id 14c398e7-f6e5-11ed-8cd2-0242ac110002
2023-05-20 08:05:27,475 INFO (starrocks-mysql-nio-pool-1|149) [StmtExecutor.execute():553] execute Exception, sql: select @@version_comment limit 1, error: Backend node not found. Check if any backend node is down.backend: [690bdb80db59 alive: false inBlacklist: false]
2023-05-20 08:05:27,475 WARN (starrocks-mysql-nio-pool-1|149) [Coordinator.cancel():1361] cancel execution of query, this is outside invoke
2023-05-20 08:05:27,479 INFO (starrocks-mysql-nio-pool-1|149) [BackendsProcDir.getClusterBackendInfos():207] backends proc get tablet num cost: 0, total cost: 1
I didn't notice that your CPU doesn't support the SSE4.2
instruction set either. I pushed a new image, lvlouisaslia/allin1-ubuntu:3.0.0-no-avx2-no-sse42,
with SSE4.2 disabled as well. You can try again.
Since the CPU doesn't support some of the vectorization instruction sets, you can also refer to our documentation and build from source; that way the resulting binary is guaranteed to run on the host.
https://docs.starrocks.io/en-us/main/developers/build-starrocks/build_starrocks_on_ubuntu https://docs.starrocks.io/en-us/main/developers/build-starrocks/Build_in_docker
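As a side note, you can check whether the host CPU exposes the AVX2 and SSE4.2 flags before picking an image. A minimal sketch (the `check_flags` helper is hypothetical, shown only to illustrate reading the flag list from `/proc/cpuinfo` on Linux):

```shell
#!/bin/sh
# Report which of the SIMD extensions the default StarRocks image relies on
# are missing from a given CPU flag list.
check_flags() {
    flags=" $1 "
    missing=""
    for f in avx2 sse4_2; do
        case "$flags" in
            *" $f "*) ;;                 # flag present
            *) missing="$missing $f" ;;  # flag absent
        esac
    done
    echo "missing:$missing"
}

# On Linux, the real flag list comes from /proc/cpuinfo:
#   check_flags "$(grep -m1 '^flags' /proc/cpuinfo | cut -d: -f2)"
check_flags "fpu sse sse2 sse4_2"   # -> missing: avx2
```

If `avx2` is missing, use the `-no-avx2` image; if `sse4_2` is also missing, use the `-no-avx2-no-sse42` image or build from source.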
Now it works, thank you again!
You can close this issue as done.
Steps to reproduce the behavior (Required)
sudo docker run -p 9030:9030 -p 8030:8030 -p 8040:8040 -it registry.starrocks.io/starrocks/allin1-ubuntu
Expected behavior (Required)
A running container that it is possible to connect to, following the document https://docs.starrocks.io/en-us/latest/quick_start/deploy_with_docker
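For reference, the deploy-and-connect flow the quick-start document describes boils down to two commands. A sketch (shown as echoed strings so it is safe to run anywhere; the empty root password is the first-start default mentioned above):

```shell
#!/bin/sh
# Ports published by the all-in-one image: FE query (9030), FE HTTP (8030), BE HTTP (8040).
IMAGE="registry.starrocks.io/starrocks/allin1-ubuntu"
RUN_CMD="docker run -p 9030:9030 -p 8030:8030 -p 8040:8040 -itd ${IMAGE}"

# Connect with any MySQL-compatible client; root has an empty password on first start.
CONNECT_CMD="mysql -h 127.0.0.1 -P 9030 -u root"

echo "${RUN_CMD}"
echo "${CONNECT_CMD}"
```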
Real behavior (Required)
or sometimes: