StarRocks / starrocks

The world's fastest open query engine for sub-second analytics both on and off the data lakehouse. With the flexibility to support nearly any scenario, StarRocks provides best-in-class performance for multi-dimensional analytics, real-time analytics, and ad-hoc queries. A Linux Foundation project.
https://starrocks.io
Apache License 2.0

Query statement fails with ERROR: MaxMessageSize reached #52254

Open zxb2503 opened 3 days ago

zxb2503 commented 3 days ago

Steps to reproduce the behavior (Required)

  1. Use the SDM (starrocks-cluster-sync) tool to synchronize data from the shared-nothing cluster to the shared-data cluster.

  2. The data volume after synchronization is as follows:

         hdfs dfs -du -h /data
         51.5 T  154.4 T  /data/dcsr

  3. Start compaction. The config params are as follows:

     FE:
         lake_compaction_max_tasks=-1
         lake_autovacuum_parallel_partitions=32

     BE:
         compact_threads = 64
         compact_thread_pool_queue_size = 1000
         max_cumulative_compaction_num_singleton_deltas=100

  4. Execute the query statements (a narrowed-query sketch follows this list):

         select * from information_schema.partitions_meta pm where PARTITION_ID = '5054238';
         ERROR 1064 (HY000): FE RPC failure, address=TNetworkAddress(hostname=10.xxx.xx.12, port=9020), reason=MaxMessageSize reached, host: unknown

         select count(*) from information_schema.partitions_meta pm;
         ERROR 1064 (HY000): FE RPC failure, address=TNetworkAddress(hostname=10.xxx.xx.12, port=9020), reason=MaxMessageSize reached, host: unknown
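A hedged workaround sketch, assuming the FE can prune the partitions_meta scan by DB_NAME/TABLE_NAME before serializing the result (this pruning behavior is not verified here); `my_db` and `my_table` are placeholder names, not from the report:

```sql
-- Sketch: narrow the metadata scan before filtering on PARTITION_ID.
-- Assumption: DB_NAME/TABLE_NAME predicates reduce what the FE has to return;
-- 'my_db' and 'my_table' are placeholders.
select *
from information_schema.partitions_meta
where DB_NAME = 'my_db'
  and TABLE_NAME = 'my_table'
  and PARTITION_ID = '5054238';
```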

StarRocks version (Required)

StarRocks 3.3.4-56bcf6f
run_mode = shared_data
3 FE / 12 BE

xiangguangyxg commented 3 days ago

Your cluster has too many partitions; the metadata for all of them exceeds the RPC message size limit.

zxb2503 commented 3 days ago

The number of partitions in the cluster is 1,039,578.
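For rough scale (assumed figures, not measured): if each partitions_meta row serializes to somewhere between a few hundred bytes and ~1 KB, then 1,039,578 rows come to roughly 100 MB–1 GB, which is plausibly above the Thrift message-size cap (commonly on the order of 100 MB by default) that produces the MaxMessageSize error.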