Closed 5 months ago
The next scan carries over the lower boundary from the previous one, so the number of scanned rows is similar to the "More efficient way".
SELECT LOW_PRIORITY `id` FROM `test`.`sbtest2` WHERE `id` >= 5426426 AND `id` < 6251583 AND `created_at` < FROM_UNIXTIME(1713338305) ORDER BY `id` ASC LIMIT 500;
SELECT LOW_PRIORITY `id` FROM `test`.`sbtest2` WHERE `id` >= 4602563 AND `id` < 5426426 AND `created_at` < FROM_UNIXTIME(1713338305) ORDER BY `id` ASC LIMIT 500;
SELECT LOW_PRIORITY `id` FROM `test`.`sbtest2` WHERE `id` > 4603062 AND `id` < 5426426 AND `created_at` < FROM_UNIXTIME(1713338305) ORDER BY `id` ASC LIMIT 500;
SELECT LOW_PRIORITY `id` FROM `test`.`sbtest2` WHERE `id` > 4603562 AND `id` < 5426426 AND `created_at` < FROM_UNIXTIME(1713338305) ORDER BY `id` ASC LIMIT 500;
SELECT LOW_PRIORITY `id` FROM `test`.`sbtest2` WHERE `id` > 4604062 AND `id` < 5426426 AND `created_at` < FROM_UNIXTIME(1713338305) ORDER BY `id` ASC LIMIT 500;
SELECT LOW_PRIORITY `id` FROM `test`.`sbtest2` WHERE `id` > 4604562 AND `id` < 5426426 AND `created_at` < FROM_UNIXTIME(1713338305) ORDER BY `id` ASC LIMIT 500;
SELECT LOW_PRIORITY `id` FROM `test`.`sbtest2` WHERE `id` > 4605062 AND `id` < 5426426 AND `created_at` < FROM_UNIXTIME(1713338305) ORDER BY `id` ASC LIMIT 500;
SELECT LOW_PRIORITY `id` FROM `test`.`sbtest2` WHERE `id` > 4605562 AND `id` < 5426426 AND `created_at` < FROM_UNIXTIME(1713338305) ORDER BY `id` ASC LIMIT 500;
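The carry-over behavior in the queries above can be sketched as a keyset-pagination loop. This is a minimal Python simulation against an in-memory table, not TiDB's actual implementation; `scan_range`, `select_ids`, and `BATCH` are illustrative names:

```python
# Simulation of the scan pattern in the SQL log above: the first query in a
# range uses an inclusive lower bound (`id >= lower`), and every subsequent
# query carries over the last seen id as an exclusive bound (`id > last_id`).
BATCH = 500

def select_ids(table, lower, upper, inclusive, limit):
    """Stand-in for: SELECT id FROM t WHERE id >(=) lower AND id < upper
    ORDER BY id ASC LIMIT limit."""
    op = (lambda i: i >= lower) if inclusive else (lambda i: i > lower)
    return sorted(i for i in table if op(i) and i < upper)[:limit]

def scan_range(table, lower, upper):
    batches = []
    inclusive = True            # first query: id >= lower
    while True:
        ids = select_ids(table, lower, upper, inclusive, BATCH)
        if not ids:
            break
        batches.append(ids)
        lower = ids[-1]         # carry over the boundary to the next scan
        inclusive = False       # subsequent queries: id > last_id
        if len(ids) < BATCH:
            break
    return batches

# Example: 1200 consecutive ids starting at the boundary from the log above
table = list(range(4602563, 4602563 + 1200))
batches = scan_range(table, 4602563, 5426426)
print([len(b) for b in batches])  # -> [500, 500, 200]
```

With contiguous ids this reproduces the boundaries seen in the log: the first batch ends at 4603062, so the next query uses `id > 4603062`, then `id > 4603562`, and so on.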
Enhancement
Version: 7.1.4 & 8.0.0. For TTL SQL with the `LIMIT` clause, the scan speed decreases as more and more deleted MVCC versions accumulate in RocksDB. TTL scan SQL:
By default, all regions of this table (row data) are split into 64 parts as uniformly as possible. Assuming the customer needs to archive a table of 6.4 billion rows every day, each part still contains 100 million rows. I then tested the scanning performance of a single part; the test should show a similar effect.
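The 64-way split can be sketched as below. This is a hedged illustration of the arithmetic only; in TiDB the actual split points come from region boundaries, not an even division of the id space:

```python
def split_range(lo, hi, parts=64):
    """Split [lo, hi) into `parts` contiguous sub-ranges, as uniform as possible."""
    total = hi - lo
    step, rem = divmod(total, parts)
    ranges, start = [], lo
    for i in range(parts):
        # Distribute any remainder one row at a time across the first parts
        end = start + step + (1 if i < rem else 0)
        ranges.append((start, end))
        start = end
    return ranges

# 6.4 billion rows split 64 ways -> 100 million rows per part
ranges = split_range(0, 6_400_000_000)
print(len(ranges), ranges[0])  # -> 64 (0, 100000000)
```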
More efficient way: