Without this index, dequeuing falls back to an external merge sort on these three columns, which can be very costly when there is a large backlog.
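For reference, a sketch of the DDL for such an index, assuming it covers the sort key shown in the query plans (the authoritative column list is the one in the actual migration/schema code in this PR):

```sql
-- Hypothetical DDL sketch: the index name matches the plan output below,
-- and the columns mirror the Sort Key (priority DESC, visible, id).
CREATE INDEX idx_dequeue_rebus_transport
    ON rebus_transport (priority DESC, visible, id);
```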
With about 100,000 rows in the transport table, this is the query plan before adding the index:
```
                                                                     QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Delete on rebus_transport  (cost=13658.82..17717.99 rows=1 width=6) (actual time=70.636..73.416 rows=1 loops=1)
   InitPlan 1 (returns $1)
     ->  Limit  (cost=13658.39..13658.41 rows=1 width=22) (actual time=70.267..70.267 rows=1 loops=1)
           ->  LockRows  (cost=13658.39..13798.23 rows=11187 width=22) (actual time=70.267..70.267 rows=1 loops=1)
                 ->  Sort  (cost=13658.39..13686.36 rows=11187 width=22) (actual time=70.249..70.249 rows=1 loops=1)
                       Sort Key: rebus_transport_1.priority DESC, rebus_transport_1.visible, rebus_transport_1.id
                       Sort Method: external merge  Disk: 3160kB
                       ->  Seq Scan on rebus_transport rebus_transport_1  (cost=0.00..13602.46 rows=11187 width=22) (actual time=0.011..26.401 rows=100686 loops=1)
                             Filter: ((recipient = 'messages'::text) AND (visible < clock_timestamp()) AND (expiration > clock_timestamp()))
   ->  Index Scan using rebus_transport_pkey on rebus_transport  (cost=0.42..4059.58 rows=1 width=6) (actual time=70.630..73.409 rows=1 loops=1)
         Index Cond: (id = $1)
 Planning Time: 0.114 ms
 Execution Time: 74.751 ms
(13 rows)
```
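For context, the dequeue statement behind these plans has roughly the following shape, reconstructed from the `Filter`, `Sort Key`, and `InitPlan` nodes in the plan output above; the exact SQL generated by the PostgreSQL transport may differ:

```sql
-- Reconstructed sketch, not copied from the Rebus source: column names come
-- from the plan output, the locking clause and RETURNING are inferred.
DELETE FROM rebus_transport
WHERE id = (
    SELECT id
    FROM rebus_transport
    WHERE recipient = 'messages'
      AND visible < clock_timestamp()
      AND expiration > clock_timestamp()
    ORDER BY priority DESC, visible, id
    LIMIT 1
    FOR UPDATE SKIP LOCKED
);
```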
After adding the index, the results are much better:
```
                                                                                     QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Delete on rebus_transport  (cost=3.62..2766.76 rows=1 width=6) (actual time=0.048..1.647 rows=1 loops=1)
   InitPlan 1 (returns $2)
     ->  Limit  (cost=0.42..3.20 rows=1 width=22) (actual time=0.034..0.034 rows=1 loops=1)
           ->  LockRows  (cost=0.42..31103.62 rows=11187 width=22) (actual time=0.033..0.033 rows=1 loops=1)
                 ->  Index Scan using idx_dequeue_rebus_transport on rebus_transport rebus_transport_1  (cost=0.42..30991.75 rows=11187 width=22) (actual time=0.012..0.012 rows=1 loops=1)
                       Filter: ((recipient = 'messages'::text) AND (visible < clock_timestamp()) AND (expiration > clock_timestamp()))
   ->  Index Scan using idx_dequeue_rebus_transport on rebus_transport  (cost=0.42..2763.57 rows=1 width=6) (actual time=0.041..1.639 rows=1 loops=1)
         Index Cond: (id = $2)
 Planning Time: 0.135 ms
 Execution Time: 1.670 ms
(10 rows)
```
Rebus is MIT-licensed. The code submitted in this pull request needs to carry the MIT license too. By leaving this text in, I hereby acknowledge that the code submitted in the pull request has the MIT license and can be merged with the Rebus codebase.