Closed: dcramer closed this 4 years ago
After tweaking random_page_cost, this returns in an acceptable amount of time, albeit still slowly:
```
Sort  (cost=5336923.41..5337315.59 rows=156873 width=167) (actual time=15894.227..15895.152 rows=4381 loops=1)
  Sort Key: (avg(t.duration)) DESC
  Sort Method: quicksort  Memory: 1359kB
  InitPlan 1 (returns $0)
    ->  Index Scan using unq_repo_name on repository  (cost=0.27..18.77 rows=1 width=16) (actual time=0.021..0.044 rows=1 loops=1)
          Index Cond: (((owner_name)::text = 'getsentry'::text) AND ((name)::text = 'sentry'::text))
  ->  GroupAggregate  (cost=5312091.27..5316013.10 rows=156873 width=167) (actual time=13579.396..15890.345 rows=4381 loops=1)
        Group Key: t.hash, t.name
        ->  Sort  (cost=5312091.27..5312483.45 rows=156873 width=131) (actual time=13578.991..15239.979 rows=3163907 loops=1)
              Sort Key: t.hash, t.name
              Sort Method: external merge  Disk: 465696kB
              ->  Nested Loop  (cost=1.13..5292425.73 rows=156873 width=131) (actual time=0.080..2314.057 rows=3163907 loops=1)
                    ->  Index Scan using idx_job_finished on job j  (cost=0.43..443.73 rows=449 width=16) (actual time=0.062..9.326 rows=12109 loops=1)
                          Index Cond: ((repository_id = $0) AND (status = 3) AND (date_finished > (now() - '14 days'::interval)))
                    ->  Index Scan using unq_testcase_hash on testcase t  (cost=0.70..11668.60 rows=11755 width=147) (actual time=0.008..0.129 rows=261 loops=12109)
                          Index Cond: (job_id = j.id)
Planning time: 0.462 ms
Execution time: 15990.365 ms
(18 rows)
```
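For context, a random_page_cost tweak like the one mentioned above might look like the following sketch. The value 1.1 is a common choice for SSD-backed storage and is an assumption here, not the value actually used in this thread. Note also that the inner sort spills ~455 MB to disk ("Sort Method: external merge  Disk: 465696kB"), so work_mem is another knob worth examining:

```sql
-- Lower the planner's estimated cost of random page reads (default: 4.0).
-- 1.1 is a typical SSD value; hypothetical here, not taken from the thread.
SET random_page_cost = 1.1;

-- Then re-run EXPLAIN ANALYZE on the query above to compare plans.
```

Setting it with SET applies only to the current session; ALTER SYSTEM or postgresql.conf would make it permanent.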
bors r+
Pull request successfully merged into master.
Build succeeded: