Open · lusterchris opened this issue on 10/30/2024
To optimize the aggregate strategy in PostgreSQL, especially when working with large datasets, you can use several techniques to reduce the time taken by the GroupAggregate in your query. Here's how:
In PostgreSQL, HashAggregate is typically faster than GroupAggregate, especially with large datasets. Forcing HashAggregate can be effective, but it requires that the dataset fits in memory. If your data is too large, you can consider increasing the work_mem setting temporarily.
Temporarily raise work_mem to allow the optimizer to use HashAggregate:
SET work_mem = '500MB'; -- Adjust depending on available memory
Then run your query to see if PostgreSQL switches to HashAggregate.
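As a minimal sketch (the 500MB value is just an assumption; size it to your available RAM), you can scope the change to a single transaction so it doesn't affect other sessions:
BEGIN;
SET LOCAL work_mem = '500MB';  -- applies only until COMMIT/ROLLBACK
-- run EXPLAIN (ANALYZE, BUFFERS) on your aggregation query here
COMMIT;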
work_mem and maintenance_work_mem help determine the amount of memory available for operations like sorting and hashing. You may want to increase these values, particularly if you see a lot of external disk sorting in your plan.
SET work_mem = '1GB'; -- or a suitable value based on your system’s RAM
SET maintenance_work_mem = '2GB'; -- for larger sorts and hash operations
Partitioning can break the data into smaller subsets, speeding up aggregation by letting PostgreSQL prune partitions and use parallelism more effectively. Using Common Table Expressions (CTEs) or temporary tables is a way to handle intermediate aggregations; see the CTE query and the temporary-table sketch below.
WITH partitioned AS (
SELECT portfolio_uid, as_of_date, listing_uid,
value_usd, variation_margin_usd, accruals_usd, n_shares
FROM codex.am_positions
WHERE archive_timestamp IS NULL
)
SELECT pb.portfolio_uid,
cal.as_of_date,
p.listing_uid,
sum(p.value_usd) AS clean_mv_usd,
sum(p.accruals_usd) AS accruals_and_reclaims_usd,
sum(p.n_shares) AS qty
FROM onyx_api.portfolio_breakdowns pb
JOIN partitioned p ON pb.am_portfolio_uid = p.portfolio_uid
JOIN onyx_api.rollup_dates cal ON p.as_of_date = cal.as_of_date
GROUP BY pb.portfolio_uid, cal.as_of_date, p.listing_uid;
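If the CTE alone doesn't help, a rough temporary-table variant of the same idea (names follow the query above; ANALYZE gives the planner fresh statistics on the intermediate result):
CREATE TEMP TABLE tmp_positions AS
SELECT portfolio_uid, as_of_date, listing_uid, value_usd, accruals_usd, n_shares
FROM codex.am_positions
WHERE archive_timestamp IS NULL;
ANALYZE tmp_positions;
-- then join tmp_positions in place of the "partitioned" CTE above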
Your current plan is using 7 parallel workers, but you might benefit from increasing this number if your hardware allows.
SET max_parallel_workers_per_gather = 8; -- Adjust based on CPU count
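Note that the per-gather setting is also capped by the cluster-wide worker limits, so check those too (max_worker_processes can only be changed with a server restart):
SHOW max_parallel_workers;   -- total parallel workers the cluster allows
SHOW max_worker_processes;   -- overall background-worker limit (restart to change)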
Since your query filters on archive_timestamp IS NULL, creating a partial index can speed up index scans by reducing the amount of data scanned.
CREATE INDEX idx_am_position_revisions_partial
ON codex.am_positions (portfolio_uid, as_of_date)
WHERE archive_timestamp IS NULL;
After applying these changes, re-run EXPLAIN ANALYZE to see the improvements in performance. These changes should reduce GroupAggregate times and optimize memory use during aggregation.
In PostgreSQL, the index that will perform better depends on your query patterns and how frequently you query dates within the specified range.
Filtered index:
CREATE INDEX idx_pos_date_filtered ON onyx_api.position_rollups(as_of_date) WHERE as_of_date BETWEEN '2004-01-30' AND '2024-09-01';
This index only includes rows with as_of_date between 2004-01-30 and 2024-09-01, making it smaller in size compared to a full index. Queries that filter by this range should benefit, as the index is optimized for exactly that data subset.
Full index:
CREATE INDEX idx_pos_date_full ON onyx_api.position_rollups(as_of_date);
This index covers every as_of_date value, so it can serve any as_of_date query. However, if most of your queries fall within '2004-01-30' to '2024-09-01', it may not be as efficient as the filtered index.
If most queries target '2004-01-30' to '2024-09-01', the filtered index is better: it will consume less space and offer better performance within that range. If both cases are relevant, you could use both indexes for optimized performance on queries within and outside of the specified date range:
CREATE INDEX idx_pos_date_filtered ON onyx_api.position_rollups(as_of_date)
WHERE as_of_date BETWEEN '2004-01-30' AND '2024-09-01';
CREATE INDEX idx_pos_date_full ON onyx_api.position_rollups(as_of_date);
This approach allows PostgreSQL to choose the best index depending on the query's date range, providing flexibility with minimal impact.
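Keep in mind that the planner only considers the filtered index when it can prove the query's predicate falls inside the index's WHERE clause, for example (illustrative dates):
EXPLAIN
SELECT count(*)
FROM onyx_api.position_rollups
WHERE as_of_date BETWEEN '2010-01-01' AND '2010-12-31';
-- a filter outside 2004-01-30..2024-09-01 would have to use idx_pos_date_full instead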
REBUILD INDEXES
-- For am_position_revisions
SELECT indexname, indexdef
FROM pg_indexes
WHERE tablename = 'am_position_revisions'
  AND (indexdef ILIKE '%listing_uid%'
       OR indexdef ILIKE '%as_of_date%'
       OR indexdef ILIKE '%portfolio_uid%');

-- For portfolio_breakdowns
SELECT indexname, indexdef
FROM pg_indexes
WHERE tablename = 'portfolio_breakdowns'
  AND (indexdef ILIKE '%am_portfolio_uid%'
       OR indexdef ILIKE '%effective_during%');

-- For rollup_dates
SELECT indexname, indexdef
FROM pg_indexes
WHERE tablename = 'rollup_dates'
  AND indexdef ILIKE '%as_of_date%';
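Once the relevant indexes are identified, they can be rebuilt without blocking writes; the index name below is just an example taken from the table definition further down (REINDEX ... CONCURRENTLY needs PostgreSQL 12+, and 14+ for partitioned indexes):
REINDEX INDEX CONCURRENTLY codex.am_position_revisions_portfolio_uid_as_of_date_idx;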
onyx=> \d+ am_position_revisions
Partitioned table "codex.am_position_revisions"
Columns: id, as_of_date, portfolio_uid, listing_uid, local_currency, n_shares, value_usd, spendable_accruals_usd, accruals_usd, variation_margin_usd, archive_timestamp, etl_timestamp
Partition key: RANGE (as_of_date)
Indexes:
    "am_position_revisions_pkey" PRIMARY KEY, btree (id, as_of_date)
    "am_position_revisions_listing_uid_aod_idx_when_valid" btree (listing_uid, as_of_date) INCLUDE (portfolio_uid) WHERE archive_timestamp IS NULL
    "am_position_revisions_portfolio_uid_aod_idx_when_valid" btree (portfolio_uid, as_of_date) INCLUDE (listing_uid) WHERE archive_timestamp IS NULL
    "am_position_revisions_portfolio_uid_as_of_date_idx" btree (portfolio_uid, as_of_date) WHERE archive_timestamp IS NULL
    "am_position_revisions_unq_when_valid" UNIQUE, btree (as_of_date, portfolio_uid, listing_uid, local_currency) WHERE archive_timestamp IS NULL
    "am_positions_as_of_date_portfolio_uid_idx" btree (as_of_date, portfolio_uid)
Check constraints:
    "am_position_revisions_check1" CHECK (n_shares IS NOT NULL OR as_of_date <= '2015-03-31'::date)
Foreign-key constraints:
    "am_position_revisions_as_of_date_fkey1" FOREIGN KEY (as_of_date) REFERENCES calendar_entities(calendar_date) ON UPDATE CASCADE ON DELETE CASCADE
    "am_position_revisions_listing_uid_fkey1" FOREIGN KEY (listing_uid) REFERENCES listing_entities(listing_uid) ON UPDATE CASCADE ON DELETE CASCADE
    "am_position_revisions_local_currency_fkey1" FOREIGN KEY (local_currency) REFERENCES currency_entities(iso_code) ON UPDATE CASCADE ON DELETE CASCADE
    "am_position_revisions_portfolio_uid_fkey1" FOREIGN KEY (portfolio_uid) REFERENCES am_portfolio_entities(portfolio_uid) ON UPDATE CASCADE ON DELETE CASCADE
Partitions:
    am_position_revisions_p2004 FOR VALUES FROM ('2004-01-01') TO ('2005-01-01'),
    am_position_revisions_p2005 FOR VALUES FROM ('2005-01-01') TO ('2006-01-01'),
    am_position_revisions_p2006 FOR VALUES FROM ('2006-01-01') TO ('2007-01-01'),
    am_position_revisions_p2007 FOR VALUES FROM ('2007-01-01') TO ('2008-01-01'),
    am_position_revisions_p2008 FOR VALUES FROM ('2008-01-01') TO ('2009-01-01'),
    am_position_revisions_p2009 FOR VALUES FROM ('2009-01-01') TO ('2010-01-01'),
    am_position_revisions_p2010 FOR VALUES FROM ('2010-01-01') TO ('2011-01-01'),
    am_position_revisions_p2011 FOR VALUES FROM ('2011-01-01') TO ('2012-01-01'),
    am_position_revisions_p2012 FOR VALUES FROM ('2012-01-01') TO ('2013-01-01'),
    am_position_revisions_p2013 FOR VALUES FROM ('2013-01-01') TO ('2014-01-01'),
    am_position_revisions_p2014 FOR VALUES FROM ('2014-01-01') TO ('2015-01-01'),
    am_position_revisions_p2015 FOR VALUES FROM ('2015-01-01') TO ('2016-01-01'),
    am_position_revisions_p2016 FOR VALUES FROM ('2016-01-01') TO ('2017-01-01'),
    am_position_revisions_p2017 FOR VALUES FROM ('2017-01-01') TO ('2018-01-01'),
    am_position_revisions_p2018 FOR VALUES FROM ('2018-01-01') TO ('2019-01-01'),
    am_position_revisions_p2019 FOR VALUES FROM ('2019-01-01') TO ('2020-01-01'),
    am_position_revisions_p2020 FOR VALUES FROM ('2020-01-01') TO ('2021-01-01'),
    am_position_revisions_p2021 FOR VALUES FROM ('2021-01-01') TO ('2022-01-01'),
    am_position_revisions_p2022 FOR VALUES FROM ('2022-01-01') TO ('2023-01-01'),
    am_position_revisions_p2023 FOR VALUES FROM ('2023-01-01') TO ('2024-01-01'),
    am_position_revisions_p2024 FOR VALUES FROM ('2024-01-01') TO ('2025-01-01'),
    am_position_revisions_p2025 FOR VALUES FROM ('2025-01-01') TO ('2026-01-01'),
    am_position_revisions_p2026 FOR VALUES FROM ('2026-01-01') TO ('2027-01-01'),
    am_position_revisions_p2027 FOR VALUES FROM ('2027-01-01') TO ('2028-01-01'),
    am_position_revisions_default DEFAULT
In PostgreSQL, an index defined on a partitioned table doesn't physically index any data itself; it acts as a template, and the real indexes live on the individual partitions (PostgreSQL creates and attaches them automatically unless you use CREATE INDEX ... ON ONLY). Here's why it's important for each partition to have its own indexes:
Direct Data Access: In a partitioned setup, each partition stores a subset of the data based on the partition criteria (the as_of_date ranges here). Indexes on individual partitions enable direct, fast access to relevant data within that subset, enhancing query performance.
Efficiency in Query Execution: When a query involves specific partitions, PostgreSQL will only scan those partitions rather than the entire table. Without indexes on each partition, even partition-specific queries would require full table or partition scans, which are slow and resource-intensive for large tables.
Parallel Processing and Maintenance: PostgreSQL can parallelize index maintenance (e.g., reindexing) at the partition level. This flexibility allows for optimized index creation and maintenance based on partition needs, which reduces the load on the system compared to a single large index across all data.
Partition-Pruning Support: With partitioned tables, PostgreSQL uses partition pruning to exclude irrelevant partitions from a query. Per-partition indexes complement this: once the irrelevant partitions are pruned, the remaining partitions can be searched through their indexes instead of being scanned in full.
Reduced Disk I/O and Memory Use: Smaller indexes on individual partitions generally consume less memory and require fewer I/O operations compared to one large index on the whole dataset, especially as partitions grow. This helps to keep index operations efficient and scalable.
Each partitioned index directly supports the PostgreSQL optimizer in finding data quickly, making per-partition indexes critical for effective performance in partitioned tables.
To optimize indexing for this partitioned table (am_position_revisions) and speed up the query, we'll focus on the main columns involved in joins, filters, and sorting: as_of_date, portfolio_uid, and listing_uid.
Here's the recommended indexing strategy with commands for each partition:
1. Primary Index on Join Columns: Since portfolio_uid and listing_uid are part of the join conditions with portfolio_breakdowns, indexing these columns should help reduce join costs.
2. Filter Index on archive_timestamp IS NULL: Add a partial index on archive_timestamp for each partition, covering only rows where archive_timestamp IS NULL, since the query excludes archived data.
3. Multi-Column Index for Grouping and Sorting: An index on (portfolio_uid, as_of_date, listing_uid) will help with grouping and sorting. Because the sort handles a large amount of data, a multi-column index on all three fields in each partition can improve performance.
For each partition of am_position_revisions (e.g., am_position_revisions_p2007, am_position_revisions_p2008, etc.), apply the following indexes, adjusting partition_name in each command:
-- 1. Index on join columns for each partition
CREATE INDEX ON codex.am_position_revisions_partition_name (portfolio_uid, listing_uid);
-- 2. Partial index for filtering based on archive_timestamp
CREATE INDEX ON codex.am_position_revisions_partition_name (archive_timestamp)
WHERE archive_timestamp IS NULL;
-- 3. Multi-column index for grouping and sorting
CREATE INDEX ON codex.am_position_revisions_partition_name (portfolio_uid, as_of_date, listing_uid);
Repeat these commands for each partition to ensure consistent indexing across all am_position_revisions partitions. This approach should help the query optimizer reduce join and sort costs significantly, making query execution faster.
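Rather than typing the command for every partition, one sketch is to generate the statements from the catalog and then run them one by one (CREATE INDEX CONCURRENTLY cannot run inside a transaction block):
SELECT format(
         'CREATE INDEX CONCURRENTLY ON %I.%I (portfolio_uid, as_of_date, listing_uid);',
         n.nspname, c.relname)
FROM pg_inherits i
JOIN pg_class c ON c.oid = i.inhrelid
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE i.inhparent = 'codex.am_position_revisions'::regclass;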
Creating indexes for partitions and then attaching them to a parent index serves specific purposes in PostgreSQL, especially in the context of partitioned tables. Here’s a brief explanation of why both steps are necessary:
When you create an index on a partitioned table with ON ONLY, no partition data is indexed yet; the actual indexes are built locally on each partition, so each one stays small and can be created independently.
After creating the local indexes on each partition, you attach them to the parent index so PostgreSQL can treat them as a single logical index on the whole table.
In your provided commands, you:
1. built each partition's index with CREATE INDEX CONCURRENTLY, allowing the database to continue processing other transactions while the indexes were built;
2. attached each one to the parent with ALTER INDEX ... ATTACH PARTITION, consolidating the indexing strategy under the parent index.
This approach maximizes the efficiency of data access and maintenance in a partitioned table structure, ensuring that individual partitions are optimized for their specific data while also benefiting from the organizational advantages of a parent index.
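As a quick sanity check after attaching, the parent index should report indisvalid = true once every partition has an attached index (the index name here is the example used further below):
SELECT c.relname, i.indisvalid
FROM pg_index i
JOIN pg_class c ON c.oid = i.indexrelid
WHERE c.relname = 'sqlt_data_pt_idx_tagid';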
The choice of indexes largely depends on your specific query patterns and how the data is accessed. Here’s a breakdown of the suggested indexes and their potential benefits:
1. Index on join columns:
CREATE INDEX ON codex.am_position_revisions_partition_name (portfolio_uid, listing_uid);
This index supports the joins on portfolio_uid and listing_uid, allowing for quick access to the relevant records in each partition.
2. Partial index on archive_timestamp:
CREATE INDEX ON codex.am_position_revisions_partition_name (archive_timestamp)
WHERE archive_timestamp IS NULL;
This index covers only the rows where archive_timestamp is NULL, which is exactly the filter the query applies.
3. Multi-column index for grouping and sorting:
CREATE INDEX ON codex.am_position_revisions_partition_name (portfolio_uid, as_of_date, listing_uid);
This index lets queries group and sort by portfolio_uid and as_of_date and then access listing_uid, and it can help improve the performance of queries that aggregate data by these fields.
If your queries primarily join on portfolio_uid and listing_uid, the first index is essential. If they group or sort by portfolio_uid, as_of_date, and listing_uid, the third multi-column index will provide the best performance. In most scenarios, a combination of these indexes provides the best results, as they target different aspects of your queries.
If you had to prioritize, base the choice on which of those access patterns dominates your workload.
After implementing these indexes, it's crucial to monitor query performance using EXPLAIN ANALYZE. This will help you determine whether the expected improvements are achieved and whether any additional indexes are necessary.
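One way to check whether the new indexes are actually used is the statistics view (idx_scan stays at 0 for indexes the planner never touches):
SELECT relname, indexrelname, idx_scan
FROM pg_stat_user_indexes
WHERE schemaname = 'codex'
ORDER BY idx_scan;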
INDEXING A PARTITIONED TABLE
-- Create an index on an individual partition
CREATE INDEX CONCURRENTLY sqlt_data_pt_sys9946980103_idx_tagid ON sqlt_data_pt_sys9946980103 (tagid);
-- Create the parent index on the partitioned table
CREATE INDEX sqlt_data_pt_idx_tagid ON ONLY sqlt_data_pt (tagid);
-- Attach the partition index to the parent index
ALTER INDEX sqlt_data_pt_idx_tagid ATTACH PARTITION sqlt_data_pt_sys9946980103_idx_tagid;
IDENTIFY ALL PARTITION INDEXES ATTACHED TO THE PARENT INDEX
SELECT parent_idx.relname AS parent_index,
       child_idx.relname  AS partition_index,
       part_table.relname AS partition_table
FROM pg_index AS parent_idx_data
JOIN pg_class AS parent_idx ON parent_idx.oid = parent_idx_data.indexrelid
JOIN pg_inherits AS inh ON inh.inhparent = parent_idx_data.indexrelid
JOIN pg_class AS child_idx ON child_idx.oid = inh.inhrelid
JOIN pg_index AS child_idx_data ON child_idx_data.indexrelid = child_idx.oid
JOIN pg_class AS part_table ON part_table.oid = child_idx_data.indrelid
WHERE parent_idx.relname = 'sqlt_data_pt_idx_tagid'; -- Replace with your parent index name
SELECT parent_idx.relname  AS parent_index,
       child_idx.relname   AS partition_index,
       child_table.relname AS partition_table
FROM pg_class parent_idx
JOIN pg_index parent_index ON parent_index.indexrelid = parent_idx.oid
JOIN pg_inherits inh ON inh.inhparent = parent_idx.oid
JOIN pg_class child_idx ON child_idx.oid = inh.inhrelid
JOIN pg_index child_index ON child_index.indexrelid = child_idx.oid
JOIN pg_class child_table ON child_table.oid = child_index.indrelid
WHERE parent_idx.relname = 'sqlt_data_pt_idx_tagid'; -- Replace with your parent index name
QUERY PLAN
GroupAggregate (cost=51072030.94..52037486.98 rows=5751653 width=180) (actual time=652121.695..1478877.123 rows=311317571 loops=1)
  Group Key: pb.portfolio_uid, cal.as_of_date, am_position_revisions.listing_uid
  ->  Gather Merge (cost=51072030.94..51778662.60 rows=5751653 width=43) (actual time=652121.661..787273.029 rows=592364637 loops=1)
    Workers Planned: 7
    Workers Launched: 7
    ->  Sort (cost=51071030.82..51073084.99 rows=821665 width=43) (actual time=290870.755..304738.848 rows=74045580 loops=8)
      Sort Key: pb.portfolio_uid, cal.as_of_date, am_position_revisions.listing_uid
      Sort Method: external merge  Disk: 3369032kB
      Worker 0: Sort Method: external merge  Disk: 2769872kB
      Worker 1: Sort Method: external merge  Disk: 2228112kB
      Worker 2: Sort Method: external merge  Disk: 2042960kB
      Worker 3: Sort Method: external merge  Disk: 14754000kB
      Worker 4: Sort Method: external merge  Disk: 3153688kB
      Worker 5: Sort Method: external merge  Disk: 2949096kB
      Worker 6: Sort Method: external merge  Disk: 2792616kB
      ->  Merge Join (cost=36515253.22..50990309.67 rows=821665 width=43) (actual time=115259.189..182258.497 rows=74045580 loops=8)
        Merge Cond: (am_position_revisions.portfolio_uid = pb.am_portfolio_uid)
        Join Filter: (am_position_revisions.as_of_date <@ pb.effective_during)
        Rows Removed by Join Filter: 136798504
        ->  Sort (cost=36417945.03..36465915.94 rows=19188364 width=47) (actual time=114956.625..118157.185 rows=18097395 loops=8)
          Sort Key: am_position_revisions.portfolio_uid
          Sort Method: external merge  Disk: 758280kB
          Worker 0: Sort Method: external merge  Disk: 738696kB
          Worker 1: Sort Method: external merge  Disk: 776120kB
          Worker 2: Sort Method: external merge  Disk: 741632kB
          Worker 3: Sort Method: external merge  Disk: 3521168kB
          Worker 4: Sort Method: external merge  Disk: 738592kB
          Worker 5: Sort Method: external merge  Disk: 829600kB
          Worker 6: Sort Method: external merge  Disk: 743000kB
          ->  Hash Join (cost=9.52..33734161.50 rows=19188364 width=47) (actual time=2680.032..96716.910 rows=18097395 loops=8)
            Hash Cond: (am_position_revisions.as_of_date = cal.as_of_date)
            ->  Parallel Append (cost=0.00..32937554.82 rows=303222107 width=43) (actual time=0.420..74214.076 rows=265291575 loops=8)
              ->  Parallel Seq Scan on am_position_revisions_p2007 am_position_revisions_4 (cost=0.00..2060273.97 rows=19967097 width=42) (actual time=1.184..27682.958 rows=139759388 loops=1)
                Filter: (archive_timestamp IS NULL)
              ->  Parallel Seq Scan on am_position_revisions_p2005 am_position_revisions_2 (cost=0.00..1985884.69 rows=19260969 width=42) (actual time=0.761..26596.040 rows=134823174 loops=1)
                Filter: (archive_timestamp IS NULL)
              ->  Parallel Seq Scan on am_position_revisions_p2006 am_position_revisions_3 (cost=0.00..1956939.06 rows=18976906 width=42) (actual time=0.012..26182.144 rows=132842780 loops=1)
                Filter: (archive_timestamp IS NULL)
              ->  Parallel Seq Scan on am_position_revisions_p2008 am_position_revisions_5 (cost=0.00..1947924.92 rows=18907192 width=43) (actual time=0.010..26364.872 rows=132371065 loops=1)
                Filter: (archive_timestamp IS NULL)
              ->  Parallel Seq Scan on am_position_revisions_p2023 am_position_revisions_20 (cost=0.00..1829544.05 rows=17470805 width=44) (actual time=0.010..25058.555 rows=122332025 loops=1)
                Filter: (archive_timestamp IS NULL)
                Rows Removed by Filter: 112
              ->  Parallel Seq Scan on am_position_revisions_p2024 am_position_revisions_21 (cost=0.00..1721490.05 rows=15920782 width=44) (actual time=0.012..24407.512 rows=111189759 loops=1)
                Filter: (archive_timestamp IS NULL)
                Rows Removed by Filter: 3660343
              ->  Parallel Seq Scan on am_position_revisions_p2009 am_position_revisions_6 (cost=0.00..1647335.06 rows=16055606 width=42) (actual time=0.010..22160.399 rows=112377254 loops=1)
                Filter: (archive_timestamp IS NULL)
              ->  Parallel Seq Scan on am_position_revisions_p2004 am_position_revisions_1 (cost=0.00..1559262.11 rows=15096611 width=44) (actual time=1.198..22711.673 rows=105678865 loops=1)
                Filter: (archive_timestamp IS NULL)
              ->  Parallel Seq Scan on am_position_revisions_p2010 am_position_revisions_7 (cost=0.00..1539536.31 rows=14966331 width=42) (actual time=0.791..22021.808 rows=104773689 loops=1)
                Filter: (archive_timestamp IS NULL)
              ->  Parallel Seq Scan on am_position_revisions_p2021 am_position_revisions_18 (cost=0.00..1488974.02 rows=14285102 width=44) (actual time=0.015..19790.290 rows=99992077 loops=1)
                Filter: (archive_timestamp IS NULL)
              ->  Parallel Seq Scan on am_position_revisions_p2022 am_position_revisions_19 (cost=0.00..1470373.01 rows=14051101 width=44) (actual time=0.006..19595.909 rows=98365822 loops=1)
                Filter: (archive_timestamp IS NULL)
                Rows Removed by Filter: 20
              ->  Parallel Seq Scan on am_position_revisions_p2011 am_position_revisions_8 (cost=0.00..1467212.73 rows=14227173 width=43) (actual time=0.009..19788.778 rows=99587785 loops=1)
                Filter: (archive_timestamp IS NULL)
              ->  Parallel Seq Scan on am_position_revisions_p2020 am_position_revisions_17 (cost=0.00..1390468.55 rows=13424955 width=42) (actual time=0.006..9468.738 rows=46978620 loops=2)
                Filter: (archive_timestamp IS NULL)
              ->  Parallel Seq Scan on am_position_revisions_p2012 am_position_revisions_9 (cost=0.00..1295685.70 rows=12545970 width=43) (actual time=0.450..9038.786 rows=43911946 loops=2)
                Filter: (archive_timestamp IS NULL)
              ->  Parallel Seq Scan on am_position_revisions_p2018 am_position_revisions_15 (cost=0.00..1294717.78 rows=12545278 width=42) (actual time=0.275..5859.203 rows=29273184 loops=3)
                Filter: (archive_timestamp IS NULL)
              ->  Parallel Seq Scan on am_position_revisions_p2019 am_position_revisions_16 (cost=0.00..1284976.35 rows=12414535 width=42) (actual time=0.005..2203.426 rows=10864182 loops=8)
                Filter: (archive_timestamp IS NULL)
              ->  Parallel Seq Scan on am_position_revisions_p2017 am_position_revisions_14 (cost=0.00..1152376.73 rows=11191173 width=42) (actual time=0.005..3874.176 rows=19582838 loops=4)
                Filter: (archive_timestamp IS NULL)
              ->  Parallel Seq Scan on am_position_revisions_p2013 am_position_revisions_10 (cost=0.00..1147162.10 rows=11097710 width=43) (actual time=0.355..7657.086 rows=38841746 loops=2)
                Filter: (archive_timestamp IS NULL)
              ->  Parallel Seq Scan on am_position_revisions_p2016 am_position_revisions_13 (cost=0.00..1115312.85 rows=10813685 width=42) (actual time=0.005..14874.692 rows=75689605 loops=1)
                Filter: (archive_timestamp IS NULL)
              ->  Parallel Seq Scan on am_position_revisions_p2014 am_position_revisions_11 (cost=0.00..1070447.48 rows=10345448 width=43) (actual time=0.854..14467.716 rows=72430645 loops=1)
                Filter: (archive_timestamp IS NULL)
              ->  Parallel Seq Scan on am_position_revisions_p2015 am_position_revisions_12 (cost=0.00..995546.78 rows=9657678 width=42) (actual time=1.356..13222.799 rows=67589685 loops=1)
                Filter: (archive_timestamp IS NULL)
              ->  Parallel Seq Scan on am_position_revisions_p2025 am_position_revisions_22 (cost=0.00..0.00 rows=1 width=140) (actual time=0.000..0.000 rows=0 loops=1)
                Filter: (archive_timestamp IS NULL)
              ->  Parallel Seq Scan on am_position_revisions_p2026 am_position_revisions_23 (cost=0.00..0.00 rows=1 width=140) (actual time=0.000..0.000 rows=0 loops=1)
                Filter: (archive_timestamp IS NULL)
              ->  Parallel Seq Scan on am_position_revisions_p2027 am_position_revisions_24 (cost=0.00..0.00 rows=1 width=140) (actual time=0.000..0.000 rows=0 loops=1)
                Filter: (archive_timestamp IS NULL)
              ->  Parallel Seq Scan on am_position_revisions_default am_position_revisions_25 (cost=0.00..0.00 rows=1 width=140) (actual time=0.001..0.001 rows=0 loops=1)
                Filter: (archive_timestamp IS NULL)
            ->  Hash (cost=5.34..5.34 rows=334 width=4) (actual time=0.142..0.142 rows=334 loops=8)
              Buckets: 1024  Batches: 1  Memory Usage: 20kB
              ->  Seq Scan on rollup_dates cal (cost=0.00..5.34 rows=334 width=4) (actual time=0.062..0.086 rows=334 loops=8)
        ->  Sort (cost=97299.40..99343.64 rows=817694 width=29) (actual time=299.175..11703.134 rows=211066143 loops=8)
          Sort Key: pb.am_portfolio_uid
          Sort Method: quicksort  Memory: 69294kB
          Worker 0: Sort Method: quicksort  Memory: 69294kB
          Worker 1: Sort Method: quicksort  Memory: 69294kB
          Worker 2: Sort Method: quicksort  Memory: 69294kB
          Worker 3: Sort Method: quicksort  Memory: 69294kB
          Worker 4: Sort Method: quicksort  Memory: 69294kB
          Worker 5: Sort Method: quicksort  Memory: 69294kB
          Worker 6: Sort Method: quicksort  Memory: 69294kB
          ->  Seq Scan on portfolio_breakdowns pb (cost=0.00..16996.94 rows=817694 width=29) (actual time=0.060..125.964 rows=817694 loops=8)
Planning Time: 15.487 ms
Execution Time: 1493299.361 ms