timescale / timescaledb

An open-source time-series SQL database optimized for fast ingest and complex queries. Packaged as a PostgreSQL extension.
https://www.timescale.com/

Continuous Aggregate Performance #2841

Open zeroallox opened 3 years ago

zeroallox commented 3 years ago

Relevant system information:

Hello,

We have been experiencing performance issues with continuous aggregates, during both initial generation and subsequent queries on the materialized view.

The CAGG was created with the following:

CREATE MATERIALIZED VIEW market_data.cagg_equity_trades_min
    WITH (timescaledb.continuous)
AS
SELECT time_bucket('60000000000'::bigint, src_timestamp) AS ns_window,
       ticker,
       first(price, src_timestamp)                       AS open,
       max(price)                                        AS high,
       min(price)                                        AS low,
       last(price, src_timestamp)                        AS close,
       sum(volume)                                       AS volume,
       count(*)                                          AS count
FROM market_data.equity_trades
WHERE NOT array[1001]::bigint[] && flags
GROUP BY ns_window, ticker
WITH NO DATA;
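Since the view is created `WITH NO DATA`, it holds nothing until an explicit refresh. A hedged sketch of how that first materialization might look (the bounds are illustrative nanoseconds-since-epoch values, chosen because `src_timestamp` is a bigint column):

```sql
-- Materialize a window of the aggregate. Bounds are nanoseconds since epoch
-- here because the hypertable uses a bigint time column (values illustrative).
CALL refresh_continuous_aggregate(
    'market_data.cagg_equity_trades_min',
    1577836800000000000::bigint,   -- 2020-01-01 00:00:00 UTC
    1609459200000000000::bigint    -- 2021-01-01 00:00:00 UTC
);
```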

I would be more than happy to provide access to the current database or set up a new one for troubleshooting.

Thanks for your time!

erimatnor commented 3 years ago

There's a known performance issue when querying continuous aggregates which will be fixed in the next release (2.0.1). PR with the fix is here: https://github.com/timescale/timescaledb/pull/2828. This sounds like a duplicate of that. Can you try to disable real-time aggregation to see if that is a work-around for you? Should be possible with something like:

ALTER MATERIALIZED VIEW market_data.cagg_equity_trades_min SET (timescaledb.materialized_only=true);
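If real-time aggregation turns out to be the culprit, the setting can be reversed later; a quick sanity check is to compare query plans before and after the change (a sketch using the view name from this thread):

```sql
-- Check whether the plan still unions in not-yet-materialized raw data:
EXPLAIN SELECT * FROM market_data.cagg_equity_trades_min LIMIT 1;

-- Re-enable real-time aggregation once the fix is released:
ALTER MATERIALIZED VIEW market_data.cagg_equity_trades_min
    SET (timescaledb.materialized_only = false);
```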

W.r.t. multiple refresh jobs on a single continuous aggregate: we don't support that right now, but might in the future. The same goes for parallel/multi-core support.

NunoFilipeSantos commented 3 years ago

Hi @zeroallox, did you have a chance to try @erimatnor's suggestion?

zeroallox commented 3 years ago

> Hi @zeroallox, did you have a chance to try @erimatnor's suggestion?

Yes, and it had no effect. I have even dropped the entire database and recreated everything from scratch, without setting up any scheduling or automatic CAGG jobs. When I ran the job manually, the generation happened, albeit very slowly.

I let it run for about a day, restarted the DB, and still had slow queries. Again running them manually (building the data set on the fly) was fine.

(Excuse the late reply and thank you for your time!)

Kazmirchuk commented 3 years ago

I had a similar issue with CAGG performance: the initial aggregation job was processing only ~5k rows/sec, which is unacceptably slow and basically makes CAGGs unusable for any realistic dataset (I have 1.7 billion rows in my time series). Meanwhile, creating a classic PostgreSQL materialized view took only ~15 minutes, and I have only 8 vCPUs, not 32 like the topic starter. If there is a way to convert a PG materialized view with data into a CAGG, please let me know.

zeroallox commented 3 years ago

Just upgraded to the new version today.

Docker image name: timescale/timescaledb:2.0.1-pg12

Same issue :(

I ran the CAGG job for about an hour, then deleted it. When I attempt to select even a single row, the query takes over 20 minutes. No other tasks are running; the DB is completely idle.

zeroallox commented 3 years ago

Giving TSDB another whirl after a few months using v2.4.0 in Docker image timescale/timescaledb:latest-pg13.

There does not appear to be any improvement: The CAGG generation only utilizes a single core of a 32 core system. :(

Have there been any updates or internal discussions about multi-threaded support for CAGG generation?

mfundul commented 3 years ago

There have been internal discussions about optimizations for CAGGs, but there are no updates as of yet. It is likely that the upcoming changes for CAGGs will break compatibility. Also, you should not expect linear scaling, since the continuous aggregate materialization process is not an embarrassingly parallel workload.

zeroallox commented 3 years ago

Understood!

What do you mean by "break compatibility"? Will one need to regenerate the CAGG MVs? Or is it just that the user-facing function calls/params are changing?

NP either way, just want to keep an eye out.

In any case, thank you so much for taking the time to get back to me.

mfundul commented 3 years ago

It would probably mean that there would be a new version of CAGGs, e.g. CAGG v2. You would probably be able to use v2 for new data, and you would need to migrate from v1 to v2 for existing CAGGs, to take advantage of new features, with the help of a tool.

bendoerr commented 2 years ago

Would absolutely love to see performance improvements for the refresh job. For example, say we have a refresh that covers 3 days: a simple aggregate with a sum, count, avg, min, and max. If we run the refresh for the whole period, it takes over 3.5 hours. But if we run 3 separate refreshes, one for each day, we can complete the whole period in about an hour, with each refresh running for 20 minutes. This is unintuitive.
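Until the refresh job itself is improved, the day-by-day workaround described above can be scripted directly; a hedged sketch with a placeholder CAGG name and illustrative dates (`refresh_continuous_aggregate` is a procedure that manages its own transactions, so each CALL must run as a separate top-level statement, e.g. from psql):

```sql
-- Refresh a 3-day window in three single-day chunks instead of one call.
-- 'my_cagg' and the dates are placeholders.
CALL refresh_continuous_aggregate('my_cagg', '2023-01-01', '2023-01-02');
CALL refresh_continuous_aggregate('my_cagg', '2023-01-02', '2023-01-03');
CALL refresh_continuous_aggregate('my_cagg', '2023-01-03', '2023-01-04');
```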

yaitskov commented 2 years ago

Hi. I experimented with the latest TimescaleDB (Docker image 2.6) and noticed slow performance on the weather_metrics demo DB. The laptop is decent (Core i7 and 16 GB RAM).

# CREATE TABLE IF NOT EXISTS weather_metrics (
   time TIMESTAMP WITHOUT TIME ZONE NOT NULL,
   timezone_shift int NULL,
   city_name text NULL,
   temp_c double precision NULL,
   feels_like_c double precision NULL,
   temp_min_c double precision NULL,
   temp_max_c double precision NULL,
   pressure_hpa double precision NULL,
   humidity_percent double precision NULL,
   wind_speed_ms double precision NULL,
   wind_deg int NULL,
   rain_1h_mm double precision NULL,
   rain_3h_mm double precision NULL,
   snow_1h_mm double precision NULL,
   snow_3h_mm double precision NULL,
   clouds_percent int NULL,
   weather_type_id int NULL
);
# select create_hypertable('weather_metrics', 'time');
# \copy weather_metrics (time, timezone_shift, city_name, temp_c, feels_like_c, temp_min_c, temp_max_c, pressure_hpa, humidity_percent, wind_speed_ms, wind_deg, rain_1h_mm, rain_3h_mm, snow_1h_mm, snow_3h_mm, clouds_percent, weather_type_id) from './weather_data.csv' CSV HEADER;
# create materialized view wmd with (timescaledb.continuous) as
    select time_bucket('1 day', time) as bucket,
           city_name,
           avg(temp_c) as avg_temp,
           avg(feels_like_c) as feels_like_temp,
           max(temp_c) as max_temp,
           min(temp_c) as min_temp,
           avg(pressure_hpa) as pressure,
           avg(humidity_percent) as humidity_percent,
           avg(rain_3h_mm) as rain_3h,
           avg(snow_3h_mm) as snow_3h,
           avg(wind_speed_ms) as wind_speed,
           avg(clouds_percent) as clouds
    from weather_metrics
    group by bucket, city_name
    with no data;

# select * from wmd where bucket between (now() - interval '4 year 1 day') and (now() - interval '4 year') and city_name = 'Lisbon' ;
       bucket        | city_name |      avg_temp      |  feels_like_temp   | max_temp | min_temp | pressure | humidity_percent | rain_3h | snow_3h |    wind_speed     |       clouds        
---------------------+-----------+--------------------+--------------------+----------+----------+----------+------------------+---------+---------+-------------------+---------------------
 2018-05-03 00:00:00 | Lisbon    | 14.709999999999999 | 10.894999999999998 |    18.96 |    11.53 |  1017.25 |           68.375 |         |         | 5.083333333333333 | 20.7916666666666667
(1 row)

Time: 11644.851 ms (00:11.645)

# SELECT extversion FROM pg_extension where extname = 'timescaledb';
 extversion 
------------
 2.6.1

# select version();
                                                             version                                                             
---------------------------------------------------------------------------------------------------------------------------------
 PostgreSQL 14.2 (Ubuntu 14.2-1.pgdg22.04+1+b1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 11.2.0-16ubuntu1) 11.2.0, 64-bit

# select count(*) from weather_metrics;
  count  
---------
 3794998
Time: 401.276 ms

# select count(*) from wmd;
 count  
--------
 154580
(1 row)

Time: 6916.436 ms (00:06.916)

#  select * from weather_metrics where time between (now() - interval '4 year 1 day') and (now() - interval '4 year') and city_name = 'Lisbon' ;
# Time: 251.027 ms

Querying the hypertable is 47 times faster than querying the continuous aggregate view, even though the aggregate table is 24 times smaller!

yaitskov commented 2 years ago

A materialized view created WITH DATA performs better, but the result is still not great: 1 second to fetch 1 row from a local server:

# select * from wmd2 where bucket between (now() - interval '4 year 1 day') and (now() - interval '4 year') and city_name = 'Lisbon' ;
       bucket        | city_name | avg_temp |  feels_like_temp   | max_temp | min_temp | pressure | humidity_percent | rain_3h | snow_3h |    wind_speed     |       clouds        
---------------------+-----------+----------+--------------------+----------+----------+----------+------------------+---------+---------+-------------------+---------------------
 2018-05-03 00:00:00 | Lisbon    |    14.71 | 10.895000000000001 |    18.96 |    11.53 |  1017.25 |           68.375 |         |         | 5.083333333333333 | 20.7916666666666667
(1 row)

Time: 955.893 ms

I think WITH NO DATA cancels the materialized view's benefit.
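If that is the case, materializing the whole range first should restore the benefit; a hedged sketch using the view name from this thread (passing NULL bounds refreshes the entire range):

```sql
-- Materialize the full time range of the CAGG, then serve queries from
-- the materialized data only:
CALL refresh_continuous_aggregate('wmd', NULL, NULL);
ALTER MATERIALIZED VIEW wmd SET (timescaledb.materialized_only = true);
```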

The aggregated view copied into a regular table is 10 times faster.

# create table wmd23 as select * from wmd;
#  select * from wmd23 where bucket between (now() - interval '4 year 1 day') and (now() - interval '4 year') and city_name = 'Lisbon' ;
       bucket        | city_name | avg_temp |  feels_like_temp   | max_temp | min_temp | pressure | humidity_percent | rain_3h | snow_3h |    wind_speed     |       clouds        
---------------------+-----------+----------+--------------------+----------+----------+----------+------------------+---------+---------+-------------------+---------------------
 2018-05-03 00:00:00 | Lisbon    |    14.71 | 10.895000000000001 |    18.96 |    11.53 |  1017.25 |           68.375 |         |         | 5.083333333333333 | 20.7916666666666667
(1 row)

Time: 39.381 ms

Turning that table into a hypertable slows it down again:

# select create_hypertable('wmd23', 'bucket', migrate_data => true);
#  select * from wmd23 where bucket between (now() - interval '4 year 1 day') and (now() - interval '4 year') and city_name = 'Lisbon' ;
       bucket        | city_name | avg_temp |  feels_like_temp   | max_temp | min_temp | pressure | humidity_percent | rain_3h | snow_3h |    wind_speed     |       clouds        
---------------------+-----------+----------+--------------------+----------+----------+----------+------------------+---------+---------+-------------------+---------------------
 2018-05-03 00:00:00 | Lisbon    |    14.71 | 10.895000000000001 |    18.96 |    11.53 |  1017.25 |           68.375 |         |         | 5.083333333333333 | 20.7916666666666667
(1 row)

Time: 633.680 ms

A regular index on the copied aggregate table (without making it a hypertable) makes the same query super fast:

# create index wmd_3_bucket_btree on wmd_3 (bucket);
#  select * from wmd_3 where bucket between (now() - interval '4 year 1 day') and (now() - interval '4 year') and city_name = 'Lisbon' ;
       bucket        | city_name |      avg_temp      |  feels_like_temp   | max_temp | min_temp | pressure | humidity_percent | rain_3h | snow_3h |    wind_speed     |       clouds        
---------------------+-----------+--------------------+--------------------+----------+----------+----------+------------------+---------+---------+-------------------+---------------------
 2018-05-03 00:00:00 | Lisbon    | 14.709999999999999 | 10.894999999999998 |    18.96 |    11.53 |  1017.25 |           68.375 |         |         | 5.083333333333333 | 20.7916666666666667
(1 row)

Time: 1.659 ms
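Since the query filters on both city_name and bucket, a composite index matching both predicates might do even better; a hedged sketch (same illustrative table name as above):

```sql
-- Equality column first, range column second:
CREATE INDEX wmd_3_city_bucket ON wmd_3 (city_name, bucket);
```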

jfjoly commented 2 years ago

Thanks for the detailed experiments!

Performance of continuous aggregates has been improved in TimescaleDB 2.7. You may want to test again with the latest version.

zeroallox commented 2 years ago

Results feel about the same as before. Running the raw query and doing the aggregation on the fly is many times faster than using the materialized view.

mfreed commented 1 year ago

@zeroallox Can you post any of the results from one of the latest versions of TimescaleDB? (And please make sure you actually upgrade/migrate your cagg to the new format, if you aren't creating from scratch.)

https://docs.timescale.com/timescaledb/latest/how-to-guides/continuous-aggregates/migrate/

jvanns commented 1 year ago

I'm assuming then that CAGGs don't take advantage of the parallel worker refreshes of MVIEWs that PG14 introduced? https://www.postgresql.org/about/news/postgresql-14-beta-1-released-2213/
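For comparison, a plain PostgreSQL materialized view refresh can use parallel query on PG14+, subject to the usual planner settings; a hedged sketch with a placeholder view name:

```sql
-- Plain (non-continuous) materialized view refresh; the underlying
-- aggregation may be parallelized by the planner on PG14+:
SET max_parallel_workers_per_gather = 4;
REFRESH MATERIALIZED VIEW my_plain_mv;
```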

jvanns commented 1 year ago

I'm getting increasingly concerned that all these 'slow' CAGG issues are related, somehow! Issue #6286 smells the same as the performance degradation raised here. I too have written alternatives to CAGGs using plain ol' aggregation jobs into hypertables rather than CAGGs and their performance is an order of magnitude better! And this is in late 2023 - a year since this ticket was last discussed.
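The "plain ol' aggregation job" alternative mentioned above can be approximated with a scheduled upsert into an ordinary table (or hypertable); a hedged sketch mirroring the weather example, with illustrative names:

```sql
-- Destination table keyed on (bucket, city_name); the ON CONFLICT upsert
-- makes the job safely re-runnable over overlapping windows.
INSERT INTO weather_1day (bucket, city_name, avg_temp, max_temp, min_temp)
SELECT time_bucket('1 day', time), city_name,
       avg(temp_c), max(temp_c), min(temp_c)
FROM weather_metrics
WHERE time >= now() - interval '2 days'
GROUP BY 1, 2
ON CONFLICT (bucket, city_name) DO UPDATE
    SET avg_temp = EXCLUDED.avg_temp,
        max_temp = EXCLUDED.max_temp,
        min_temp = EXCLUDED.min_temp;
```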

jnidzwetzki commented 11 months ago

Hello @jvanns,

We are working on improving CAggs with every version of TimescaleDB. However, we are aware that the current implementation has areas for improvement regarding performance for certain configurations and workloads.

This issue is about using parallel workers for the refresh. #6286 looks more like a problem with missing chunk exclusion when no/certain order by values are used in the CAgg definition. We are currently working on improving the planning times for large real-time CAggs. Additionally, we will address other performance-related issues in the upcoming quarters.

Best regards Jan