timescale / timescaledb

An open-source time-series SQL database optimized for fast ingest and complex queries. Packaged as a PostgreSQL extension.
https://www.timescale.com/

Longer chunk_time_intervals are capped at 497,664,000,000 milliseconds #2747

Open tylerfontaine opened 3 years ago

tylerfontaine commented 3 years ago

Relevant system information:

Describe the bug

When creating hypertables with a chunk_time_interval greater than 497,664,000,000 milliseconds (about 15 years and 8 months), the interval is silently capped at 497,664,000,000 milliseconds.

To Reproduce

-- create a plain table and convert it to a hypertable with a 16-year chunk interval
create table testinterval (time timestamp, val int);

select create_hypertable('testinterval', 'time', chunk_time_interval => interval '16 years');

-- insert rows far enough apart to land in separate chunks
insert into testinterval values
  ('1995-12-28', 12),
  ('1942-04-01', 3),
  ('2005-02-01', 4);

select * from chunk_relation_size_pretty('testinterval');

Results:

postgres=# select ranges from chunk_relation_size_pretty('testinterval');
                         ranges
---------------------------------------------------------
 {"['1985-10-08 17:00:00-07','2001-07-16 17:00:00-07')"}
 {"['1938-06-17 17:00:00-07','1954-03-25 17:00:00-07')"}
 {"['2001-07-16 17:00:00-07','2017-04-23 17:00:00-07')"}
(3 rows)
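As a cross-check, the configured interval can also be read back from the catalog. This is a sketch assuming a TimescaleDB 2.x installation, where the `timescaledb_information.dimensions` view exposes the stored `time_interval` for each hypertable dimension:

-- assumes TimescaleDB 2.x; view and column names from the informational schema
select hypertable_name, column_name, time_interval
from timescaledb_information.dimensions
where hypertable_name = 'testinterval';

If the cap applies, the reported time_interval will be less than the requested 16 years.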

Expected behavior

Chunks spanning the full 16-year interval would be created.

Actual behavior

Chunks of slightly less than 16 years are created. Substituting any larger interval (e.g. 26 years) produces chunks of the same capped size.
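The chunk ranges above are consistent with the reported cap. A small sketch (plain Python, arithmetic only on the numbers in this report) showing that 497,664,000,000 ms is exactly 5,760 days, which is also the width of each chunk in the output:

```python
from datetime import date

CAP_MS = 497_664_000_000        # reported cap, in milliseconds
MS_PER_DAY = 86_400_000         # 24 * 60 * 60 * 1000

cap_days = CAP_MS / MS_PER_DAY  # exact number of days in the cap
cap_years = cap_days / 365.25   # roughly 15 years and 8 months

# width of the first chunk from the report: 1985-10-08 .. 2001-07-16
chunk_width = (date(2001, 7, 16) - date(1985, 10, 8)).days

print(cap_days, round(cap_years, 2), chunk_width)  # → 5760.0 15.77 5760
```

So the cap lines up exactly with the observed chunk boundaries, regardless of the interval requested.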

NunoFilipeSantos commented 3 years ago

@tylerfontaine for prioritisation efforts: Is this a customer or something you experimented with?

tylerfontaine commented 3 years ago

@NunoFilipeSantos I discovered it in the course of working with a customer, but it isn't preventing any significant work. It was just unexpected behavior.