phemmer opened this issue 3 years ago
Ok, so this issue is actually much worse than originally described, the blocking also affects normal read queries on the table.
Looks like #2669 might be related to this. Though that one is for decompress, not compress.
I'm also having this issue when using the backfill script on a large data set. The compress/decompress operations lock read queries on the hypertable that contains the compressed chunks.
Currently running timescale 2.4.2 on Postgres 13.
Is there a way to prevent this lock on read queries somehow?
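One way to see exactly which sessions are blocked and by whom while the compression runs is to query the standard PostgreSQL catalog views. This is a generic diagnostic sketch, not TimescaleDB-specific; run it from a separate session while the blocking is happening:

```sql
-- List sessions that are currently waiting on a lock, along with the
-- PIDs of the sessions blocking them (standard PostgreSQL functions).
SELECT pid,
       pg_blocking_pids(pid) AS blocked_by,
       wait_event_type,
       state,
       query
FROM pg_stat_activity
WHERE cardinality(pg_blocking_pids(pid)) > 0;
```

The `blocked_by` column should point at the backend running `compress_chunk()` if compression is indeed the culprit.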
I'm having this issue when my policy-defined compression job runs. It blocks all writes to the table until the job is done, even though the chunk being compressed is an old one that is not being written to. This makes compression unusable for high-throughput workloads...
I am experiencing the same thing. How is compression deployed in the real world? Is concurrently compressing a chunk and being able to read from the table a paid feature?
At this point, I'd recommend Timescale change their official recommendation, and the default (7 days), for chunk size. A few reasons for this.
With `compress_chunk_time_interval`, multiple smaller chunks can be rolled into larger ones. So it seems like the recommended/default chunk size was established back when Timescale was new, and has not kept up with the state of things.
And honestly for me, due to all of the above, I'm considering shrinking our chunks down to 15 minutes.
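The approach described above (small chunks, rolled together at compression time) can be sketched like this. Hedged example: `metrics` is a hypothetical hypertable name, and `compress_chunk_time_interval` requires TimescaleDB 2.9 or later:

```sql
-- Shrink new chunks to 15 minutes ('metrics' is illustrative)...
SELECT set_chunk_time_interval('metrics', INTERVAL '15 minutes');

-- ...and let compression merge many small chunks into one
-- compressed chunk covering 24 hours (TimescaleDB 2.9+).
ALTER TABLE metrics SET (
  timescaledb.compress,
  timescaledb.compress_chunk_time_interval = '24 hours'
);
```

The idea is that only a tiny 15-minute chunk is ever locked for compression, keeping the blocking window short, while the compressed storage still ends up in day-sized chunks.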
**Relevant system information:**
- PostgreSQL version (`postgres --version`): postgres (PostgreSQL) 12.4 (Debian 12.4-1.pgdg90+1)
- TimescaleDB version (`\dx` in `psql`): timescaledb | 2.0.0-rc4

**Describe the bug**
When compressing a chunk with `compress_chunk()`, after a certain point in the process, any queries using `chunks_detailed_size()` will block (possibly other metadata information queries as well, not sure).

**To Reproduce**
Steps to reproduce the behavior:
1. `select compress_chunk('mychunk');`
2. `select * from chunks_detailed_size('mytable');`
**Expected behavior**
Successful response in a short amount of time.
**Actual behavior**
On the data node: ...eventually timing out due to `statement_timeout` (in my case, 5 minutes).

**Additional context**
While I doubt it would be possible to not block at all, I think the blocking time should be reduced to a few seconds at most.
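As a stopgap until the blocking itself is reduced, callers can fail fast instead of queueing for minutes behind compression's locks by using the standard PostgreSQL `lock_timeout` setting (a per-session sketch; `mytable` is the hypertable from the repro above):

```sql
-- Abort any statement that waits on a lock for more than 5 seconds,
-- rather than queueing until statement_timeout fires (standard
-- PostgreSQL setting, applied per-session here).
SET lock_timeout = '5s';
SELECT * FROM chunks_detailed_size('mytable');
```

This doesn't make the metadata query succeed during compression, but it turns a 5-minute hang into a quick, retryable error.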