Closed JamesGuthrie closed 1 month ago
What type of bug is this?

Crash

What subsystems and features are affected?

Compression

What happened?

Trying to compress a chunk belonging to a hypertable with only closed (hash) dimensions results in a segfault.

I expect the chunk either to be compressed, if possible, or an error to be raised, if not.

TimescaleDB version affected

2.15.1

PostgreSQL version used

16.3

What operating system did you use?

Docker on macOS

What installation method did you use?

Docker

What platform did you run on?

Not applicable

Relevant log output and stack trace

```
(gdb) bt
#0  find_chunk_to_merge_into (current_chunk=0xaaaae50e28c8, ht=0xaaaae4fff8d8) at /build/timescaledb/tsl/src/compression/api.c:261
#1  compress_chunk_impl (hypertable_relid=<optimized out>, chunk_relid=119419) at /build/timescaledb/tsl/src/compression/api.c:415
#2  0x0000ffff26aa787c in tsl_compress_chunk_wrapper (chunk=<optimized out>, if_not_compressed=if_not_compressed@entry=true, recompress=recompress@entry=false) at /build/timescaledb/tsl/src/compression/api.c:746
#3  0x0000ffff26aa8cb4 in tsl_compress_chunk (fcinfo=0xaaaae551c328) at /build/timescaledb/tsl/src/compression/api.c:700
#4  0x0000aaaad5d67624 in ExecInterpExpr (state=0xaaaae551be58, econtext=0xaaaae50e5dd0, isnull=<optimized out>) at executor/./build/../src/backend/executor/execExprInterp.c:758
#5  0x0000aaaad5da4e48 in ExecEvalExprSwitchContext (isNull=0xffffd09f0907, econtext=0xaaaae50e5dd0, state=<optimized out>) at executor/./build/../src/include/executor/executor.h:355
#6  ExecProject (projInfo=<optimized out>) at executor/./build/../src/include/executor/executor.h:389
#7  ExecResult (pstate=<optimized out>) at executor/./build/../src/backend/executor/nodeResult.c:136
#8  0x0000aaaad5d708c4 in ExecProcNode (node=0xaaaae50e5cc8) at executor/./build/../src/include/executor/executor.h:273
#9  ExecutePlan (execute_once=<optimized out>, dest=0xaaaae4d35680, direction=<optimized out>, numberTuples=0, sendTuples=<optimized out>, operation=CMD_SELECT, use_parallel_mode=<optimized out>, planstate=0xaaaae50e5cc8, estate=0xaaaae50e58b0) at executor/./build/../src/backend/executor/execMain.c:1670
#10 standard_ExecutorRun (queryDesc=0xaaaae50cfcf8, direction=<optimized out>, count=0, execute_once=<optimized out>) at executor/./build/../src/backend/executor/execMain.c:365
#11 0x0000aaaad5f5a9dc in ExecutorRun (execute_once=<optimized out>, count=0, direction=ForwardScanDirection, queryDesc=0xaaaae50cfcf8) at executor/./build/../src/backend/executor/execMain.c:309
#12 PortalRunSelect (portal=portal@entry=0xaaaae4db7390, forward=forward@entry=true, count=0, count@entry=9223372036854775807, dest=dest@entry=0xaaaae4d35680) at tcop/./build/../src/backend/tcop/pquery.c:924
#13 0x0000aaaad5f5c408 in PortalRun (portal=portal@entry=0xaaaae4db7390, count=count@entry=9223372036854775807, isTopLevel=isTopLevel@entry=true, run_once=<optimized out>, dest=dest@entry=0xaaaae4d35680, altdest=altdest@entry=0xaaaae4d35680, qc=qc@entry=0xffffd09f0d20) at tcop/./build/../src/backend/tcop/pquery.c:768
#14 0x0000aaaad5f5e414 in exec_execute_message (max_rows=9223372036854775807, portal_name=0xaaaae4d35170 "") at tcop/./build/../src/backend/tcop/postgres.c:2229
#15 PostgresMain (dbname=<optimized out>, username=<optimized out>) at tcop/./build/../src/backend/tcop/postgres.c:4721
#16 0x0000aaaad5ebf478 in BackendRun (port=0xaaaae4d6d480, port=0xaaaae4d6d480) at postmaster/./build/../src/backend/postmaster/postmaster.c:4464
#17 BackendStartup (port=0xaaaae4d6d480) at postmaster/./build/../src/backend/postmaster/postmaster.c:4192
#18 ServerLoop () at postmaster/./build/../src/backend/postmaster/postmaster.c:1782
#19 0x0000aaaad5eb52d8 in PostmasterMain (argc=<optimized out>, argv=<optimized out>) at postmaster/./build/../src/backend/postmaster/postmaster.c:1466
#20 0x0000aaaad5b5d6d8 in main (argc=1, argv=0xaaaae4c9bba0) at main/./build/../src/backend/main/main.c:198
```

How can we reproduce the bug?

```sql
CREATE TABLE test_by_hash(id BIGINT, value float8);
SELECT create_hypertable('test_by_hash', by_hash('id', 8));
ALTER TABLE test_by_hash SET (timescaledb.compress = true);
INSERT INTO test_by_hash VALUES (1, 1.0), (2, 2.0), (3, 3.0);
SELECT compress_chunk('_timescaledb_internal._hyper_1_1_chunk');
```
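The backtrace points at `find_chunk_to_merge_into` in `tsl/src/compression/api.c`. A plausible failure mode (an assumption on my part, not confirmed against the TimescaleDB sources) is that the merge path looks up the hypertable's open ("time") dimension, which does not exist for a hash-only hypertable, and then dereferences the NULL result. A minimal self-contained sketch of that pattern and a defensive guard; all struct and function names here are illustrative, not TimescaleDB's actual definitions:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative mock-ups; the real definitions live in TimescaleDB's sources. */
typedef enum
{
	DIMENSION_TYPE_OPEN,  /* range dimension, e.g. a time column */
	DIMENSION_TYPE_CLOSED /* hash dimension, e.g. by_hash('id', 8) */
} DimensionType;

typedef struct Dimension
{
	DimensionType type;
	long interval_length; /* only meaningful for open dimensions */
} Dimension;

typedef struct Hypertable
{
	Dimension *dimensions;
	int num_dimensions;
} Hypertable;

/* Returns the first open dimension, or NULL when the hypertable only has
 * closed (hash) dimensions -- exactly the situation in the repro above. */
static Dimension *
get_open_dimension(Hypertable *ht)
{
	for (int i = 0; i < ht->num_dimensions; i++)
	{
		if (ht->dimensions[i].type == DIMENSION_TYPE_OPEN)
			return &ht->dimensions[i];
	}
	return NULL;
}

/* Sketch of the guard: bail out instead of dereferencing a NULL dimension.
 * Returns 1 when chunk merging can be considered, 0 when it must be skipped. */
static int
can_consider_chunk_merge(Hypertable *ht)
{
	Dimension *open_dim = get_open_dimension(ht);

	if (open_dim == NULL)
		return 0; /* hash-only hypertable: no range to merge along */

	return open_dim->interval_length > 0;
}
```

With a guard of this shape, `compress_chunk` on a hash-only hypertable would skip the merge path (or raise an error) instead of segfaulting; the actual fix chosen upstream may of course differ.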
Hmm, having a closed primary dimension might break some assumptions. This was not possible with the old API, and in dev builds it will be prevented.