Open lkshminarayanan opened 1 year ago
This happens because constraint checking causes both sessions to decompress the same compressed segment into the uncompressed chunk concurrently, and the second decompression hits the unique constraint violation.
Same thing happens with UPDATE/DELETE since in principle the same conditions apply.
What type of bug is this?
Unexpected error
What subsystems and features are affected?
Compression
What happened?
When two `INSERT INTO ... ON CONFLICT DO NOTHING` queries are run against the same compressed tuple in parallel sessions, one of them throws the following error: `ERROR: duplicate key value violates unique constraint "_hyper_1_1_chunk_uidx" DETAIL: Key (device_id, "time")=(1, 2000-01-06 03:30:00+05:30) already exists.`
It should not throw this error, as the INSERT query uses `ON CONFLICT DO NOTHING`.
TimescaleDB version affected
2.10.1
PostgreSQL version used
15.2
What operating system did you use?
Ubuntu 22.04
What installation method did you use?
Source
What platform did you run on?
On prem/Self-hosted
Relevant log output and stack trace
Begin INSERT INTO in Session 1:
Begin INSERT INTO in Session 2 and observe it waiting for the lock:
Commit Session 1:
Session 2 throws an error:
How can we reproduce the bug?
Session 1 - Setup Tables and BEGIN TRANSACTION
Session 2 - Execute the parallel INSERT INTO DML.
Commit Session 1
After the commit in Session 1, Session 2 will fail with the above-mentioned error.
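The SQL for the steps above did not survive in this copy of the issue. The following is a hedged reconstruction sketch, not the reporter's original script: the table name `metrics`, index name `uidx`, and column set are assumptions inferred from the key columns (`device_id`, `"time"`) in the error message.

```sql
-- Hypothetical repro sketch; table/index names are assumptions,
-- only the key columns come from the reported error message.

-- Session 1: setup a compressed hypertable with a unique index,
-- then begin a transaction and insert into the compressed region.
CREATE TABLE metrics (time timestamptz NOT NULL, device_id int NOT NULL, value float);
SELECT create_hypertable('metrics', 'time');
CREATE UNIQUE INDEX uidx ON metrics (device_id, time);
ALTER TABLE metrics SET (timescaledb.compress,
                         timescaledb.compress_segmentby = 'device_id');
INSERT INTO metrics VALUES ('2000-01-06 03:30:00+05:30', 1, 1.0);
SELECT compress_chunk(c) FROM show_chunks('metrics') c;

BEGIN;
INSERT INTO metrics VALUES ('2000-01-06 03:30:00+05:30', 1, 2.0)
  ON CONFLICT DO NOTHING;

-- Session 2: run the same statement; it blocks on Session 1's lock.
INSERT INTO metrics VALUES ('2000-01-06 03:30:00+05:30', 1, 3.0)
  ON CONFLICT DO NOTHING;

-- Session 1: commit. Session 2 then reports the duplicate-key error
-- instead of silently doing nothing, which is the bug.
COMMIT;
```

Under the explanation at the top of the issue, both sessions decompress the same segment into the uncompressed chunk, so Session 2's constraint check sees the row materialized by Session 1 and raises the violation before `ON CONFLICT DO NOTHING` can suppress it.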