Closed stevendanna closed 2 months ago
Hi @stevendanna, please add branch-* labels to identify which branch(es) this C-bug affects.
:owl: Hoot! I am Blathers, a bot for CockroachDB. My owner is dev-inf.
cc @mgartner
I don't think this is a bug. When a custom query plan is used, the query plan is optimized with known values for the placeholders. The FoldNullBinaryRight
normalization rule (see here) performs the following transformation for the computed column expression (col1 + col2) + col3:
(Plus
(Plus (Const 9223372036854775807) (Const 507217562))
(Null)
)
=>
NULL
This prevents the execution engine from ever performing the overflowing addition operation.
With a generic query plan, no optimization occurs after the placeholder values are known. Therefore, this same transformation never occurs, and the addition operation overflows.
The optimizer is free to either perform this transformation or not, so both outcomes are valid.
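To make the difference concrete, here is a minimal sketch (plain Python, not CockroachDB code) of the idea behind FoldNullBinaryLeft/FoldNullBinaryRight: when either operand of a null-propagating binary operator is NULL, the whole node folds to NULL before the arithmetic ever runs. The `normalize` function and tuple-based expression tree are hypothetical stand-ins for the optimizer's expression representation.

```python
INT64_MAX = 2**63 - 1
INT64_MIN = -2**63

def normalize(expr):
    """Normalize a Plus tree. `None` stands in for SQL NULL."""
    if not isinstance(expr, tuple):
        return expr  # Const or Null leaf
    _, left, right = expr
    # FoldNullBinaryLeft / FoldNullBinaryRight: a NULL operand folds the
    # whole Plus to NULL, so the addition below is never attempted.
    if left is None or right is None:
        return None
    l, r = normalize(left), normalize(right)
    if l is None or r is None:
        return None
    total = l + r
    if not INT64_MIN <= total <= INT64_MAX:
        raise OverflowError("integer out of range")
    return total

# (Plus (Plus (Const 9223372036854775807) (Const 507217562)) (Null)) => NULL
tree = ("Plus", ("Plus", 9223372036854775807, 507217562), None)
print(normalize(tree))  # None: the overflowing inner addition never runs
```

With a generic plan no such fold happens, so execution reaches the inner addition and raises the overflow, which mirrors the behavior described above.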
FWIW, this always results in an overflow error in Postgres:
DROP TABLE IF EXISTS t;
CREATE TABLE t (
col1 INT8 PRIMARY KEY,
col2 INT4 NULL,
col3 INT4 NULL,
col4 INT8 NULL GENERATED ALWAYS AS ((col1 + col2) + col3) STORED
);
PREPARE p AS INSERT INTO t (col1, col2, col3) VALUES ($1, $2, $3);
EXECUTE p(9223372036854775807, 507217562, NULL);
-- psql:tmp.sql:15: ERROR: 22003: bigint out of range
-- LOCATION: int84pl, int8.c:915
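The arithmetic behind the error is straightforward: $1 in the failing EXECUTE is already the largest bigint value, so adding col2 must overflow unless the NULL short-circuits it first. A quick check:

```python
INT64_MAX = 2**63 - 1  # largest value an INT8/bigint column can hold

# $1 is already INT64_MAX, so adding col2 necessarily exceeds the range.
total = 9223372036854775807 + 507217562
print(total > INT64_MAX)  # True: hence "bigint out of range" from int84pl
```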
👍 Thanks for taking a look and for the detailed explanation.
Describe the problem
When plan_cache_mode is set to force_generic_plan, some prepared inserts behave differently when processing a computed column that technically overflows but can also evaluate to NULL, since one of the inputs is NULL. At least, that is the nature of the following reproduction:
Setup:
Working insert:
Not working:
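The reproduction steps were not quoted above; a sketch can be reconstructed from the schema shown earlier in the thread (the original report's exact statements may differ):

```sql
-- Assumed reproduction, based on the schema from the Postgres comparison.
SET plan_cache_mode = force_generic_plan;

CREATE TABLE t (
  col1 INT8 PRIMARY KEY,
  col2 INT4 NULL,
  col3 INT4 NULL,
  col4 INT8 NULL GENERATED ALWAYS AS ((col1 + col2) + col3) STORED
);

PREPARE p AS INSERT INTO t (col1, col2, col3) VALUES ($1, $2, $3);

-- Working insert: the sum stays in range regardless of folding.
EXECUTE p(1, 507217562, NULL);

-- Not working: with a generic plan the NULL is not folded away, so the
-- inner addition (col1 + col2) overflows before NULL can short-circuit it.
EXECUTE p(9223372036854775807, 507217562, NULL);
```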
Jira issue: CRDB-40376