comphead opened this issue 1 year ago
cc @viirya
This is because the upstream cast kernel doesn't check for precision overflow, although it does check for casting overflow. I've submitted a change upstream for this: https://github.com/apache/arrow-rs/pull/4866
On main this now produces
> select cast(1.1 as decimal(2, 2)) + 1;
Arrow error: Invalid argument error: 110 is too large to store in a Decimal128 of precision 2. Max is 9
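For reference, a minimal sketch of how this surfaces through the arrow-rs cast API (assuming a release that includes the precision check from the PR above; the exact null-vs-error behavior is controlled by `CastOptions::safe`): with `safe: true` the overflowing value becomes a null, roughly matching Spark's non-ANSI behavior, while with `safe: false` the cast returns an error like the one shown above.

```rust
use std::sync::Arc;

use arrow::array::{ArrayRef, Float64Array};
use arrow::compute::{cast_with_options, CastOptions};
use arrow::datatypes::DataType;
use arrow::error::ArrowError;

fn main() -> Result<(), ArrowError> {
    // 1.1 rescaled for Decimal128(precision = 2, scale = 2) is the integer 110,
    // which does not fit in 2 decimal digits, so the cast overflows.
    let input: ArrayRef = Arc::new(Float64Array::from(vec![1.1]));
    let target = DataType::Decimal128(2, 2);

    // safe = true (the default): the overflowing value is replaced with null,
    // similar to what Spark does in non-ANSI mode.
    let lenient_opts = CastOptions { safe: true, ..Default::default() };
    let lenient = cast_with_options(&input, &target, &lenient_opts)?;
    println!("null count with safe cast: {}", lenient.null_count());

    // safe = false: the overflow surfaces as an error, like the
    // "too large to store in a Decimal128 of precision 2" message above.
    let strict_opts = CastOptions { safe: false, ..Default::default() };
    match cast_with_options(&input, &target, &strict_opts) {
        Ok(array) => println!("unexpected success: {array:?}"),
        Err(e) => println!("strict cast error: {e}"),
    }
    Ok(())
}
```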
@comphead should we close this issue?
Describe the bug
A decimal conversion query is not consistent with PG or Spark
To Reproduce
The same query returns NULL in Spark in non-ANSI mode, and fails with a "numeric field overflow" error in PG.
Expected behavior
The result should be consistent with PG and Spark.
Additional context
No response