Fixes casting of numeric values in Normalize insert-into-select statements in the Clickhouse connector
Changes the behaviour of our default precision and scale setting. We now first check whether we can reduce the precision to the maximum supported value, and only if that is not possible do we fall back to our defaults. For example, for a numeric column on Postgres of type numeric(78,70):
Previous behaviour for Clickhouse:
Since 78 is higher than the maximum supported precision on Clickhouse (76), we would set the column to PeerDB's chosen default precision (76) and default scale (38), i.e. Decimal(76, 38)
New behaviour:
We first see that the precision is above the maximum, so we clamp it to 76.
Then we check whether the new (precision, scale) pair is valid. If not, we fall back to the previous behaviour and use PeerDB's defaults. In this case (76, 70) is valid, so the numeric on the target becomes Decimal(76, 70). Notice that the scale is left unchanged, unlike above
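The clamp-then-validate logic above can be sketched as follows. This is a minimal illustration, not PeerDB's actual code: the constant and function names are hypothetical, and the validity check is simplified to "scale must not exceed precision".

```go
package main

import "fmt"

// Hypothetical constants mirroring ClickHouse's Decimal limit and
// PeerDB's chosen defaults; names are illustrative only.
const (
	maxPrecision     = 76
	defaultPrecision = 76
	defaultScale     = 38
)

// clampPrecisionScale sketches the new behaviour: first clamp the
// precision to the ClickHouse maximum, and only fall back to the
// defaults if the resulting (precision, scale) pair is still invalid.
func clampPrecisionScale(precision, scale int16) (int16, int16) {
	if precision > maxPrecision {
		precision = maxPrecision
	}
	// Simplified validity check: a (precision, scale) pair is valid
	// when both are non-negative and scale does not exceed precision.
	if precision <= 0 || scale < 0 || scale > precision {
		return defaultPrecision, defaultScale
	}
	return precision, scale
}

func main() {
	p, s := clampPrecisionScale(78, 70)
	fmt.Printf("Decimal(%d, %d)\n", p, s) // numeric(78,70) -> Decimal(76, 70)
}
```

With this ordering, numeric(78,70) maps to Decimal(76, 70) rather than the defaults, because the clamped pair is already valid.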
Motivation:
This allows us to map a numeric like (78,0) to (76,0) instead of (76,38) as before - a better fit when the incoming numerics will never have fractional parts