klevente closed this issue 9 months ago.
I've seen bugs raised around this behaviour before but thought it was fixed by this MR.
I'm able to reproduce it via your repo though, so there's probably a second issue there too. It's not a regression since that change, as I can reproduce it directly on the fix commit.
FYI: If anyone else hits issues starting the Docker container, switch the image from `postgres` to `postgres:12-bullseye`.
Hey @Tony-Samuels, thanks for the swift reply! If you need any more repro steps or anything else, please let me know. Unfortunately I'm not that well-versed in decimal handling, but I'll try looking into the code as well to see what might be going wrong.
Thank you for raising this - it's a very interesting edge case. I've created a fix for this which effectively bounds the scale at 28. While there are some legitimate cases for a scale of 29, it can cause undefined behavior, so we typically avoid anything above 28. Consequently, after bounding it, things work "as expected" (assuming rounding to fit into the decimal type is desired).
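To illustrate the idea of bounding the scale, here is a minimal plain-Rust sketch (not the crate's actual code - the real fix lives in the postgres deserialization path, and rust_decimal defaults to midpoint rounding rather than the half-up rounding used here for simplicity). It repeatedly drops the last digit of the mantissa until the scale fits within 28:

```rust
/// Maximum scale a valid Decimal may carry.
const MAX_SCALE: u32 = 28;

/// Reduce (mantissa, scale) until scale <= 28, rounding half-up
/// on each dropped digit. Hypothetical helper for illustration only.
fn bound_scale(mut mantissa: u128, mut scale: u32) -> (u128, u32) {
    while scale > MAX_SCALE {
        let dropped = mantissa % 10;
        mantissa /= 10;
        if dropped >= 5 {
            mantissa += 1;
        }
        scale -= 1;
    }
    (mantissa, scale)
}

fn main() {
    // 0.100000000000000000000000000001 arrives as mantissa 10^29 + 1
    // with scale 30; bounding rounds the trailing digits away,
    // leaving 0.1 represented at scale 28.
    let (m, s) = bound_scale(100_000_000_000_000_000_000_000_000_001u128, 30);
    println!("mantissa = {m}, scale = {s}");
}
```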
Thanks @paupino, really appreciate the fix! I think for this case, rounding is acceptable for now, until a configuration option or feature flag gets added where the user can control whether they'd like rounding or not - for me, rounding is perfectly fine.
Hi,
I discovered an issue with the `db-postgres` feature of the library when trying to deserialize `NUMERIC` values coming from Postgres.

When a value has a large number of digits after the decimal point, the code can produce invalid `Decimal` instances, which seem to work by themselves, but cause unexpected behavior when used in operations (e.g. summing them with another `Decimal`).

I created an example repository that reproduces the problem, but I'll also explain a bit more here.
Consider that we have the following value inside a Postgres table: `0.100000000000000000000000000001`, stored as a `NUMERIC` (without any scale or precision specification). When running a query to get this value, `rust_decimal` successfully deserializes it to a `Decimal` instance. When printing this value using its `Display` implementation, we get the following: `0.10000000000000000000000000000`, meaning that it supposedly performed some sort of truncation to make the value fit inside the boundaries the library adheres to.

However, when we check the value of the `scale` field inside the instance, we find that it is `29`, which is incorrect, since an internal comment states that valid scale values are numbers between `0` and `28`.
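For context, the `0..=28` bound follows from the layout of `Decimal`, which stores a 96-bit unsigned mantissa alongside the scale (a power of ten). The following self-contained check (plain Rust, no crate needed) shows why 28 is the cut-off:

```rust
fn main() {
    // A Decimal's mantissa has 96 bits. 10^28 still fits in 96 bits,
    // but 10^29 does not, so a scale of 29 cannot represent even the
    // value 1 exactly - hence the valid scale range 0..=28.
    let max_96: u128 = (1u128 << 96) - 1;
    println!("2^96 - 1       = {max_96}");
    println!("10^28 fits:      {}", 10u128.pow(28) <= max_96);
    println!("10^29 fits:      {}", 10u128.pow(29) <= max_96);
}
```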
The repository linked above highlights this issue, and also shows what happens if we try to use the invalid value in operations, leading to incorrect results (even `checked_add` could not detect the presence of an invalid value).

I understand that this is a tricky issue, and that there are a few different ways to handle it. From a library user's point of view, using the current API, I'd expect the deserialization from `NUMERIC` to `Decimal` to always produce a valid instance, even if it requires some rounding, but I'm open to other approaches as well. Is that a reasonable assumption, or are there some "hidden" constraints one should adhere to when working with this library and Postgres?
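One way to express that expectation is to validate at the deserialization boundary, so an out-of-range scale can never escape into arithmetic. The sketch below is hypothetical (the types `RawDecimal`, `DecimalError`, and `try_from_parts` are illustrative names, not part of rust_decimal's API):

```rust
/// Illustrative stand-in for a decimal's internal parts.
#[derive(Debug, PartialEq)]
struct RawDecimal {
    mantissa: u128,
    scale: u32,
}

#[derive(Debug, PartialEq)]
enum DecimalError {
    /// The scale exceeded the valid 0..=28 range.
    ScaleOutOfRange(u32),
}

/// Validating constructor: reject any scale above 28 instead of
/// letting an invalid instance leak into later operations.
fn try_from_parts(mantissa: u128, scale: u32) -> Result<RawDecimal, DecimalError> {
    if scale > 28 {
        return Err(DecimalError::ScaleOutOfRange(scale));
    }
    Ok(RawDecimal { mantissa, scale })
}

fn main() {
    println!("{:?}", try_from_parts(1, 28));
    println!("{:?}", try_from_parts(1, 29));
}
```

Whether the boundary should reject (as here) or round (as the merged fix does) is exactly the configuration question raised above.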