Closed: taylorjdlee closed this issue 1 year ago
But 102*0.4 = 40.8, not 48?
Yep, my bad, I meant 40.8; it was a typo. It seems to be an issue with floating-point arithmetic, specifically this line: https://github.com/EFeru/DbcParser/blob/59b4eb633e4fdf7f02858b2e27484be89287a570/DbcParserLib/Packer.cs#L108
I guess we should maybe make sure the output of Packer.RxSignalUnpack is based on the number of decimal places used in the factor? In theory the unpacked values should never contain more decimal places than the factor.
So one simple solution is using decimal instead of double, i.e. an example line would be
(decimal)(iVal * signal.Factor + signal.Offset)
This should give the correct value of 40.8, but then again we're using 128 bits instead of 64 bits to store the value. We can do this, or round the double value based on the number of decimal places in the factor, for example:
// Requires System.Globalization; use the invariant culture so '.' is the separator
string factorStr = signal.Factor.ToString(CultureInfo.InvariantCulture);
int pointIndex = factorStr.IndexOf('.');
// Guard against factors with no fractional part (e.g. a factor of 2)
int decimalPlaces = pointIndex < 0 ? 0 : factorStr.Length - pointIndex - 1;
var result = (double)iVal * signal.Factor + signal.Offset;
result = Math.Round(result, decimalPlaces);
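As a standalone check of the rounding idea (with iVal, the factor, and the offset inlined as hypothetical values rather than the real Signal type), the arithmetic behaves like this:

```csharp
using System;
using System.Globalization;

class RoundingSketch
{
    static void Main()
    {
        // Hypothetical stand-ins for iVal, signal.Factor and signal.Offset
        ulong iVal = 102;
        double factor = 0.4;
        double offset = 0.0;

        // Plain double arithmetic accumulates a binary representation error
        double raw = iVal * factor + offset;
        Console.WriteLine(raw); // 40.800000000000004

        // Derive the number of decimal places from the factor's text form,
        // guarding against factors with no fractional part
        string factorStr = factor.ToString(CultureInfo.InvariantCulture);
        int pointIndex = factorStr.IndexOf('.');
        int decimalPlaces = pointIndex < 0 ? 0 : factorStr.Length - pointIndex - 1;

        double rounded = Math.Round(raw, decimalPlaces);
        Console.WriteLine(rounded); // 40.8
    }
}
```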
Another potential option, which allows us to keep double without storing anything as decimal, is:
(double)(iVal * (decimal)signal.Factor + (decimal)signal.Offset)
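A minimal check of that option (again with hypothetical inlined values standing in for the real signal fields) shows the intermediate decimal arithmetic produces the exact result before it is stored back in a double:

```csharp
using System;

class DecimalMathSketch
{
    static void Main()
    {
        // Hypothetical stand-ins for iVal, signal.Factor and signal.Offset
        ulong iVal = 102;
        double factor = 0.4;
        double offset = 0.0;

        // Do the multiply/add in decimal, then store the result as double
        double value = (double)(iVal * (decimal)factor + (decimal)offset);
        Console.WriteLine(value); // 40.8
    }
}
```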
Let me know your thoughts
Some examples of factors with a large number of decimal places to show the importance of this issue:
@Adhara3 @EFeru Let me know your thoughts, personally I like the idea of doing the calculations with decimal but storing the calculated value in a double
Fixed by #40
Hi,
assuming that factor and offset can always be represented as decimal, I would then change the type of the property instead of casting. Packing/unpacking may end up in a tight loop; I would avoid useless casting.
Moreover, besides the above (probably premature) optimization, changing the type would make it clear that we want to avoid rounding errors even when parsing.
Cons: double arithmetic is easier, decimal does not provide some operators, and changing the type would break the API.
Citing MSDN:
The decimal type is appropriate when the required degree of precision is determined by the number of digits to the right of the decimal point. Such numbers are commonly used in financial applications, for currency amounts (for example, $1.00), interest rates (for example, 2.625%), and so forth. Even numbers that are precise to only one decimal digit are handled more accurately by the decimal type: 0.1, for example, can be exactly represented by a decimal instance, while there's no double or float instance that exactly represents 0.1. Because of this difference in numeric types, unexpected rounding errors can occur in arithmetic calculations when you use double or float for decimal data. You can use double instead of decimal when optimizing performance is more important than ensuring accuracy. However, any difference in performance would go unnoticed by all but the most calculation-intensive applications.
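The 0.1 example from that quote is easy to verify directly; accumulating 0.1 in a double drifts away from the exact sum, while decimal stays exact:

```csharp
using System;

class PointOneSketch
{
    static void Main()
    {
        // Summing 0.1 ten times exposes the binary representation error...
        double d = 0.0;
        for (int i = 0; i < 10; i++) d += 0.1;
        Console.WriteLine(d == 1.0);   // False

        // ...while decimal represents 0.1 exactly
        decimal m = 0.0m;
        for (int i = 0; i < 10; i++) m += 0.1m;
        Console.WriteLine(m == 1.0m);  // True
    }
}
```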
@taylorjdlee said:
Some examples of factors with a large number of decimal places to show the importance of this issue
I'm not sure I get the point here. With many decimal places, double should be the better representation. The issue may appear only on fairly round numbers with few digits, as 40.8 was.
Bottom line: to me this wasn't a big issue; the error was negligible, floating-point comparisons are generally discouraged, and rounding is probably something I would leave to the user in specific scenarios (e.g. financial ones).
My 2 cents A
@Adhara3 Hmm, yeah, you do bring up some good points. I guess maybe we do let the end user handle this then? Or provide them with both options, i.e. decimal and double calculations?
We could also revert to double, but it would still be nice to provide accuracy with decimal calculations if the end user wants it. I just want the Unpack function to produce values accurate to the expected value, i.e. 40.8.
Please also note that
var incorrectValue = 40.800000000000004;
Console.WriteLine((decimal)incorrectValue);
// You get: 40.8
Just to say that these errors could be fixed by the user, and to show that we do lose some precision when converting to decimal.
A
So Packer.RxSignalUnpack can produce incorrect values. For example, I have the following byte array, built using this code snippet:
I then have these decoding rules highlighted in blue:
Looking at the decoding rules, we are looking at the 7th byte, which is 0x66. 0x66 equals 102 in decimal, which when decoded is 102 x 0.4, i.e. exactly 40.8. That is not what Packer.RxSignalUnpack produces; it actually gives a value of 40.800000000000004, which is incorrect. I've only just found this bug, but I will dig deeper.
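The arithmetic described above can be reproduced in isolation, which suggests the error comes from the plain double multiplication rather than the bit extraction (values inlined here for illustration):

```csharp
using System;

class UnpackRepro
{
    static void Main()
    {
        // 7th byte of the payload, as described in the decoding rules
        byte raw = 0x66;
        Console.WriteLine(raw);        // 102

        // Multiplying by the 0.4 factor in double arithmetic drifts
        double decoded = raw * 0.4;
        Console.WriteLine(decoded);    // 40.800000000000004

        // The same multiplication in decimal gives the expected value
        Console.WriteLine(raw * 0.4m); // 40.8
    }
}
```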