Closed Nemonek closed 6 months ago

I was looking in the book to fix some imprecisions in my notes, and I noticed that when you do `0.1 + 0.2 == 0.3`, as you say in the book, it's false. But why is `0.1 + 0.3 == 0.4` true? Also, I'm not sure I understood why we need to put an `M` when assigning a literal like `0.1` to a `decimal`.
`0.1 + 0.3 == 0.4` is true because with `double` values some numbers can be represented exactly and some cannot, and sometimes the rounding errors happen to cancel out, so the result of a calculation is bit-for-bit identical to the value it is compared against even though the arithmetic was not exact along the way. So some values can be directly compared and some cannot. In this case, `0.1 + 0.3` rounds to exactly the same `double` as the literal `0.4`, whereas `0.1 + 0.2` rounds to a value one bit above the `double` nearest to `0.3`. I deliberately picked `0.1` and `0.2` to compare to `0.3` because they cannot be reliably compared, as the result proves.
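For example, here is a quick sketch you can run in a console app; the printed values assume .NET Core 3.0 or later, where a `double` prints its shortest round-trippable form:

```csharp
using System;

double sum1 = 0.1 + 0.2;
double sum2 = 0.1 + 0.3;

Console.WriteLine(sum1 == 0.3); // False: the sum is one bit above the double nearest to 0.3
Console.WriteLine(sum2 == 0.4); // True: the sum rounds to exactly the same double as 0.4
Console.WriteLine(sum1);        // 0.30000000000000004
Console.WriteLine(sum2);        // 0.4
```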
Think about what `var x = 0.1;` would mean. As explained in the book, `0.1` is a literal `double` value, so the compiler infers `double` as the type of the `var`.
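You can confirm the inferred type with a one-liner (a minimal sketch):

```csharp
using System;

var x = 0.1;                    // 0.1 is a double literal, so x is inferred as double
Console.WriteLine(x.GetType()); // System.Double
```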
In the same way, if you write `decimal x = 0.1;` then the `0.1` is a `double`. A `double` cannot be implicitly converted to a `decimal` because it is potentially imprecise. You have to either cast it explicitly, to tell the compiler that you accept the potential consequences, or use the `M` suffix to indicate that you want to define a `decimal` literal value instead of a `double` literal value.
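Here are the three cases side by side (a sketch; the first declaration is commented out because it does not compile):

```csharp
using System;

// decimal a = 0.1;       // compile error CS0664: add an 'M' suffix to create a decimal literal
decimal b = (decimal)0.1; // explicit cast: you accept that the double value may be imprecise
decimal c = 0.1M;         // M suffix: 0.1M is an exact decimal literal from the start
Console.WriteLine(b);     // 0.1 (the double-to-decimal conversion rounds away the tiny error)
Console.WriteLine(c);     // 0.1
```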
Thank you!