markjprice / cs10dotnet6

Repository for the Packt Publishing book titled "C# 10 and .NET 6 - Modern Cross-Platform Development" by Mark J. Price

Why does 0.1 + 0.2 not equal 0.3, but 0.1 + 0.3 equals 0.4? And why do we need the M to declare a decimal literal value? #120

Closed: Nemonek closed this issue 6 months ago

Nemonek commented 1 year ago

I was looking through the book to fix some imprecisions in my notes, and I noticed that 0.1 + 0.2 == 0.3 is false, as you said in the book, but why is 0.1 + 0.3 == 0.4 true? Also, I'm not sure I understood why we need to put an M in this:

decimal x = 0.1M;
markjprice commented 1 year ago

0.1 + 0.3 == 0.4 is true because with double values some numbers can be represented exactly and some cannot, and sometimes the rounding errors happen to line up so that two imprecise values end up with exactly the same stored representation, even though the exact mathematical results would differ. So some numbers can safely be compared directly and some cannot. I deliberately picked 0.1 and 0.2 to compare to 0.3 because they cannot be compared, as the result proves.
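
As a quick illustration (a minimal sketch, not from the book; it assumes a .NET 6 console app with top-level statements and implicit usings, where writing a double prints its shortest round-trippable value):

double a = 0.1, b = 0.2, c = 0.3;
Console.WriteLine(a + b == c);       // False
Console.WriteLine(a + b);            // 0.30000000000000004 - the stored sum is not exactly 0.3
Console.WriteLine(0.1 + 0.3 == 0.4); // True - here the rounding errors happen to cancel out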

Think about what var x = 0.1; would mean. As explained in the book, 0.1 on its own is a double literal value, so the compiler would infer double for the var.
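
For example (same sketch setup as above):

var x = 0.1;                    // 0.1 is a double literal
Console.WriteLine(x.GetType()); // System.Double - the compiler inferred double for x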

In the same way, if you write decimal x = 0.1; then the 0.1 is a double. A double cannot be implicitly converted to a decimal because it is potentially imprecise. You have to either cast it explicitly, to tell the compiler that you accept the potential consequences, or use the M suffix to indicate that you want a decimal literal value instead of a double literal value.
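
Here is a short sketch of the options (not from the book; the error code is what I would expect the compiler to report for a double literal assigned to a decimal):

// decimal a = 0.1;       // does not compile: error CS0664, a double literal cannot be implicitly converted to decimal
decimal b = (decimal)0.1; // compiles: the explicit cast says you accept converting a potentially imprecise double
decimal c = 0.1M;         // preferred: the M suffix makes 0.1 a decimal literal, stored exactly
Console.WriteLine(b);     // 0.1 here, because the double-to-decimal conversion rounds the value
Console.WriteLine(c);     // 0.1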

Nemonek commented 1 year ago

Thank you!