I was checking out this library, but I found that it doesn't produce results that are any more accurate than plain double arithmetic.
First off, here is an implementation for you of the conversion from decimal that doesn't truncate the value to double precision:
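The idea is to take the nearest double as the high component, then subtract that double back from the original value in decimal arithmetic, so the low component captures the digits the high part missed. A sketch of that approach (the type and member names `DdReal`, `Hi`, `Lo`, `FromDecimal` and `ToDecimal` are placeholders, not necessarily the library's actual API):

```csharp
using System;

// Sketch of the conversion, assuming a double-double type that holds
// a high and a low component. All names here are placeholders for
// whatever the library actually exposes.
public readonly struct DdReal
{
    public readonly double Hi;
    public readonly double Lo;

    public DdReal(double hi, double lo) { Hi = hi; Lo = lo; }

    public static DdReal FromDecimal(decimal value)
    {
        // High part: the value rounded to double precision.
        double hi = (double)value;
        // Low part: the remainder, computed in decimal arithmetic so
        // it is not truncated to double precision before we capture it.
        double lo = (double)(value - (decimal)hi);
        return new DdReal(hi, lo);
    }

    public decimal ToDecimal() => (decimal)Hi + (decimal)Lo;
}
```

Since each double component contributes roughly 15 significant decimal digits, the two together should cover the 28-29 digits of a decimal, which is why the round trip below works.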
Using that I can successfully convert a decimal value to DdReal and back to decimal without losing any precision.
I put together this to test the correctness of the arithmetic:
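The test compares against decimal arithmetic, which is correct to 28-29 significant digits, and counts how many leading digits agree. A runnable sketch of that technique is below; it measures plain double against decimal to show what the agreement count looks like, and the double-double result would be plugged in at the marked line (the `DdReal` API itself is not reproduced here):

```csharp
using System;

class PrecisionCheck
{
    // Count how many leading characters of two numeric strings agree;
    // a rough measure of matching significant digits.
    static int Agreement(string expected, string actual)
    {
        int n = 0;
        while (n < Math.Min(expected.Length, actual.Length)
               && expected[n] == actual[n]) n++;
        return n;
    }

    static void Main()
    {
        // Reference value, correct to 28-29 significant digits.
        decimal expected = 1m / 3m;

        // Substitute the double-double result here; plain double is
        // used only to illustrate the measurement.
        double actual = 1.0 / 3.0;

        Console.WriteLine($"expected: {expected}");
        Console.WriteLine($"actual:   {actual:R}");
        Console.WriteLine(
            $"agree on {Agreement(expected.ToString(), actual.ToString("R"))} leading characters");
    }
}
```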
The output shows that the result is correct to only 14 significant digits, which is the accuracy you get with plain double arithmetic.
Update:
After trying to port several double-double libraries from different languages, I have come to the conclusion that it can't be done reliably in C#. According to the specification, the runtime is free to perform double calculations at a higher precision than 64 bits (for example in 80-bit x87 registers). As double-double implementations rely on every operation rounding to exactly 64-bit double precision, this can't be guaranteed in plain C#.
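The failure mode is easiest to see in the basic building block of double-double arithmetic, the two-sum error-free transformation. This sketch is mine, not taken from any of the libraries: it recovers the exact rounding error of an addition, but only if each intermediate result is rounded to 64-bit double. If intermediates are kept at 80-bit precision, the computed error term is no longer the true rounding error.

```csharp
using System;

class TwoSumDemo
{
    // Knuth's two-sum: returns s and err such that s + err equals
    // a + b exactly -- provided every intermediate below is rounded
    // to 64-bit double. Excess (80-bit) intermediate precision
    // silently breaks this property.
    static (double Sum, double Err) TwoSum(double a, double b)
    {
        double s = a + b;
        double bb = s - a;
        double err = (a - (s - bb)) + (b - bb);
        return (s, err);
    }

    static void Main()
    {
        // 1e16 + 1 rounds to 1e16 in double; TwoSum recovers the lost 1
        // in the error term, so the pair (1e16, 1) represents the exact sum.
        var (sum, err) = TwoSum(1e16, 1.0);
        Console.WriteLine($"sum = {sum:R}, err = {err:R}");
    }
}
```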
Ref: https://stackoverflow.com/questions/6683059/is-floating-point-math-consistent-in-c-can-it-be