Thomick opened this issue 1 year ago
The second solution seems like the right way to do it: considering only the relevant digits and ignoring artifacts from the approximate conversion is what is actually done in science. However, it would also be useful if the user could set a hard limit on the number of decimal places in the output.
The result of a conversion rarely lands on a value with the same number of decimal places as the input. Would it be possible to add a way to tune the precision of the output to the user's needs? Could this be done in a smart way, so that the precision of the output automatically adapts to the number of decimal places actually used in the input? This feature would require adaptive rounding of the value displayed in the output field (perhaps similar to what is proposed in #1).
Possible implementation:
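A minimal sketch in Python of one way adaptive rounding could work (all function names here are hypothetical, not part of the project): round the converted value to the same number of significant figures as the input the user typed, with an optional hard cap on the number of decimal places as suggested above. This assumes the raw input string is available, since the significant figures cannot be recovered from the parsed float alone.

```python
import math

def significant_figures(text: str) -> int:
    """Count significant figures in a numeric string (hypothetical helper).

    Strips the sign, the decimal point, and leading zeros, then counts
    the remaining digits, e.g. "0.0042" -> 2, "12.5" -> 3.
    """
    digits = text.strip().lstrip("+-").replace(".", "").lstrip("0")
    return max(len(digits), 1)

def adaptive_round(value: float, input_text: str, max_decimals: int = 10) -> float:
    """Round `value` to as many significant figures as `input_text` has,
    capped at `max_decimals` decimal places (the proposed hard limit)."""
    if value == 0:
        return 0.0
    sig = significant_figures(input_text)
    # Number of decimal places needed so that `sig` significant digits survive:
    decimals = sig - 1 - math.floor(math.log10(abs(value)))
    decimals = min(max(decimals, 0), max_decimals)
    return round(value, decimals)

# Converting 2.54 cm to inches: input has 3 significant figures,
# so the artifact-free result is shown as 1.0 rather than 0.9999999...
print(adaptive_round(2.54 / 2.54, "2.54"))  # 1.0
```

A refinement would be to round to significant figures of the *result's* magnitude as above, rather than copying the input's decimal-place count verbatim, since a unit conversion can shift the magnitude by several orders (e.g. mm to km).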