littledan opened this issue 4 years ago
Fabrice Bellard said,
Regarding the precision, I agree that going to 128 bits is tempting because it avoids using too much memory and may be enough for practical use. On the other hand, the memory problem is already present with BigInt. I think it is a question to ask the potential users. Personally, even if a 128-bit default precision (i.e. 34 digits) is chosen, I think it would be interesting to keep the ability to change the default precision.
Optional bounded precision: it could be possible to add the ability to do computations with a finite default precision. If the feature is necessary, I suggest doing it only in nested blocks, so that it is not possible to change the default precision outside well-defined code. For example, in QuickJS BigFloat, the only way to change the default precision is to call BigFloat.setPrec(func, precision) to execute func() with the new default precision "precision".
I would suggest BigDecimal.setPrec(func, prec), as with the QuickJS BigFloat. The precision is changed only during the execution of func(). The previous precision is restored when setPrec returns, or in case of an exception.
Maybe it was not clear, but I assume that no precision is attached to the BigDecimal values; the precision only applies to the operations.
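The dynamically scoped behavior described above can be sketched in plain JavaScript. This is a toy model, not the real QuickJS API: `getPrec`/`setPrec` here just read and write a module-level variable, standing in for the engine's internal default precision.

```javascript
// Toy model of dynamically scoped precision (hypothetical names, not a
// real API). The precision is ambient state, not a property of values.
let currentPrec = 34; // assumed default: 34 significant digits

function getPrec() {
  return currentPrec;
}

function setPrec(func, prec) {
  const saved = currentPrec;
  currentPrec = prec;
  try {
    return func(); // runs with the new default precision
  } finally {
    currentPrec = saved; // restored on return and on exception
  }
}

const outer = getPrec();                    // 34
const inner = setPrec(() => getPrec(), 10); // 10 inside the callback
console.log(outer, inner, getPrec());       // 34 10 34
```

The `try`/`finally` is what gives the "restored in case of exception" guarantee: the previous precision comes back even if `func()` throws.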
I could see the `setPrec` function, using dynamic scoping, as somewhat less bad than a simple global variable. But it still seems really undesirable to me, as it's anti-modular: you may call into code that you don't realize uses BigDecimal, unintentionally changing its precision. To make a reliable library, you'd have to guard your own exported code with `setPrec`, which doesn't seem so great. I'd prefer that we either agree on a global, fixed precision (as many languages have, e.g., C# and Swift), or use unlimited precision.
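The anti-modularity hazard can be made concrete with a small sketch. Everything here is illustrative (hypothetical names, plain doubles standing in for decimal values): a "library" function's result silently degrades because a caller changed the ambient precision.

```javascript
// Illustrative only: why dynamically scoped precision is anti-modular.
let ambientPrec = 34; // assumed ambient default precision

function withPrec(prec, func) {
  const saved = ambientPrec;
  ambientPrec = prec;
  try { return func(); } finally { ambientPrec = saved; }
}

// A "library" function whose author expects full precision.
function libraryRatio() {
  return (1 / 3).toPrecision(ambientPrec);
}

const normal = libraryRatio();            // 34 significant digits
const coarse = withPrec(2, libraryRatio); // "0.33" - the caller silently
                                          // degraded the library's result
console.log(normal, coarse);
```

The library author never opted into the lower precision; unless every exported function is wrapped in its own precision guard, callers can change its results from the outside.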
After some more thought and discussion with @waldemarhorwat, I've decided to leave the question of BigDecimal vs Decimal128 undecided for a bit longer, and investigate both paths in more detail.
I work on Ethereum-based financial software. The largest integer in Ethereum is a `uint256`. In practical terms, this means the largest decimal we need to be able to represent is 115792089237316195423570985008687907853269984665640564039457584007913129639935, and the smallest is 0.000000000000000000000000000000000000000000000000000000000000000000000000000001. Decimal128, with 34 significant digits, cannot represent these numbers.
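For concreteness, the `uint256` maximum quoted above is exactly 2^256 − 1, and it takes 78 decimal digits, well beyond Decimal128's 34 significant digits. BigInt handles it exactly:

```javascript
// The uint256 maximum from the comment above, as an exact BigInt.
const max = 2n ** 256n - 1n;

console.log(max.toString().length); // 78 decimal digits vs. Decimal128's 34
console.log(
  max ===
    115792089237316195423570985008687907853269984665640564039457584007913129639935n
); // true
```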
@novemberborn If you're currently using a uint256, would BigInt work for your needs? How are those decimals currently represented?
@littledan we've ended up with a few representations, unfortunately.
While we can represent the raw value as a BigInt, this isn't actually useful. The smallest unit of ETH is a Wei, but thinking of ETH as `1000000000000000000n` Wei just hurts everybody's head. And that's before we want to calculate the USD equivalent of a given ETH balance.
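The head-hurting conversion can at least be done exactly with BigInt scaling, since 1 ETH = 10^18 Wei. A minimal sketch (the helper name is hypothetical; it handles non-negative amounts only and avoids floating point entirely):

```javascript
// Sketch: format a non-negative Wei balance (BigInt) as a decimal ETH
// string, with no floating-point rounding. 1 ETH = 10**18 Wei.
const WEI_PER_ETH = 10n ** 18n;

function weiToEth(wei) {
  const whole = wei / WEI_PER_ETH;                            // integer ETH
  const frac = (wei % WEI_PER_ETH).toString().padStart(18, "0"); // 18 places
  return `${whole}.${frac}`.replace(/\.?0+$/, "") || "0";     // trim zeros
}

console.log(weiToEth(1500000000000000000n)); // "1.5"
console.log(weiToEth(1n));                   // "0.000000000000000001"
```

This is exactly the kind of bookkeeping a decimal type would absorb; the point of the sketch is how much manual scaling BigInt alone requires.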
Can you say more about the representations you're using now? You only mentioned uint256 (of Wei?); I'd be interested in hearing about the others.
I haven't worked much with the representation we use in our databases. We're looking at cleaning this up, so I'll know more in the next few weeks, hopefully.
On the wire, we either use decimal strings, or a `'1'` integer string with an exponent value of 78.
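If I understand the second wire format correctly (a coefficient string plus an exponent), it decodes to an exact value by scaling. A hedged sketch, with a hypothetical helper name and assuming a non-negative exponent:

```javascript
// Hypothetical decoder for a (coefficient string, exponent) wire pair:
// value = coefficient * 10^exponent, kept exact as a BigInt.
// Assumes exponent >= 0; a decimal type would also cover negative ones.
function fromScaled(coefficient, exponent) {
  return BigInt(coefficient) * 10n ** BigInt(exponent);
}

console.log(fromScaled("1", 78) === 10n ** 78n); // true
```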
Coming in cold to this discussion, but it seems that there aren't any arguments here against the arbitrary-precision approach. The arbitrary-precision approach would support options on various operations that would allow one to specify precision, thereby (potentially) gaining some speed and memory benefits in certain use cases, such as when one knows, e.g., that at most 10 digits are needed for any calculation. There was a reference to a discussion with @waldemarhorwat. Are those concerns still valid?
The brittleness and complexity concerns from past discussions haven't changed; see those discussions to understand the problems and dangers that appear with unlimited precision. If precision is an option, what happens when one doesn't specify it? How does one specify it for an arithmetic operator such as `+`?
This proposal suggests adding arbitrary-precision decimals. Another alternative would be to add decimals with fixed precision. My current thoughts on why to go with arbitrary precision decimal (also in the readme):
JavaScript is a high-level language, so it would be optimal to give JS programmers a high-level data type that makes sense logically for their needs, as TC39 did for BigInt, rather than focusing on machine needs. At the same time, many high-level programming languages with decimal data types just include a fixed precision. Because many languages added decimal data types before IEEE standardized one, there's a big variety of different choices that different systems have made.
We haven't seen examples of programmers running into practical problems due to rounding from fixed-precision decimals (across various programming languages that use different details for their decimal representation). This makes IEEE 128-bit decimal seem attractive. Decimal128 would solve certain problems, such as giving a well-defined point to round division to (simply limited by the size of the type).
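The "well-defined point to round division to" matters because exact decimal division often doesn't terminate. A small sketch (hypothetical helper, truncating rather than rounding, and assuming a result between 0 and 1, purely to show the shape of the problem): with unlimited precision, 1/3 has no natural stopping point, so some precision, like Decimal128's 34 digits, has to be chosen somewhere.

```javascript
// Sketch: "divide a by b to n decimal digits" via BigInt scaling.
// Truncating, and assuming 0 < a < b, just to illustrate that division
// forces a precision choice that a fixed-width type makes for you.
function divideToDigits(a, b, n) {
  const scaled = (a * 10n ** BigInt(n)) / b; // truncated scaled quotient
  return `0.${scaled.toString().padStart(n, "0")}`;
}

console.log(divideToDigits(1n, 3n, 34));
// 34 digits, mirroring Decimal128 - and more are always available
```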
However, we're proposing unlimited-precision decimal instead, for the following reasons: