tc39 / proposal-intl-numberformat-v3

Additional features for Intl.NumberFormat to solve key pain points.

Clarification on what "precision" means in roundingPriority: morePrecision/lessPrecision #96

Closed ptomato closed 1 year ago

ptomato commented 2 years ago

Hi, I landed here through reviewing the test262 tests for this proposal, specifically this change: https://github.com/tc39/test262/pull/3506

In short, it's about the following code:

```js
const formatter = new Intl.NumberFormat('en', {
  useGrouping: false,
  roundingPriority: "morePrecision",
  minimumSignificantDigits: 2,
  minimumFractionDigits: 2,
});
formatter.format("1");
```

There's a conflict here that roundingPriority is supposed to resolve. If we take minimumSignificantDigits, the result is "1.0". If we take minimumFractionDigits, the result is "1.00". The test262 test asserts that "1.0" is the correct result.
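For reference, applying each constraint on its own produces the two candidates:

```js
// Each setting applied in isolation (illustrative):
new Intl.NumberFormat('en', { useGrouping: false, minimumSignificantDigits: 2 }).format(1); // "1.0"
new Intl.NumberFormat('en', { useGrouping: false, minimumFractionDigits: 2 }).format(1);    // "1.00"
```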

@romulocintra and I have stepped through the algorithm in the spec, on paper, and verified that the test262 test conforms: the correct result according to the spec is "1.0". However, reading the proposal's documentation, I don't really understand why "1.0" should be correct. Here's what the documentation says:

> roundingPriority: "morePrecision" means that the result with more precision wins a conflict.

With my user hat on (as someone who reads the documentation but not the spec), I'd expect "1.00" (3 significant digits) to have more precision than "1.0" (2 significant digits), so the outcome should be that "1.00" wins the conflict.

Where is my understanding faulty? My guess is that the documentation is using a different meaning of "precision" than what I expected. Could the meaning of "precision" be clarified in the documentation?

(Note the documentation actually has this example in it already, but it doesn't mention what the result is :smile:)

sffc commented 2 years ago

Thank you for the issue! I agree that the result is a bit counter-intuitive. Here is my mental model. The choice between "more precision" and "less precision" is only made with respect to the maximum fraction or significant digits. The minimum digits, which only control trailing zeros, play no part in the "more precision" or "less precision" game.
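For example, when both maximums are set explicitly, the priority setting simply chooses between those two maximums (an illustrative sketch; the values are just examples):

```js
// Both maximums set explicitly: 2 significant digits would give "4.3",
// 2 fraction digits would give "4.32"; the rounding priority picks between them.
const opts = { maximumSignificantDigits: 2, maximumFractionDigits: 2 };
new Intl.NumberFormat('en', { ...opts, roundingPriority: "morePrecision" }).format(4.321); // "4.32"
new Intl.NumberFormat('en', { ...opts, roundingPriority: "lessPrecision" }).format(4.321); // "4.3"
```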

Since maximum significant digits default to 21 and maximum fraction digits normally default to only 3, significant digits will almost always "win" in morePrecision mode unless they are overridden with the maximumSignificantDigits setting.
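So with nothing but a rounding priority specified, the 21-significant-digit default is the more precise of the two maximums and decides the rounding (illustrative):

```js
// Defaults when a rounding priority is given: up to 21 significant digits vs. up to 3 fraction digits.
new Intl.NumberFormat('en', { roundingPriority: "morePrecision" }).format(1.23456); // "1.23456"
new Intl.NumberFormat('en', { roundingPriority: "lessPrecision" }).format(1.23456); // "1.235"
```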

In other words, the surprising behavior only happens when minimum digits are set without the corresponding maximum digits.
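To make that concrete: the original example picks "1.0" because the default 21-significant-digit maximum out-precisions the default fraction maximum; once both maximums are given explicitly, the outcome lines up with intuition again (a sketch; results as I understand the spec):

```js
new Intl.NumberFormat('en', {
  useGrouping: false,
  roundingPriority: "morePrecision",
  minimumSignificantDigits: 2, maximumSignificantDigits: 2,
  minimumFractionDigits: 2, maximumFractionDigits: 2,
}).format(1); // "1.00" -- the 2-fraction-digit maximum is now the more precise bound, so it wins
```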


This algorithm was chosen because it solves our known use cases and reads cleanly in the spec. That said, I am open to exploring alternative algorithms, although doing so would need to meet a high bar since this is a Stage 3 proposal.

ptomato commented 2 years ago

I'm not sure anything would need to change if only the meaning of "more precision" were explained more clearly in the documentation. Maybe something like: roundingPriority: "morePrecision" means that the result with more digits of maximum potential precision wins the conflict, regardless of the minimum number of digits?

Alternatively, if you did explore alternative algorithms, for reference here is roughly what I expected when I read the current documentation:
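Roughly: format the value under each set of constraints separately and let whichever candidate ends up with more digits win, so trailing zeros requested by a minimum-digits setting also count toward "precision". A rough sketch (the helper below is just illustrative, not proposed spec text):

```js
// Illustrative helper, not spec text: pick whichever candidate carries more digits.
function formatMorePrecision(locale, value, sigOpts, fracOpts) {
  const bySig = new Intl.NumberFormat(locale, { useGrouping: false, ...sigOpts }).format(value);
  const byFrac = new Intl.NumberFormat(locale, { useGrouping: false, ...fracOpts }).format(value);
  const digits = (s) => s.replace(/\D/g, "").length;
  return digits(bySig) >= digits(byFrac) ? bySig : byFrac;
}

// With the options from the example above, "1.00" (3 digits) would beat "1.0" (2 digits):
formatMorePrecision('en', 1,
  { minimumSignificantDigits: 2 },
  { minimumFractionDigits: 2 }); // "1.00"
```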

sffc commented 2 years ago

TC39-TG2 discussion: https://github.com/tc39/ecma402/blob/master/meetings/notes-2022-06-16.md#clarification-on-what-precision-means-in-roundingpriority-moreprecisionlessprecision-96

I plan to draft up the change both here and in ICU to make sure that it behaves the way we expect and satisfies all of the original use cases.

sffc commented 1 year ago

OK, I drafted https://github.com/unicode-org/icu/pull/2260 in ICU to see the impact of the proposed change.

What I found: it appears that all existing ICU tests pass both before and after this change (they keep the current behavior), so no test expectations needed to be updated. I added a new test that demonstrates the change in behavior.

This tells me two things:

  1. If we make the change, it will continue to satisfy the original intent and use cases
  2. If we make the change, it is unlikely to have a big impact, since the affected behavior is hard to reproduce in practice

We already know from experience that there really isn't a strong use case behind setting minimum digits without setting maximum digits. If you set both, this is a non-issue. The test262 test is just fairly esoteric.

I am therefore inclined to update the docs without touching the spec or implementation.