w3c / qt3tests

Tests for XPath and XQuery

Decimal digit precision and op-numeric-dividedec2args-4 #50

Open benibela opened 1 year ago

benibela commented 1 year ago

About the xs:decimal type, the spec says:

For the xs:decimal type, the maximum number of decimal digits (totalDigits facet) MUST be at least 18. This limit SHOULD be at least 20 digits in order to accommodate the full range of values of built-in subtypes of xs:integer, such as xs:long and xs:unsignedLong.

Thus it is correct to perform all calculations with 18-digit total precision.

But an expected outcome of op-numeric-dividedec2args-4 is -1.619760582531006901, which has 18-digit fraction precision but 19-digit total precision.
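
For illustration, here is a minimal sketch of the arithmetic behind that test, using Python's decimal module as a stand-in (the 30-digit working precision is an arbitrary choice for the sketch, not something the spec prescribes):

```python
from decimal import Decimal, getcontext, ROUND_HALF_EVEN

a = Decimal("-999999999999999999")   # 18-digit dividend from the test
b = Decimal("617375191608514839")    # 18-digit divisor from the test

getcontext().prec = 30               # working precision chosen for this sketch
q = a / b                            # the xs:decimal division

# fn:round-half-to-even($q, 18): round to 18 digits after the decimal point
r = q.quantize(Decimal("1E-18"), rounding=ROUND_HALF_EVEN)
print(r)   # -1.619760582531006901  (18 fraction digits, 19 total digits)
```

A processor limited to 18 total digits cannot represent that 19-digit value exactly.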

michaelhkay commented 1 year ago

I'm afraid we inherited this muddle from XSD.

XSD 1.0 part 2 states (for xs:decimal) "All [·minimally conforming·] processors [·must·] support decimal numbers with a minimum of 18 decimal digits (i.e., with a [·totalDigits·] of 18)"

But xs:unsignedLong is a subtype of xs:decimal and has a maximum value of 18 446 744 073 709 551 615, which is 20 digits. So the limit of 18 is clearly nonsense.
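
A quick check of that digit count:

```python
# xs:unsignedLong has a maximum value of 2**64 - 1
max_unsigned_long = 2**64 - 1
print(max_unsigned_long)             # 18446744073709551615
print(len(str(max_unsigned_long)))   # 20
```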

In XSD 1.1 the limit is reduced to 16; see §5.4:

All [·minimally conforming·] processors must support [decimal] values whose absolute value can be expressed as i / 10^k, where i and k are nonnegative integers such that i < 10^16 and k ≤ 16 (i.e., those expressible with sixteen total digits).

This seems to be a complete mess. How can the value space of xs:unsignedLong be a restriction of the value space of xs:decimal if xs:decimal only supports 16 digits?

It is not correct to assume that because implementations MUST support 18 digits, they MUST perform all calculations with 18-digit precision. They MAY support more than 18 digits, and therefore they MAY use greater precision in arithmetic.

The test you refer to actually reads:

<test>fn:round-half-to-even((xs:decimal("-999999999999999999") div xs:decimal("617375191608514839")),18)</test>
<result>
   <any-of>
      <assert-eq>-1.619760582531006901</assert-eq>
      <assert-eq>-1.619760582531</assert-eq>
   </any-of>
</result>

Certainly the first result is legitimate: a processor is allowed to perform the division to any precision it chooses, and the test then reduces this to 18 digits after the decimal point. I'm having more trouble understanding the second result; presumably it's there because some implementer submitted it as a valid result and it was accepted. I can't see why it's valid, but I can see why results with intermediate precision might also be legitimate.
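
By way of illustration only, here is a sketch (again with Python's decimal module standing in for a processor's implementation-defined decimal arithmetic) of how the working precision chosen for the division determines how many digits come back; fn:round-half-to-even($q, 18) then leaves the value unchanged once it already has 18 or fewer digits after the decimal point:

```python
from decimal import Decimal, localcontext

a = Decimal("-999999999999999999")
b = Decimal("617375191608514839")

# Divide at several hypothetical working precisions (significant digits);
# the subsequent rounding to 18 fraction digits has nothing left to trim
# for the shorter results, so the printed string is what a processor with
# that precision would report.
for prec in (16, 18, 20, 30):
    with localcontext() as ctx:
        ctx.prec = prec
        print(prec, a / b)
```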