Closed edigiacomo closed 3 years ago
Looking at the B table, the minimum acceptable value for `B13011` seems to be -1: are you sure -9 is an acceptable value?
Looking at the B table, the minimum acceptable value for `B13011` seems to me to be -1 * 10^1 = -10.
> Looking at the B table, the minimum acceptable value for `B13011` seems to be -1: are you sure -9 is an acceptable value?
Sorry Enrico, I didn't comment the code and I was unclear: wreport converts `"-1"` (string) to `-0.1` (number), so `"-9"` should be `-0.9` (that is an acceptable value). It seems that the interval check is made before applying the scale.
So the problem is not in the boundary checking, but in how the allowed variable range is computed, and it has been computed like this for a long, long time. In my previous message I was taking for granted that this calculation was correct; I'll now double check it: it might be that it's not.
Could it be that this is the first time we find problems with range of negative values?
The minimum value that can be represented by a BUFR variable, encoded in binary, is always 0.
The B table entry for `B13011` has `scale=1`, `reference=-1`, `length=14`.
If we convert 0 to a decimal value using the information in the B table, we have:
(0 + reference) / 10^scale = (0 + (-1)) / 10^1 = -1/10 = -0.1
It looks to me like the minimum value in the range is indeed -0.1 and not -1.
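As a sanity check, the decoding rule can be replayed with plain arithmetic (the B-table values are hard-coded here for illustration; this is not the wreport API):

```python
# B-table entry for B13011 (Total precipitation): scale=1, reference=-1
scale, reference = 1, -1

def decode(raw):
    """Convert a raw (binary) BUFR value to its decimal value."""
    return (raw + reference) / 10 ** scale

# The minimum raw value is 0, so the minimum decimal value is:
print(decode(0))  # -0.1
```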
From the BUFR documentation:
> For BUFR, the reference value is a number to be subtracted from the data after multiplication by the scale factor (if any) but before encoding into Section 4 in order to produce a positive value in all cases. For example, south latitude is negative before applying the reference value. If a position of 35.50 degrees south latitude were being encoded, multiplying -35.50 by 100 (scale of 2) would produce -3550. Subtracting the reference value of -9000 would give a value of 5450 that would be encoded in Section 4. To obtain the original value in decoding Section 4, adding back the -9000 reference value to 5450 would result in -3550, then dividing by the scale (100) would obtain -35.50.
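The latitude example in the quote can be replayed step by step, using the scale of 2 and reference of -9000 that the quoted documentation gives:

```python
scale, reference = 2, -9000

# Encoding 35.50 degrees south latitude:
value = -35.50
scaled = round(value * 10 ** scale)   # -35.50 * 100 = -3550
encoded = scaled - reference          # -3550 - (-9000) = 5450
print(encoded)                        # 5450

# Decoding adds the reference back and divides by the scale factor:
decoded = (encoded + reference) / 10 ** scale
print(decoded)                        # -35.5
```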
Following this, if we encode -1 as B13011, we have:
-1 * 10^scale - reference = -1 * 10^1 - (-1) = -10 + 1 = -9
which is not encodable, since the result of the encoding process must always be positive or 0.
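Following the same logic, the whole acceptable range can be derived from the B-table entry. With `length=14` the raw values run from 0 to 2^14 - 2 (assuming, as is usual in BUFR, that the all-ones pattern is reserved for missing values), which reproduces the [-1,16381] range shown in the error message quoted below:

```python
scale, reference, length = 1, -1, 14

# Raw encoded values: 0 .. 2**length - 2
# (the all-ones pattern 2**length - 1 is assumed reserved for "missing").
raw_min, raw_max = 0, 2 ** length - 2

# Unscaled integer range, as reported in the error message:
unscaled_min = raw_min + reference    # -1
unscaled_max = raw_max + reference    # 16381

# Decimal range:
print(unscaled_min / 10 ** scale)     # -0.1
print(unscaled_max / 10 ** scale)     # 1638.1
```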
Let me know if I followed some of these steps incorrectly.
Sorry, I thought that the range was correct; if the right lower bound is -0.1, then the error with `"-9"` is consistent.
And now I understand the error message:
`OverflowError: Value -9 is outside the range [-1,16381] for 013011 (Total precipitation/total water equivalent)`
that could be written as:
`OverflowError: Value -0.9 is outside the range [-0.1,1638.1] for 013011 (Total precipitation/total water equivalent)`
Last question: using a string or integer value for a numeric var means that the scale is not applied, so that `"12"` = `12` = `1.2`. Is this behaviour documented?
It doesn't seem to be documented. This behaviour is unchanged since the very first implementations 15 years or so ago, and I guess it was always taken for granted.
Setting and getting decimal values as integers is the interface one can use to deal with exact values regardless of limitations of floating point formats.
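Concretely, the `"12"` = `12` = `1.2` relation from the question comes down to this arithmetic, shown here in plain Python with B13011's `scale=1` (an illustration of the conversion, not the wreport API itself):

```python
scale = 1  # B-table scale for B13011

# A string or integer value is taken as the unscaled internal integer...
unscaled = int("12")
# ...whose decimal value is obtained by dividing by 10**scale:
decimal = unscaled / 10 ** scale
print(decimal)  # 1.2
```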
I'll now try to update the documentation of the various getters/setters.
I made an attempt at documentation at https://github.com/ARPA-SIMC/wreport/commit/8c9aa727b02a8d4d6f597e26c10aacbd152af83c
As usual, my head is so deep in the backend that I have a hard time explaining things from the library user's point of view. I could use a review of the explanations, ideally with suggestions for better wording.
The idea is that getting a decimal value as an int without using the unscaled version, makes no sense: it'd actually have the decimal component truncated, or easily go out of bounds of the integer representation.
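A tiny illustration of the truncation problem, with plain Python and illustrative values (not the wreport API): the decimal value corresponding to the unscaled integer -9 of B13011 is -0.9, and truncating it back into an int loses it entirely.

```python
scale = 1
unscaled = -9                      # internal, unscaled representation
decimal = unscaled / 10 ** scale   # -0.9, the decimal value

# Truncating the decimal into an int discards the fractional part...
print(int(decimal))   # 0
# ...while the unscaled integer preserves the value exactly:
print(unscaled)       # -9
```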
Working with doubles/floats, however, has its limitations, since they are approximate representations. A famous example:
>>> print(1.1+2.2)
3.3000000000000003
>>> print(1.1+2.2==3.3)
False
If one wants to, for example, index coordinates by latitude and longitude, scaling the decimals by a power of ten representing the wanted accuracy might be a better strategy to get good matches in an index. Since BUFR/CREX variables already have a well-specified number of digits of accuracy, since wreport stores all values unscaled internally, and since it doesn't make sense to use `seti`/`enqi` to work with unscaled values shoehorned into ints, we have always used them as accessors for that internal representation.
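The indexing strategy mentioned above can be sketched without wreport at all: scale the coordinates to integers at the accuracy you need and use those as keys (the 10^5 factor below is an illustrative choice, not something mandated by BUFR):

```python
SCALE = 10 ** 5  # five decimal digits of accuracy (illustrative)

def key(lat, lon):
    """Quantize a coordinate pair to scaled integers usable as a dict key."""
    return round(lat * SCALE), round(lon * SCALE)

index = {key(3.3, 11.3): "some station"}

# The floats 1.1 + 2.2 and 3.3 differ, but the scaled keys still match:
print((1.1 + 2.2) == 3.3)             # False
print(key(1.1 + 2.2, 11.3) in index)  # True
```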
The explanation is very clear, thank you! I will regenerate the online documentation as soon as possible.
From my point of view we could close the issue; I'm assigning it to @pat1.
I heard @pat1 on the phone and we can close the issue.
It seems that the scale is applied after checking the range: `print('"-9" =', wreport.Var(table["B13011"], "-9"))` should print `-0.9` instead of throwing an error.