francof2a / fxpmath

A python library for fractional fixed-point (base 2) arithmetic and binary manipulation with Numpy compatibility.
MIT License

Possible bug: Created corrupted value which breaks arithmetic #38

Closed · Pet3ris closed this 3 years ago

Pet3ris commented 3 years ago
from fxpmath import Fxp

def test_pure_q96():
    x = Fxp(1.0001, dtype="Q64.96") ** (Fxp(4055, dtype="Q64.96") / 2)

    print(x)
    print(x/2)

The output is:

1.2247538232335937
2.036149027162537e-13

The first value seems right, but shouldn't the second be half of the first?

francof2a commented 3 years ago

The second output is wrong because the size of x is fxp-s54/52 (Q2.52). x gets this new size because the exponent is not an integer: in that case, fxpmath performs the power calculation in floating point and then searches for the optimal size for the result. It works this way because the number of fractional bits of the output can't be determined beforehand.

When x/2 is performed, the constant 2 is considered to have the same size as the Fxp variable. But the value 2 can't be represented by a Fxp object with 1 bit for the integer part (remember that x has Q2.52 size), so it saturates.
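You can see both effects directly (a quick sketch; x.dtype and the saturating default overflow behavior are standard fxpmath features):

from fxpmath import Fxp

x = Fxp(1.0001, dtype="Q64.96") ** (Fxp(4055, dtype="Q64.96") / 2)
print(x.dtype)    # fxp-s54/52, i.e. Q2.52

# forcing the constant 2 into Q2.52 saturates it to the largest representable value
two = Fxp(2, dtype="Q2.52")
print(two)        # ~1.9999999999999998
print(two.upper)  # upper bound of the Q2.52 range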

I think that an output with size Q64.96 would be useful for x in your case. If I'm right, the following example should work for you:

x = Fxp(1.0001, dtype="Q64.96", op_out_like=Fxp(None, dtype="Q64.96")) ** (Fxp(4055, dtype="Q64.96") / 2)

print(x)
print(x/2)

The outputs are:

1.2247538232335937
0.6123769116167969

The parameter op_out_like is a field of config, a data structure that configures the behavior of a Fxp object. op_out_like defines the format of the Fxp object that holds the result of an arithmetic operation.
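If you prefer, the same option can be set on an existing object through its config (a sketch; I'm assuming here that op_out_like is writable via the config attribute, which is how fxpmath exposes its config fields):

from fxpmath import Fxp

base = Fxp(1.0001, dtype="Q64.96")
base.config.op_out_like = Fxp(None, dtype="Q64.96")  # results of base's operations use Q64.96

x = base ** (Fxp(4055, dtype="Q64.96") / 2)
print(x / 2)  # 0.6123769116167969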

You could also set a config template for all Fxp objects.

There are more ways to get the result with a particular size.
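For instance, one alternative (a sketch, not an official recipe): let the power run at its optimal size, then re-quantize the result to Q64.96 before continuing. Going through float here loses nothing, because the fractional-exponent power was computed in float anyway:

from fxpmath import Fxp

y = Fxp(1.0001, dtype="Q64.96") ** (Fxp(4055, dtype="Q64.96") / 2)  # comes back as Q2.52
x = Fxp(float(y), dtype="Q64.96")  # cast the value into the desired format
print(x / 2)  # 0.6123769116167969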

Pet3ris commented 3 years ago

This is helpful @francof2a, thank you. So is there no built-in pow function that operates directly on Q64.96? Also, someone on StackOverflow suggested that Q64.96 could not be represented due to numpy limitations. Is that also true, or is there no underlying issue with using Q64.96?

francof2a commented 3 years ago

You're welcome @Pet3ris.

It's true that numpy has a limitation dealing with precision greater than 64 bits, because its kernel is based on float calculation. In some cases you can use the object dtype with numpy, but that is a python operation under the hood.

fxpmath deals with that problem and does calculations using fixed-point arithmetic whenever the fractional precision of the result can be resolved. In your example, 4055/2 = 2027.5 is not an integer, so a root calculation is implicit. In general, a fractional exponent (it could even be an integer value stored in a Fxp object with n_frac != 0) implies a float calculation to get the result and an optimal number of fractional bits to store it.
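A quick way to see the two paths (a sketch; the exact output sizes follow the optimal-sizing behavior described below):

from fxpmath import Fxp

base = Fxp(1.0001, dtype="Q64.96")

# integer exponent: exact fixed-point power, size grows with the exponent
print((base ** Fxp(2, n_frac=0)).dtype)

# fractional exponent: float fallback, then an optimal size is chosen for the result
print((base ** Fxp(2027.5)).dtype)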

In other words, fxpmath can do Q64.96 arithmetic using fixed-point calculations whenever the precision of the result can be resolved. But some limitations exist, depending on whether python can solve the required int calculations; for example:

Fxp(1.0001, dtype="Q64.96")**(Fxp(16, n_frac=0))  # n_frac=0 could be omitted 

will raise an error (OverflowError: (34, 'Numerical result out of range')) because of internal calculations that python cannot solve (maybe there are workarounds, but for now they are not implemented in fxpmath).

Why is there an error? By default, fxpmath uses the optimal size for the output, considering the largest number that could be stored in Q64.96 raised to the largest number in Q6.0 (the Fxp created to store the exponent equal to 16)... well, that number is huge and python can not deal with it. Another example:

Fxp(1.0001, dtype="Q64.96")**(Fxp(4, n_frac=0))

fxp-s638/384(1.000400060004)

Note that the result is a Q254.384, also huge, but python can deal with it.
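As a sanity check (my reading of the sizes above, not something stated in the docs): with optimal sizing, raising Q64.96 to the power 4 needs 4 × 96 = 384 fractional bits, and the integer part scales with the exponent as well (254 ≈ 4 × 64 here), which matches the Q254.384 result. With exponent 16 the required word would be several thousand bits, which is where the internal calculation blows up.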

I guess you don't need that precision increase, and the result can keep Q64.96. In that case you won't have all these problems, but you have to configure fxpmath to keep that size.

Fxp(1.0001, dtype="Q64.96", op_sizing='same')**(Fxp(16))

fxp-s160/96(1.0016012005601818)

Now the calculation is performed! And the result has dtype = Q64.96. If we try your example:

Fxp(1.0001, dtype="Q64.96", op_sizing='same')**(Fxp(4055/2))

fxp-s160/96(1.2247538232335937)

We get the right result, and the calculation was performed in fixed point, not using floats.

There are some important considerations here. Let's do next:

# forcing result to use same size that operand: op_sizing='same'
z = Fxp(1.0001, dtype="Q64.96", op_sizing='same')**(Fxp(4055/2))
z.bin(frac_dot=True)

# configure to store result in a Fxp with arbitrary size: op_out_like=Fxp(None, dtype="Q64.96")
x = Fxp(1.0001, dtype="Q64.96", op_out_like=Fxp(None, dtype="Q64.96")) ** (Fxp(4055, dtype="Q64.96") / 2)
x.bin(frac_dot=True)

'0000000000000000000000000000000000000000000000000000000000000001.001110011000100101110111011100000111000001110010101100001011011000010101000000011101100101000010'

'0000000000000000000000000000000000000000000000000000000000000001.001110011000100101110111011100000111000001110010101100000000000000000000000000000000000000000000'

Compare the last bits of both results: you'll see that the second computation (x) doesn't have the full Q64.96 precision, because of numpy's 64-bit float calculation.
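You can quantify the gap (a sketch; the magnitude is my estimate from the bit patterns above):

print(float(z - x))  # tiny, roughly 1e-16: the difference sits below float64 resolution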

One last important thing: note that I used Fxp(4055/2) instead of Fxp(4055, dtype="Q64.96") / 2, and that is because the first one only uses a Q12.1, demanding less computation time. You may experience calculation-time problems using op_sizing='same' with a Q64.96 exponent, due to the huge size of its fractional part.

%timeit x = Fxp(1.0001, dtype="Q64.96", op_sizing='same') ** (Fxp(4055, dtype="Q64.8") / 2)

34.8 s ± 249 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

A lot of time! I am quite convinced that this can be improved with a better implementation of the power function!

A tip: if you want to work with Fxp objects that all have op_sizing='same', you can configure a template:

Fxp.template = Fxp(dtype='Q64.96', op_sizing='same')

# from here on, every Fxp created without an explicit size will be `Q64.96` with op_sizing='same'

z = Fxp(1.0001) # same as z = Fxp(1.0001, dtype="Q64.96", op_sizing='same')
# ...
z = Fxp(1.0001)**(Fxp(4055/2, dtype='Q16.2'))
z.info(3)

Fxp.template = None # disable template

Pet3ris commented 3 years ago

Thank you @francof2a, this is a fantastic reference and I love the template approach! It's good to know that the precision can be forced, as that's exactly what I was looking for (since I'm modelling an environment where floating-point numbers are not available, but large integers are).

francof2a commented 3 years ago

Great! I saw the StackOverflow question about this. Is it okay with you if I write an answer referring to this issue?

If you consider that the problem is solved for you, please close the issue, or feel free to ask about any remaining doubts.

Thanks for using fxpmath and posting the issue!

Pet3ris commented 3 years ago

More than OK and I will definitely be accepting this amazing answer :).