google / TensorNetwork

A library for easy and efficient manipulation of tensor networks.

Add power method and test to numpy backend. #849

Closed: wkostuch closed this pull request 4 years ago

wkostuch commented 4 years ago

I'm a little unsure about the test method for power.

Due to how np.power works, if you have a negative base and raise it to a non-integer power, you get np.nan. Since the tests use backend.randn, it's very likely that a negative base will be raised to a non-integer power, triggering a runtime warning.

The current "solution" I have is to take the absolute value of the base tensor, ensuring the (-x)**(not-an-integer) situation never occurs. Is there a better way to deal with the np.power quirk or does this suffice?
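For reference, a minimal sketch of the quirk and of the absolute-value workaround; the base array here is illustrative, not the actual test code:

```python
import numpy as np

base = np.random.randn(4)   # randn makes negative entries likely
exponent = 1.5              # a non-integer power

# Negative base ** non-integer power -> nan, plus a RuntimeWarning.
print(np.power(base, exponent))          # e.g. [0.41  nan  1.83  nan]

# Workaround used in the test: take abs first so the base is never negative.
print(np.power(np.abs(base), exponent))  # always finite for real input
```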

Appreciate the feedback.

codecov-io commented 4 years ago

Codecov Report

No coverage uploaded for pull request base (master@8451862). The diff coverage is 100.00%.


@@            Coverage Diff            @@
##             master     #849   +/-   ##
=========================================
  Coverage          ?   98.37%           
=========================================
  Files             ?      129           
  Lines             ?    22124           
  Branches          ?        0           
=========================================
  Hits              ?    21764           
  Misses            ?      360           
  Partials          ?        0           
Impacted Files                                        Coverage Δ
tensornetwork/backends/numpy/numpy_backend.py         98.02% <100.00%> (ø)
tensornetwork/backends/numpy/numpy_backend_test.py    99.29% <100.00%> (ø)


alewis commented 4 years ago

Right, @mganahl @Thenerdstation, I think we need to make a ruling on what to do in this case.

alewis commented 4 years ago

I think taking the absolute value of the input is not the way to go, since this is actually more surprising than NaN; at least NaN will make your program crash.

mganahl commented 4 years ago

I think we should stick to whatever the backends are doing. Why add power only to numpy and not to the other backends as well?

alewis commented 4 years ago

We're gonna add it to all of them, just not in the same PR


alewis commented 4 years ago

If we stick to what the backends are doing, the behaviour of our programs will change as the backend does, which I thought we were trying to avoid. Is that OK, do you think?


wkostuch commented 4 years ago

In case this is helpful to have on hand:

TensorFlow's power method throws an error when a negative float is raised to a non-integer power: tensorflow.python.framework.errors_impl.InvalidArgumentError: cannot compute Pow as input #1(zero-based) was expected to be a int32 tensor but is a float tensor [Op:Pow]

PyTorch's power method does the same as NumPy's and silently returns nan values: tensor([0.3481, 4.2450, nan, nan])
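A minimal sketch of the NumPy and PyTorch behavior described above, assuming torch is available (values illustrative):

```python
import numpy as np
import torch

base = np.array([0.9, 1.7, -0.3, -1.2])

# NumPy: negative float ** non-integer power -> nan (with a RuntimeWarning).
print(np.power(base, 1.5))                     # [0.8538 2.2165    nan    nan]

# PyTorch mirrors NumPy and silently returns nan for the negative entries.
print(torch.pow(torch.from_numpy(base), 1.5))  # tensor([0.8538, 2.2165, nan, nan], dtype=torch.float64)
```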

mganahl commented 4 years ago

> If we stick to what the backends are doing, the behaviour of our programs will change as the backend does, which I thought we were trying to avoid. Is that OK, do you think?

NumPy returns nan because raising a negative float to a fractional power would change the dtype to complex, and NumPy won't do that silently; np.power(complex(-1), 0.1) works fine.

As per @wkostuch's comment, it seems all backends behave similarly in this respect, so I'd just leave it that way. Otherwise we'd have to violate type preservation (which in NumPy would lead to propagating dtypes), and that can cause a whole bunch of issues itself.
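A quick illustration of the dtype point (a sketch; outputs rounded):

```python
import numpy as np

# nan: the mathematically correct result is complex, and NumPy will not
# silently promote the float input to a complex output dtype.
print(np.power(-1.0, 0.1))         # nan (with a RuntimeWarning)

# Fine once the input is already complex; the complex dtype is preserved.
print(np.power(complex(-1), 0.1))  # (0.9511+0.3090j)
```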

alewis commented 4 years ago

Ok, SGTM


mganahl commented 4 years ago

Hey @wkostuch and @alewis, how are the other backends coming?

wkostuch commented 4 years ago

@mganahl, I was hoping to get around to them both this weekend. I've been swamped with school and exams the last little while so I haven't had time to do this. Sorry 'bout that!