google / TensorNetwork

A library for easy and efficient manipulation of tensor networks.
Apache License 2.0

Implemented Cholesky decomposition to numpy and tensorflow backends #890

Open LuiGiovanni opened 3 years ago

LuiGiovanni commented 3 years ago

Added the decomposition function following the idea of having a pivot on the tensor, in reference to issue #852. The tensor is reshaped into a matrix around the pivot and passed to the Cholesky function from NumPy's linear algebra module, which you may find here for reference.
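For reference, here is a minimal sketch of the pivot-and-reshape idea, assuming a plain NumPy backend; the function name, signature, and reshape details are illustrative and not necessarily the exact code in this PR:

```python
# Illustrative sketch only, not the exact code in this PR: a pivot-based
# Cholesky decomposition for a NumPy-backed tensor. The tensor is reshaped
# into a (square) matrix around `pivot_axis`, factored, and reshaped back.
import numpy as np


def cholesky(tensor: np.ndarray, pivot_axis: int) -> np.ndarray:
  """Reshape `tensor` into a matrix at `pivot_axis` and Cholesky-factor it."""
  left_dims = tensor.shape[:pivot_axis]
  right_dims = tensor.shape[pivot_axis:]
  matrix = np.reshape(tensor,
                      (int(np.prod(left_dims)), int(np.prod(right_dims))))
  # np.linalg.cholesky assumes the matrix is Hermitian and positive-definite;
  # no explicit validation is performed here.
  chol = np.linalg.cholesky(matrix)
  return np.reshape(chol, list(left_dims) + list(right_dims))
```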

I will start working on adding the function to the missing backend, which is symmetric, since Jax uses NumPy's decomposition file. I await any feedback you may have, and thank you for the help!

LuiGiovanni commented 3 years ago

@alewis @mganahl Hello! I implemented the Cholesky decomposition in the various backends, following the idea of a pivot value for the tensors, with their respective tests. In the tests I defined a matrix that is guaranteed to be positive-definite and also Hermitian:

[[1, 0], [0, 1]]

I assumed we won't be validating whether the matrices are Hermitian or positive-definite, since there could be some performance cost in doing so (at least for the positive-definite check); an optional Hermitian check would still be nice to have. A rough sketch of the test setup follows below.
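As an illustration only (the test name and structure are assumptions, not the exact test added in this PR), a check against the 2x2 identity could look like this:

```python
# Rough sketch of a test using the 2x2 identity, which is both Hermitian
# and positive-definite. Names are illustrative, not the PR's exact code.
import numpy as np


def test_cholesky_identity():
  matrix = np.array([[1.0, 0.0], [0.0, 1.0]])
  chol = np.linalg.cholesky(matrix)
  # The factor times its conjugate transpose should reconstruct the input.
  np.testing.assert_allclose(chol @ chol.conj().T, matrix)
```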

A Pylint error has been occurring in the test runs, but it is in code I haven't changed. The file in question is tensornetwork/tn_keras/conv2d_mpo.py, which only seems to be affected on Python 3.6 and 3.7. The same issue is occurring in my other PR #889. Freezing the version might fix this, but that is a lazy solution and would not solve it for all cases, so I will wait for your feedback to see what can be done here. Here is an image of the error for reference.

[screenshot of the Pylint error]

mganahl commented 3 years ago

> A Pylint error has been occurring in the test runs, but it is in code I haven't changed. The file in question is tensornetwork/tn_keras/conv2d_mpo.py, which only seems to be affected on Python 3.6 and 3.7. The same issue is occurring in my other PR #889. Freezing the version might fix this, but that is a lazy solution and would not solve it for all cases, so I will wait for your feedback to see what can be done here. Here is an image of the error for reference.

Thanks for pointing this out. Likely this is due to a new tensorflow release. I'll fix it.

codecov-io commented 3 years ago

Codecov Report

Merging #890 (92c931a) into master (f669725) will increase coverage by 0.00%. The diff coverage is 100.00%.


@@           Coverage Diff           @@
##           master     #890   +/-   ##
=======================================
  Coverage   98.00%   98.00%           
=======================================
  Files         129      129           
  Lines       22625    22665   +40     
=======================================
+ Hits        22173    22213   +40     
  Misses        452      452           
Impacted Files | Coverage Δ
--- | ---
tensornetwork/backends/numpy/decompositions.py | 100.00% <100.00%> (ø)
...ensornetwork/backends/numpy/decompositions_test.py | 98.75% <100.00%> (+0.11%) ↑
tensornetwork/backends/pytorch/decompositions.py | 100.00% <100.00%> (ø)
...sornetwork/backends/pytorch/decompositions_test.py | 100.00% <100.00%> (ø)
...ensornetwork/backends/tensorflow/decompositions.py | 100.00% <100.00%> (ø)
...network/backends/tensorflow/decompositions_test.py | 98.96% <100.00%> (+0.08%) ↑

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data. Last update f669725...92c931a.

LuiGiovanni commented 3 years ago

Hi there @mganahl @alewis! In the end I implemented the test following your requested changes and tried them out; I'd like your opinion on them. The biggest difference is shown below:

[screenshot of the updated test]

The random matrix is now wrapped with torch.from_numpy(random_matrix), since PyTorch wasn't reading it as a tensor otherwise. Let me know what you think. Thank you!
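For illustration only (the variable names and the torch.linalg.cholesky call are assumptions, not the exact PR code), the wrapping looks roughly like this:

```python
# Illustrative snippet: a NumPy array must be wrapped as a torch.Tensor
# before the PyTorch backend can factor it.
import numpy as np
import torch

random_matrix = np.eye(2)                  # Hermitian, positive-definite
tensor = torch.from_numpy(random_matrix)   # NumPy array -> torch.Tensor
chol = torch.linalg.cholesky(tensor)       # lower-triangular factor
```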

mganahl commented 3 years ago

Hi @LuiGiovanni, once the comments are fixed we can merge this.