google / TensorNetwork

A library for easy and efficient manipulation of tensor networks.
Apache License 2.0

Tensor initialization with dtype and pytorch backend breaks #817

Open mganahl opened 4 years ago

mganahl commented 4 years ago

Tensor initialization methods for pytorch currently break for this type of initialization:

t = tn.random_uniform((2,2), dtype=np.float64, backend='pytorch') 

(The same call works for the JAX and TensorFlow backends.) @alewis can you take a look?

gaxler commented 4 years ago

TensorFlow accepts NumPy dtypes, but PyTorch doesn't. That's why this line fails for me.

It works with dtype=torch.float64.

mganahl commented 4 years ago

That's right, but the behavior should be consistent across backends.

chaserileyroberts commented 4 years ago

Eh, this sounds more like a PyTorch issue than an issue on our side. I assume you hit this while writing a test?

mganahl commented 4 years ago

That's right. I think it would be more convenient to use NumPy dtypes throughout for initialization, but I'm happy to leave it as is. The error message that's thrown is quite cryptic, though, as far as I remember, and should at least be fixed.

gaxler commented 4 years ago

How about initializing a numpy array and creating a tensor with torch.from_numpy?
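The suggestion above can be sketched without PyTorch installed: build the array with NumPy (where NumPy dtypes are always valid) and hand the result to the backend. Note that random_uniform_via_numpy is a hypothetical helper, not the library's API; with PyTorch available, the final step would be torch.from_numpy(arr), which infers the matching torch dtype automatically.

```python
import numpy as np

def random_uniform_via_numpy(shape, dtype=np.float64, seed=None):
    """Sketch of the from_numpy approach: sample with NumPy, then let the
    backend convert. A PyTorch backend would return torch.from_numpy(arr),
    which maps np.float64 -> torch.float64 and shares the underlying memory."""
    rng = np.random.default_rng(seed)
    arr = rng.uniform(size=shape).astype(dtype)
    # return torch.from_numpy(arr)  # backend-specific conversion step
    return arr

a = random_uniform_via_numpy((2, 2), dtype=np.float64, seed=0)
```

A trade-off of this route is that sampling always happens on the CPU via NumPy, so backend-native RNG (e.g. on GPU) is bypassed.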

mganahl commented 4 years ago

Initialization of Tensor is currently done through backend functions (which I think is a good design), and those should take dtypes matching the backend. We could add a backend function convert_to_dtype that takes a NumPy dtype and converts it to the backend dtype; we would then call it on the input dtype before passing it to the backend initialization function.
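A minimal sketch of the proposed convert_to_dtype for a PyTorch-style backend, using only NumPy so it runs anywhere. The dtype names on the right are the torch attribute names; a real implementation would return getattr(torch, name) instead of the string. The mapping table and error message are illustrative, not the library's actual code.

```python
import numpy as np

# Hypothetical NumPy-dtype -> torch-dtype-name table for a pytorch backend.
_NP_TO_TORCH_NAME = {
    np.dtype(np.float32): "float32",
    np.dtype(np.float64): "float64",
    np.dtype(np.complex64): "complex64",
    np.dtype(np.complex128): "complex128",
    np.dtype(np.int32): "int32",
    np.dtype(np.int64): "int64",
}

def convert_to_dtype(dtype):
    """Map a NumPy dtype (or anything np.dtype accepts, e.g. 'float64')
    to the backend dtype. Here we return the torch dtype name as a string;
    a real backend would return the torch dtype object itself."""
    try:
        return _NP_TO_TORCH_NAME[np.dtype(dtype)]
    except KeyError:
        raise TypeError(f"dtype {dtype!r} is not supported by the pytorch backend")
```

Failing loudly on unsupported dtypes (rather than converting silently) keeps the explicit-conversion behavior Adam asks for below.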

alewis commented 4 years ago

This is indeed the intended behaviour, like it or not (as it was for kron), but I'm happy to change it. I don't like silently converting dtypes if the backend doesn't, though, and would suggest we use Martin's solution of an explicit conversion.


alewis commented 4 years ago

If you just want to write tests, the various tests in linalg/tests demonstrate some workarounds.
