Open mganahl opened 4 years ago
TensorFlow can accept NumPy types, but PyTorch can't. That's why this line fails for me.
It works with dtype=torch.float64
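For context, a minimal sketch of the mismatch (shapes and variable names are illustrative; the PyTorch import is guarded so the snippet runs even without it installed):

```python
import numpy as np

# NumPy (and, per the discussion, TensorFlow/JAX) accept NumPy dtypes directly:
a = np.zeros((2, 2), dtype=np.float64)

# PyTorch rejects NumPy dtypes: torch.zeros((2, 2), dtype=np.float64)
# raises TypeError, whereas dtype=torch.float64 works.
try:
    import torch
    try:
        torch.zeros((2, 2), dtype=np.float64)
        pytorch_accepts_np_dtype = True
    except TypeError:
        pytorch_accepts_np_dtype = False
except ImportError:
    pytorch_accepts_np_dtype = None  # PyTorch not installed; nothing to show
```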
that's right, but it should be consistent
Eh, this sounds more like a PyTorch issue than an issue on our side. I assume you hit this while writing a test?
That's right. I think it would be more convenient to use NumPy dtypes throughout for initialization, but I'm happy to leave it as is (though the error message that's thrown is quite cryptic, as far as I remember, and should at least be fixed).
How about initializing a NumPy array and creating a tensor with torch.from_numpy?
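A sketch of that workaround (the function name `zeros_via_numpy` is hypothetical; the PyTorch import is guarded so the snippet runs without it installed):

```python
import numpy as np

def zeros_via_numpy(shape, dtype):
    # Build the array with NumPy, which accepts NumPy dtypes directly.
    arr = np.zeros(shape, dtype=dtype)
    try:
        import torch
        # torch.from_numpy shares memory with arr and maps the NumPy
        # dtype to the matching torch dtype (np.float64 -> torch.float64).
        return torch.from_numpy(arr)
    except ImportError:
        return arr  # fall back for environments without PyTorch
```

One caveat: the tensor returned by torch.from_numpy shares memory with the NumPy array, so a copy may be needed if the initializer is supposed to own its buffer.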
Initialization of Tensor is currently done through backend functions (which I think is a good design). Those really should take dtypes matching the backend. We could add a backend function convert_to_dtype that takes a NumPy dtype and converts it to the backend dtype; we would then call it on the input dtype before passing it to the backend initialization function.
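A minimal sketch of what such a convert_to_dtype could look like for the PyTorch backend. The mapping below uses dtype names as strings so the sketch runs without PyTorch installed; a real implementation would return torch dtypes such as torch.float64:

```python
import numpy as np

# Hypothetical NumPy -> PyTorch dtype table; extend as needed.
_NP_TO_TORCH = {
    np.dtype(np.float32): "torch.float32",
    np.dtype(np.float64): "torch.float64",
    np.dtype(np.complex64): "torch.complex64",
    np.dtype(np.complex128): "torch.complex128",
    np.dtype(np.int32): "torch.int32",
    np.dtype(np.int64): "torch.int64",
}

def convert_to_dtype(dtype):
    # np.dtype() normalizes inputs like np.float64, "float64", or
    # np.dtype("float64") to a canonical key before lookup.
    key = np.dtype(dtype)
    if key not in _NP_TO_TORCH:
        raise TypeError(f"dtype {dtype!r} is not supported by this backend")
    return _NP_TO_TORCH[key]
```

Raising a clear TypeError here would also address the cryptic error message mentioned above.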
This is indeed the intended behaviour, like it or not (as it was for kron), but I'm happy to change it. I don't like silently converting dtypes if the backend doesn't, though, and would suggest we use Martin's solution of an explicit conversion.
Adam
If you just want to write tests, the various tests in linalg/tests demonstrate some workarounds.
Tensor initialization methods for PyTorch currently break for this type of initialization (it works for JAX and TensorFlow). @alewis can you take a look?