Closed ilan-gold closed 1 year ago
Or rather, my use-case is one real-valued argument and one complex.
It seems like one option is just flipping the order of operations here, unless I am mistaken.
Ah, I see what's happening. `a` inside `multiply_in_place` gets its dtype from the first `a`, so it's trying to write complex numbers to a float tensor in-place. It should work if you set the dtype of the first `a` to `torch.cdouble`, with the imaginary part set to 0.
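A minimal sketch of the situation described above, assuming the in-place multiply is equivalent to `Tensor.mul_` (the names here are illustrative, not from the original code):

```python
import torch

# A float tensor and a complex tensor.
a = torch.ones(3)                        # default float dtype
b = torch.tensor([1 + 2j, 3j, 1 + 0j])  # complex dtype

# An in-place multiply writes the result into `a`, so the result must be
# castable to `a`'s dtype. Complex -> float is not, and this raises a
# RuntimeError ("result type ComplexFloat can't be cast to ... Float"):
try:
    a.mul_(b)
except RuntimeError as e:
    print(e)

# Suggested fix: make the first operand complex up front, with the
# imaginary part implicitly zero.
a = torch.ones(3, dtype=torch.cdouble)
a.mul_(b)  # now succeeds; `a` holds the complex products
```

Flipping the order of operations, as mentioned earlier in the thread, works for the same reason: writing into the complex operand instead of the float one means no lossy cast is needed.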
Did that help?
I did not end up trying that, but flipping my operations did, so I just went with that. Thank you for the reply though! Much appreciated!
Hello! This seems like it would be a huge boost for me, but I need support for complex numbers - any chance I could make a PR? Right now I'm getting: