ebelnikola opened this issue 2 months ago
You are right, the problem arises when the entries are not isbits types, in which case they are allocated with undefined entries. In those cases, the implementation in Strided indeed tries to read the data first instead of just assigning to it, which throws an error. In TensorOperations I seem to recall we manually bypassed this by explicitly initialising arrays of non-isbits types with zeros; presumably we can reuse that functionality here. I'll have a look tomorrow, thanks for bringing this up!
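For reference, a minimal sketch of that kind of explicit initialization (this is only the idea, not the actual TensorOperations code):

```julia
# For non-isbits element types such as BigFloat, `Array{T}(undef, ...)` and `similar`
# leave every entry undefined, and reading an undefined entry throws an UndefRefError.
# Filling the freshly allocated output with zeros before a kernel touches it avoids that.
C = Array{BigFloat}(undef, 3, 3)
isbitstype(eltype(C)) || fill!(C, zero(eltype(C)))  # only needed for non-isbits types
```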
As a small side note, there will probably be other things that are not fully compatible with BigFloat entries either. I don't think we have any generic (non-LAPACK) fallbacks, so factorisations will probably not work, and that is less straightforward to fix.
I played around with some fixes (progress here), but it does not seem entirely straightforward. I cannot say I have enough understanding of the inner workings of Strided to completely fix the problem (@Jutho might know more?), and I am not entirely sure this way of fixing it is ideal, as it requires explicitly initializing all of the BigFloat arrays with zeros. Additionally, we are currently in the process of rewriting a large part of the codebase, so it is not that easy to get a fix out to you quickly...
I see, that is fine, there is no reason to hurry, thank you very much! The package works without problems with DoubleFloats.jl, which is a more suitable number type for me anyway. As for the decompositions, I added the following workaround to the MatrixAlgebra module:
```julia
using GenericLinearAlgebra  # provides generic SVD methods that work for BigFloat
using LinearAlgebra

# Overload placed inside TensorKit's MatrixAlgebra module, where SVD and SDD are in scope.
function svd!(A::StridedMatrix{<:Number}, alg::Union{SVD, SDD})
    # With GenericLinearAlgebra loaded, LinearAlgebra.svd! also handles BigFloat matrices.
    res = LinearAlgebra.svd!(A)
    return res.U, res.S, res.Vt
end
```
It seems to work, though I have not yet tested it very well. Could you please tell me if you see any immediate issues with this approach?
P.S. In case support for non-isbits types becomes important at some point: I noticed that the same problem persists for addition and, I guess, for any operation that uses `similar`.
Thanks for also looking into it. Indeed, that looks like a good solution (which might automatically get incorporated in the near future, as a similar thing is required for CUDA anyway, see https://github.com/Jutho/TensorKit.jl/tree/ld-cuda). I would guess a similar solution is necessary/exists for QR, LQ, etc., for which you could take inspiration from that branch.
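For what it is worth, a fallback for QR could follow the same pattern as the svd! workaround above. The sketch below is only illustrative: the function name, the dropped `alg` argument, and the return convention are assumptions, not TensorKit's actual MatrixAlgebra API.

```julia
using LinearAlgebra

# Hypothetical generic QR fallback; LinearAlgebra.qr! has a non-LAPACK path that
# works for BigFloat, so no extra package is needed for this one.
function qr_fallback!(A::StridedMatrix{<:Number})
    F = LinearAlgebra.qr!(A)
    return Matrix(F.Q), F.R  # materialize Q as a dense matrix
end
```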
I am definitely interested in the extended-precision use case, and I have not tried GenericLinearAlgebra myself. If any more issues pop up, feel free to let me know; I would like to keep this issue open and revisit it once I get the CUDA support and the new version up and running.
Hi!
First of all, thank you for this wonderful package. It is a pleasure to use it.
I have noticed that the `@tensor` macro fails to perform a contraction when the tensors contain BigFloat entries. In my minimal example, the first contraction goes well and the second one throws an error.
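The original snippet is not reproduced here, but a contraction along these lines (tensor names and dimensions are purely illustrative) shows the kind of behaviour meant: the Float64 version runs fine, while the BigFloat version fails.

```julia
using TensorKit, TensorOperations

# Illustrative only, not the original minimal example from this issue.
A64 = TensorMap(rand, Float64, ℂ^2, ℂ^2)
B64 = TensorMap(rand, Float64, ℂ^2, ℂ^2)
@tensor C64[a, c] := A64[a, b] * B64[b, c]     # works

Abig = TensorMap(rand, BigFloat, ℂ^2, ℂ^2)
Bbig = TensorMap(rand, BigFloat, ℂ^2, ℂ^2)
@tensor Cbig[a, c] := Abig[a, b] * Bbig[b, c]  # throws with BigFloat entries
```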
I also checked whether the problem persists for plain arrays of BigFloat: that case works. I am using Julia 1.10.5.
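For reference, the plain-array check was of roughly this form (again illustrative, not the original snippet):

```julia
using TensorOperations

# The same contraction on ordinary Arrays of BigFloat goes through without errors.
A = rand(BigFloat, 2, 2)
B = rand(BigFloat, 2, 2)
@tensor C[a, c] := A[a, b] * B[b, c]
```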
Update: here is what causes the problem. The function `similar`, when applied to tensors with BigFloat entries, gives a tensor with undefined entries, and the resulting code fails with an analogous error message. It seems that `_mapreduce_kernel!` tries to read the elements of C for something.
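A minimal illustration of that behaviour on plain arrays (not the original snippet):

```julia
A = rand(BigFloat, 2, 2)
C = similar(A)       # BigFloat is not an isbits type, so the entries of C are #undef
isassigned(C, 1)     # false: nothing has been written to C yet
# C[1, 1] + 1        # reading an undefined entry would throw an UndefRefError
```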