Jutho / TensorOperations.jl

Julia package for tensor contractions and related operations
https://jutho.github.io/TensorOperations.jl/stable/

Specify output type #129

Closed: projekter closed this issue 1 year ago

projekter commented 1 year ago

This is probably loosely related to #77. It would be great if you could add a parameter to the @tensor macro with the := syntax that specifies the output type. I often work with contractions of tensors whose entries are polynomials - or rather, they are PolyVars or Terms - but the result of the contraction is, of course, a Polynomial. I understand that TensorOperations has no way of knowing that the output type has to change, which is perfectly fine, but if I could simply specify the output type directly, this would remedy the problem. At the moment, the only way to do this is to allocate and initialize the output manually and use the = syntax. That is not much work, but it is sometimes unnecessary clutter.

lkdvos commented 1 year ago

Hi @projekter, would you happen to have a small example of what you have in mind? We are currently working on a rewrite for the next version that would allow for more flexibility, and I could look into whether it is possible to include your use case.

projekter commented 1 year ago

Ok, so here's an example of what fails (admittedly very simple and no need for TensorOperations here):

using DynamicPolynomials, TensorOperations
@polyvar a[1:2,1:2] b[1:2,1:2];
@tensor c[i,k] := a[i,j]*b[j,k]

This leads to InexactError: convert(Term{true, Int64}, a[1,1]*b[1,1] + a[1,2]*b[2,1]). There is currently one way to circumvent the issue: preallocate the output with the correct element type.

c = Matrix{polynomialtype(eltype(a))}(undef, 2, 2)
@tensor c[i,k] = a[i,j]*b[j,k]
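To illustrate where the error comes from (a minimal sketch, assuming DynamicPolynomials' usual promotion rules; the variable names x and y are only for illustration): multiplying variables yields a narrow term-like type, while summing such products promotes to a Polynomial, which cannot be converted back to the narrow type.

```julia
using DynamicPolynomials

@polyvar x y
t = 2x * y        # a narrow term-like value
p = x * y + x     # the sum promotes to a Polynomial
# Converting a genuine sum back to the narrow term type fails,
# which mirrors the InexactError raised inside the contraction:
# convert(typeof(t), p)  # would throw

# The preallocation workaround above succeeds because the destination
# eltype is already the wide Polynomial type.
```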

What I propose is for the := syntax to allow for a specification of the output type:

@tensor polynomialtype(eltype(a)) c[i,k] := a[i,j]*b[j,k]

Another alternative that just came to my mind because of this simple example is to use the same interface as matrix multiplication, since a * b actually works. This means that := could try to determine the output type via Base.promote_op(LinearAlgebra.matprod, eltype(a), eltype(b)) - of course, this would then have to be iterated over all contracted tensors.
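The promote_op approach can be sketched directly on the failing example (a sketch, not TensorOperations' actual implementation; LinearAlgebra.matprod is an internal, non-exported helper that LinearAlgebra itself uses for this kind of eltype inference):

```julia
using LinearAlgebra, DynamicPolynomials, TensorOperations

@polyvar a[1:2, 1:2] b[1:2, 1:2]
# Infer the element type that a product-plus-sum of these entries produces;
# this yields a Polynomial type rather than a narrow Term type:
T = Base.promote_op(LinearAlgebra.matprod, eltype(a), eltype(b))
# Preallocating with the inferred type makes the = syntax work:
c = Matrix{T}(undef, 2, 2)
@tensor c[i, k] = a[i, j] * b[j, k]
```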

lkdvos commented 1 year ago

Thanks for the quick reply. I'll have a look at what's feasible. I like the idea of being able to specify the output type via :=, which might even work without a major overhaul of the macro. I'll keep you posted!

lkdvos commented 1 year ago

As an update: it seems that the functionality of promote_op can indeed solve most of the problems. I have a rudimentary version working, see this branch. At least locally, the following runs for me:

using DynamicPolynomials
using TensorOperations
# Should probably make Polynomials compatible with the full VectorInterface
using VectorInterface: VectorInterface
function VectorInterface.scalartype(::Type{T}) where {T<:Union{<:PolyVar,<:Monomial,
                                                               <:Polynomial,<:Term}}
    return T
end

@polyvar a[1:2, 1:2] b[1:2, 1:2]
operationbackend!(StridedBackend(false))
@tensor c[i, k] := a[i, j] * b[j, k]
@tensor c[i, k] := a[i, j] * b[j, k] + 2 * a[i, k]

Note that the linked version is still very much a WIP, but I'd be very happy to include some of your cases in the tests, so that they are supported when the release is ready.
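A regression test for this use case might look roughly like the following (a hypothetical sketch of a test that could be contributed once the branch lands; it assumes the := syntax then infers the Polynomial output type automatically):

```julia
using Test, DynamicPolynomials, TensorOperations

@polyvar a[1:2, 1:2] b[1:2, 1:2]
# With the promote_op-based inference, := should widen the eltype on its own:
@tensor c[i, k] := a[i, j] * b[j, k]
# Compare against an explicitly computed matrix product of the polynomial entries:
@test c == [a[i, 1] * b[1, k] + a[i, 2] * b[2, k] for i in 1:2, k in 1:2]
```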