Jutho / TensorOperations.jl

Julia package for tensor contractions and related operations
https://jutho.github.io/TensorOperations.jl/stable/

Feature Request: Editor friendly macro syntax #84

Closed ho-oto closed 4 years ago

ho-oto commented 4 years ago

The macro syntax of @tensor and @tensoropt is very intuitive for humans, but not friendly to LanguageServer.jl.

For example, VSCode reports many warnings for the sample program in the README.md

using TensorOperations
α=randn()
A=randn(5,5,5,5,5,5)
B=randn(5,5,5)
C=randn(5,5,5)
D=zeros(5,5,5)
@tensor begin
    D[a,b,c] = A[a,e,f,c,f,g]*B[g,b,e] + α*C[c,a,b]
    E[a,b,c] := A[a,e,f,c,f,g]*B[g,b,e] + α*C[c,a,b]
end

as shown in this screenshot:

[Screenshot 2020-03-23 16:28: VSCode showing "Missing reference" warnings on the sample program]

LanguageServer.jl interprets E and the index labels (a, b, c, e, f, g) as variables and reports many "Missing reference" errors.

In my opinion, the simplest way to avoid the warnings about the index labels is to use Symbols or Strings as indices. Avoiding the error on E would require introducing some new syntax. For now, as a workaround, I define a wrapper macro with the following syntax:

@mytensor D[:a,:b,:c] = A[:a,:e,:f,:c,:f,:g]*B[:g,:b,:e] + α*C[:c,:a,:b]
E = @mytensor ["a","b","c"] ← A["a","e","f","c","f","g"]*B["g","b","e"] + α*C["c","a","b"]
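A minimal sketch of such a wrapper, for the Symbol-index case: the name `@symtensor` and the helper `unquote_indices` are hypothetical (not part of TensorOperations.jl). The macro walks the expression tree, replaces each Symbol literal (a `QuoteNode` in the AST) appearing as an index with the corresponding bare identifier, and hands the rewritten expression to `@tensor`:

```julia
using TensorOperations

# Recursively replace Symbol-literal indices (QuoteNodes) inside indexing
# expressions like A[:a, :b] with bare symbols, yielding A[a, b].
function unquote_indices(ex)
    if ex isa Expr
        if ex.head == :ref
            newargs = map(ex.args[2:end]) do i
                i isa QuoteNode ? i.value : unquote_indices(i)
            end
            return Expr(:ref, unquote_indices(ex.args[1]), newargs...)
        else
            return Expr(ex.head, map(unquote_indices, ex.args)...)
        end
    end
    return ex
end

# Hypothetical wrapper: rewrite the indices, then delegate to @tensor.
macro symtensor(ex)
    return esc(:(TensorOperations.@tensor $(unquote_indices(ex))))
end
```

With this sketch, `@symtensor D[:a,:b,:c] = A[:a,:e,:f,:c,:f,:g]*B[:g,:b,:e] + α*C[:c,:a,:b]` expands to the usual `@tensor` call, while LanguageServer.jl sees only Symbol literals rather than undefined variables.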

Do you have any plans to support this kind of syntax? Or is there already a smart workaround?

Jutho commented 4 years ago

I don't quite see the benefit, as this makes the syntax much longer. It is simply a fact that a macro can be used to define a domain-specific language, in which rules may hold other than those of the parent language. So I am sure this will not be the only package implementing a macro whose body yields warnings from LanguageServer.jl despite being correct and valid code. Note that you can also use integers to denote the contraction; in fact, using NCON style, you don't need to specify the indices on the left-hand side.

So instead of

@tensor D[a,b,c] = A[a,e,f,c,f,g]*B[g,b,e] + α*C[c,a,b]

you can also use

@tensor D[:] = A[-1,1,3,-3,3,2]*B[2,-2,1] + α*C[-3,-1,-2]

if you are bothered by the warnings. I recommend reading the manual to learn about NCON style indexing.
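For the `E[a,b,c] := ...` case from the original post, the NCON form can likewise allocate a new array. A sketch assuming the same arrays as the README example (negative labels are open indices, ordered -1, -2, ... in the output; positive labels are summed over; `:=` allocates the result):

```julia
using TensorOperations

α = randn()
A = randn(5, 5, 5, 5, 5, 5)
B = randn(5, 5, 5)
C = randn(5, 5, 5)

# NCON style: the [:] left-hand side means the output indices are determined
# by the negative labels; the repeated label 3 inside A is a partial trace.
@tensor E[:] := A[-1, 1, 3, -3, 3, 2] * B[2, -2, 1] + α * C[-3, -1, -2]
```

Here E has size (5, 5, 5) and equals the result of the named-index version `E[a,b,c] := A[a,e,f,c,f,g]*B[g,b,e] + α*C[c,a,b]`, without any bare index identifiers appearing in the source.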