Closed ho-oto closed 4 years ago
I don't quite see the benefit, as this makes the syntax considerably longer. It is simply a fact that a macro can be used to define a domain-specific language, in which rules may hold other than those of the parent language. So I am sure this will not be the only package implementing a macro whose body yields warnings from LanguageServer.jl despite being correct and valid code. Note that you can also use integers to denote the contraction, and in fact, using NCON style, you don't need to specify the indices on the left-hand side.
So instead of
@tensor D[a,b,c] = A[a,e,f,c,f,g]*B[g,b,e] + α*C[c,a,b]
you can also use
@tensor D[:] = A[-1,1,3,-3,3,2]*B[2,-2,1] + α*C[-3,-1,-2]
if you are bothered by the warnings. I recommend reading the manual to learn about NCON-style indexing.
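To make the correspondence concrete, here is a minimal, untested sketch comparing the two formulations above (the dimensions, α, and the random test tensors are assumptions for illustration; in NCON style, negative integers label the open output indices in the order -1, -2, -3, and positive integers label contracted pairs):

```julia
using TensorOperations

α = 0.5
A = randn(2, 2, 2, 2, 2, 2)   # indices a, e, f, c, f, g  (f is traced within A)
B = randn(2, 2, 2)            # indices g, b, e
C = randn(2, 2, 2)            # indices c, a, b

# Letter-index version: the index names trigger "Missing Reference"
# warnings from LanguageServer.jl, although the code is valid.
@tensor D1[a, b, c] := A[a, e, f, c, f, g] * B[g, b, e] + α * C[c, a, b]

# NCON-style version: a -> -1, b -> -2, c -> -3 (open), e -> 1, g -> 2,
# f -> 3 (contracted). Integer indices produce no linter warnings.
@tensor D2[:] := A[-1, 1, 3, -3, 3, 2] * B[2, -2, 1] + α * C[-3, -1, -2]

D1 ≈ D2  # both formulations describe the same contraction
```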
The macro syntaxes of
@tensor
and
@tensoropt
are very intuitive for humans, but not friendly to LanguageServer.jl. For example, VSCode reports many warnings for the sample program in the
README.md
because LanguageServer.jl interprets
E
and
[a-e]
as variables and reports many "Missing Reference" errors. In my opinion, the simplest way to avoid the warnings for
[a-e]
is to use Symbol
or String
values as indices. To avoid the error for E
, we would have to introduce some new syntax. As a workaround, I currently define a wrapper macro with the following syntax:

Do you have any plans to support this kind of syntax? Or is there already a smart workaround?
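The issue author's actual wrapper macro is not shown in this thread. Purely as an illustration of the idea, a hypothetical wrapper (here called @stensor, depending on MacroTools.jl — both names are assumptions, not part of TensorOperations.jl) could accept Symbol indices, which the linter never treats as undefined variables, and rewrite them back to plain names before delegating to @tensor:

```julia
using TensorOperations, MacroTools

# Hypothetical sketch: rewrite Symbol indices like A[:a, :b] into the
# plain-name form A[a, b] that @tensor expects, then delegate to @tensor.
macro stensor(ex)
    # :a inside an indexing expression parses as a QuoteNode; unwrap it
    ex2 = MacroTools.postwalk(x -> x isa QuoteNode ? x.value : x, ex)
    return esc(:(TensorOperations.@tensor $ex2))
end

# Usage: only D, A, B, and α are real variables, so LanguageServer.jl
# has nothing to flag in the index positions.
# @stensor D[:a, :b, :c] := A[:a, :e, :f, :c, :f, :g] * B[:g, :b, :e]
```

This only addresses the index-name warnings; a tensor that appears solely on the right-hand side would still be reported unless it is an actual variable in scope.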