Jutho / TensorOperations.jl

Julia package for tensor contractions and related operations
https://jutho.github.io/TensorOperations.jl/stable/

fix instantiation bug #155

Closed · Jutho closed this 6 months ago

Jutho commented 7 months ago

Fixes bugs with scalar factors appearing in the denominator, e.g.

```julia
c = (@tensor x[a, b] * y[b, a] / 2)
```
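A quick way to check the fix locally (random 3×3 matrices assumed here; the full contraction `x[a,b] * y[b,a]` equals `tr(x * y)`):

```julia
using TensorOperations, LinearAlgebra, Test

x = rand(3, 3)
y = rand(3, 3)

# With this fix, the scalar denominator is applied correctly:
c = @tensor x[a, b] * y[b, a] / 2
@test c ≈ tr(x * y) / 2
```
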
Jutho commented 6 months ago

I am happy with the current state of this, but now it is breaking the DynamicPolynomials tests again. I think the support for this is very finicky because of the decision that DynamicPolynomials types are not Number subtypes, and there are other examples where the current master or the latest release would lead to similar errors.

While this error could be solved by dropping all ::Number restrictions on α and β, I am afraid that this would lead to other issues, so I am considering just disabling these tests and dropping support for DynamicPolynomials. What do you think @lkdvos ?
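For reference, a minimal illustration of why the ::Number annotations clash with polynomial scalars (just the subtype check, nothing TensorOperations-specific):

```julia
using DynamicPolynomials

@polyvar p             # a polynomial variable
p isa Number           # false: polynomial types are not Number subtypes, so any
                       # method with a `::Number` restriction on α or β rejects them
```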

codecov[bot] commented 6 months ago

Codecov Report

Attention: 18 lines in your changes are missing coverage. Please review.

Comparison is base (e92d25c) 81.57% compared to head (f4431f1) 81.78%.

| Files | Patch % | Lines |
| --- | --- | --- |
| src/indexnotation/instantiators.jl | 74.00% | 13 Missing :warning: |
| src/indexnotation/analyzers.jl | 73.68% | 5 Missing :warning: |
Additional details and impacted files

```diff
@@            Coverage Diff             @@
##           master     #155      +/-   ##
==========================================
+ Coverage   81.57%   81.78%   +0.20%
==========================================
  Files          25       25
  Lines        2133     2146      +13
==========================================
+ Hits         1740     1755      +15
+ Misses        393      391       -2
```

:umbrella: View full report in Codecov by Sentry.

lkdvos commented 6 months ago

I think I'm definitely in favour of keeping alpha and beta as subtypes of Number. This probably means that, for now, we should drop support for the polynomials, as it would be rather hard to keep it without breaking changes...

I can't seem to convince myself that we really need to promote the type of alpha and beta, though. It seems like something that could be handled at the level of the functions rather than the macro. In that case we would, at the very least, still support polynomials whose scalar factors are Number subtypes. (I am not entirely sure how breaking this would be, though.)

Am I right in thinking the reason we promote alpha is to correctly deduce the output number type of the newly allocated tensors? I think this could also be achieved by calling scalartype on the tensors at the right time? Basically, while we should promote TC, I think we should not promote alpha. Does this solve that problem?
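Roughly what I have in mind, as a toy sketch with plain arrays and `eltype` standing in for `scalartype` (not the actual instantiator code):

```julia
# Deduce the output element type from the tensors plus the scalar factor at
# allocation time, without ever promoting α itself:
A = rand(Int, 2, 2)
B = rand(ComplexF64, 2, 2)
α = 1 // 3                                            # stays a Rational{Int}

TC = promote_type(eltype(A), eltype(B), typeof(α))    # promoted output type
C  = similar(A, TC)                                   # allocate the output with TC
```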

Finally, the functions that convert index tuples to labels are used in TBLIS, and might be useful in other backends that choose characters rather than integers as labels. I'm fine with removing them here, but I just wanted to mention that.

Jutho commented 6 months ago

> Am I right in thinking the reason we promote alpha is to correctly deduce the output number type of the newly allocated tensors? I think this could also be achieved by calling scalartype on the tensors at the right time? Basically, while we should promote TC, I think we should not promote alpha. Does this solve that problem?

Yes, this is how it is actually done for instantiate_generaltensor and instantiate_contraction. The problem is with instantiate_linearcombination: it determines TC based on the whole expression (all terms of the linear combination), but it then creates the tensor by calling instantiate on the first term only. Currently the instantiate methods don't have an argument to pass along TC (since the other instantiators determine TC at the point of creation). So for instantiate_linearcombination I currently pack TC in by replacing alpha with alpha * one(TC). But maybe that is a hacky workaround anyway.
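To spell out what that workaround amounts to, a toy illustration with plain arrays and `eltype` (not the actual instantiator code):

```julia
# TC is determined from all terms of the linear combination, but only the first
# term is instantiated to allocate the output; multiplying α by one(TC) makes
# that first allocation already carry the promoted element type.
x = rand(Int, 2, 2)                                   # first term: eltype Int
y = rand(2, 2)                                        # second term: eltype Float64
α = 2

TC = promote_type(eltype(x), eltype(y), typeof(α))    # Float64
α′ = α * one(TC)                                      # α′ is now a Float64
z  = α′ .* x                                          # first-term output has eltype TC
@assert eltype(z) == TC
```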

Regarding the label functions, that is a good remark. I just tried to pimp the coverage numbers and noticed these were not used here, not even by the CUDA extension. But I didn't check TensorOperationsTBLIS. Maybe we should also add the TensorOperationsTBLIS tests to TensorOperations itself.

lkdvos commented 6 months ago

I might try to just add some tests for those functions even without TBLIS; there are some simple unit tests, and they should also be the "inverse operation" of the functions that convert labels to indices, which we can test for.

I think it might be worth considering changing the implementation of the instantiation to allow for passing TC without changing alpha... It seems a bit cleaner and more robust...

Jutho commented 6 months ago

Ok, DynamicPolynomials support is restored with a different implementation, as discussed. The label functions are also back. I will merge if all lights turn green and there are no other comments. I will also tag a new version right away so that the bugs are eliminated and TensorKit can be updated.