Open ryanmrichard opened 3 years ago
The problem is that the expression layer already supports some ToT products, namely those where the inner or outer index product is a pure contraction (free and contracted indices only) or a pure Hadamard (fused indices only). The new `einsum` in #285 uses that in one of its paths, as far as I recall (@asadchev please correct me if I'm wrong). So I don't see how to "disable" `operator*` for those cases ... perhaps I'm not following exactly what you are trying to do?
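For concreteness, here is a hedged sketch of the two shapes that should already go through `operator*` as described above, assuming the usual "outer;inner" ToT annotation convention and a nested `TA::Tensor<TA::Tensor<double>>` tile type:

```cpp
// Hedged sketch, not from the issue: ToT arrays with nested tiles,
// assumed to be constructed and initialized elsewhere.
using ToTArray = TA::DistArray<TA::Tensor<TA::Tensor<double>>>;
ToTArray a, b, c;

// Outer index product is a pure contraction ("k" contracted, "i","j" free),
// inner index product is a pure Hadamard ("m","n" fused):
c("i,j;m,n") = a("i,k;m,n") * b("k,j;m,n");

// Both index products are pure Hadamard (every index fused):
c("i,j;m,n") = a("i,j;m,n") * b("i,j;m,n");
```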
@evaleev I think ToT times ToT can go through `operator*`, but non-ToT times ToT can't. Regardless, I forgot that `operator*` already worked for some cases so my redirection solution won't work.
Basically I was hoping to write a generic orbital transform function which superficially looks like:
```cpp
template<typename ResultType, typename TransformType, typename TensorType>
auto transform(TransformType&& C, TensorType&& t) {
    // helper which works out what the annotations are
    auto [result_annotation, lhs_annotation, rhs_annotation] = make_annotations();
    ResultType result;
    result(result_annotation) = C(lhs_annotation) * t(rhs_annotation);
    return result;
}
```
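so that a call site might look roughly like this (types and initialization here are purely hypothetical, just to show the intended interface):

```cpp
// Hypothetical call site; arrays are assumed to be initialized elsewhere.
using ToTArray = TA::DistArray<TA::Tensor<TA::Tensor<double>>>;
TA::TArrayD C;   // non-ToT transformation coefficients
ToTArray t;      // ToT tensor to be transformed
auto mo = transform<ToTArray>(C, t);
```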
I can write it in terms of `einsum`, but I assumed that wouldn't be as efficient for non-ToTs.
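(For comparison, a hedged sketch of the `einsum`-based version, assuming an `einsum(lhs_expr, rhs_expr, result_annotation)` overload and the same hypothetical `make_annotations()` helper as above:)

```cpp
template<typename TransformType, typename TensorType>
auto transform_via_einsum(TransformType&& C, TensorType&& t) {
    // same hypothetical helper as above
    auto [result_annotation, lhs_annotation, rhs_annotation] = make_annotations();
    // einsum evaluates the product directly instead of going through operator*
    return TA::einsum(C(lhs_annotation), t(rhs_annotation), result_annotation);
}
```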
The new `einsum` defers to `operator*` whenever it can, so only mixed Hadamard-contraction products go through it. You should be able to use it, no problem.
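For example (a hedged illustration, assuming the `einsum(lhs_expr, rhs_expr, result_annotation)` overload): here "i" is a Hadamard index while "k" is contracted, so the outer index product mixes the two and has to go through `einsum`:

```cpp
// "i" appears in both operands and the result (Hadamard), "k" only in the
// operands (contracted): a mixed Hadamard-contraction outer product.
auto c = TA::einsum(a("i,k;m,n"), b("i,k;m,n"), "i;m,n");
```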
Presently, multiplying ToTs requires calling `einsum`. Unfortunately that makes it hard to write generic functions. I originally added `einsum` because I couldn't figure out how to get ToT multiplication to slide into the existing expression layer.

I haven't prototyped it, but maybe you have a `ToTMultiplication` class which is returned when either tensor is a ToT (you can deduce whether either side of `operator*` is a ToT based on the tile types). It could then call `einsum` when it is assigned to a `TsrExpr`. The reason I'm thinking of a new class is that the left and right sides of the expression generating the `ToTMultiplication` instance would have to just be annotated tensors, and you would have to immediately assign it to a `TsrExpr` (so it doesn't fully participate in the expression layer).

This could be somewhat related to #224 in that with general tensor contractions you may also be restricting non-ToT multiplications in a similar manner.

If the above plan sounds reasonable, I could try taking a stab at this.
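A hedged sketch of what that redirection might look like (aside from `ToTMultiplication`, every name here is hypothetical, the `einsum(lhs_expr, rhs_expr, result_annotation)` overload is assumed, and the real dispatch would have to live in `TsrExpr`'s assignment operator inside TiledArray):

```cpp
#include <string>
#include <type_traits>
#include <utility>

#include <tiledarray.h>

// Proxy that only remembers two annotated operands; it never nests into a
// larger expression tree, matching the "doesn't fully participate in the
// expression layer" restriction described above.
template <typename LeftExpr, typename RightExpr>
class ToTMultiplication {
 public:
  ToTMultiplication(LeftExpr left, RightExpr right)
      : left_(std::move(left)), right_(std::move(right)) {}

  // Defer to einsum once the desired result annotation is known (in the real
  // library this would be triggered by assignment to a TsrExpr).
  auto evaluate(const std::string& result_annotation) const {
    return TA::einsum(left_, right_, result_annotation);
  }

 private:
  LeftExpr left_;
  RightExpr right_;
};

// Stand-in for the restricted operator*: both sides must already be annotated
// tensors, so the proxy is built directly from them.
template <typename LeftExpr, typename RightExpr>
auto tot_multiply(LeftExpr&& lhs, RightExpr&& rhs) {
  return ToTMultiplication<std::decay_t<LeftExpr>, std::decay_t<RightExpr>>(
      std::forward<LeftExpr>(lhs), std::forward<RightExpr>(rhs));
}
```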