naoyam opened this issue 2 years ago
Just some thoughts:
```cpp
// This schedule passes
TransformPropagator::from(tv2);
tv3->split(-1, 5);
tv1->computeAt(tv3, -1);
```
`view` also changes the dimensions of the tensor, so it has an effect even if its domain transformations are ignored.
- We need to propagate the view transformations to the dependent tensors before scheduling. Alternative: We can schedule this fusion if we select the view tensor as the reference tensor in the pointwise scheduler.
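As a toy illustration of both points (plain Python over extent lists, not the nvFuser API; `reshape` and `split` here are hypothetical stand-ins for `view`'s merge/split and `TensorView::split`):

```python
import math

def reshape(shape, new_shape):
    # View-style reshape: the element count is preserved, but the
    # dimensionality changes, so it affects consumers regardless of how
    # it decomposes into split/merge domain transformations.
    assert math.prod(shape) == math.prod(new_shape)
    return list(new_shape)

def split(shape, axis, factor):
    # TensorView::split-style transform: axis -> (outer, factor).
    axis %= len(shape)
    outer = -(-shape[axis] // factor)  # ceil-div
    return shape[:axis] + [outer, factor] + shape[axis + 1:]

# Ignoring the view: splitting the 2-D producer shape and the 1-D viewed
# shape gives mismatched leaf domains.
tv1 = [4, 6]
tv2 = reshape(tv1, [24])
print(split(tv1, -1, 5), split(tv2, -1, 5))  # [4, 2, 5] vs. [5, 5]

# Propagating the view first makes the subsequent split consistent.
tv1_viewed = reshape(tv1, [24])
print(split(tv1_viewed, -1, 5) == split(tv2, -1, 5))  # True
```

Selecting the view tensor as the reference has the same effect: every transform replayed from it already includes the reshape.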
> ```cpp
> // This schedule passes
> TransformPropagator::from(tv2);
> tv3->split(-1, 5);
> tv1->computeAt(tv3, -1);
> ```
Yes, that would work, but my question is what should be done if the view tensor is not selected as the reference. Or, what about if there are multiple view tensors?
There's some inconsistency with this fusion: `tv1` is an input to both `view` and `add`. Before the computeAt, the fusion looks like:

With `tv1->computeAt(tv3, -1)`, `tv2` is transformed like `tv3`, effectively cancelling the `view` transformation.

I'm not really sure if this makes sense, as the effect of `view` disappears. Maybe it's fine as long as the shapes of the input and output tensors are not incorrectly changed. However, `T2` is in an inconsistent state, as it keeps its "old" rfactor domain.

Note that the fusion fails when lowered due to the inconsistency.
So, my questions are: