apache / tvm

Open deep learning compiler stack for cpu, gpu and specialized accelerators
https://tvm.apache.org/

[Relax][BlockBuilder] Use PrimValue to provide tir_vars #17087

Open Lunderberg opened 3 weeks ago

Lunderberg commented 3 weeks ago

Prior to this commit, if a TIR variable was required to compute the output of BlockBuilder.call_te but could not be inferred from the shape of any tensor argument, it was provided through the optional tir_vars argument of R.call_tir. In C++, this would then be accessed as an optional call->args[2].as<ShapeExprNode>().
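
As a rough sketch of that old convention (the names full_n, x, y, and n below are illustrative, not taken from this PR), a call_te whose output shape depends on a symbolic variable that none of its tensor arguments provide would previously be lowered to an R.call_tir carrying a trailing tir_vars shape:

```python
import tvm
from tvm import relax, te, tir

bb = relax.BlockBuilder()
n = tir.Var("n", "int64")
# `n` is bound by the shape of `y`, but `y` is never passed to the TE compute,
# so call_te cannot recover `n` from its tensor arguments.
x = relax.Var("x", relax.TensorStructInfo([1], "float32"))
y = relax.Var("y", relax.TensorStructInfo([n], "float32"))

def full_n(A):
    # The output shape depends on `n`, which does not appear in A's shape.
    return te.compute((n,), lambda i: A[0], name="out")

with bb.function("main", [x, y]):
    gv = bb.emit_te(full_n, x)
    bb.emit_func_output(gv)

# Before this commit, the emitted call looked roughly like
#   R.call_tir(full_n, (x,), out_sinfo=R.Tensor((n,), "float32"),
#              tir_vars=R.shape([n]))
# and C++ passes read the extra shape back as call->args[2].
```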

This extra argument can cause unexpected bugs. For example, the bug fixed in https://github.com/apache/tvm/pull/17086 was caused by RewriteDataflowReshape identifying the output buffer as prim_func->buffer_map.Get(prim_func->params.back()), which is only correct when tir_vars is empty. Rather than fixing these issues one at a time as they come up, it would be better to strengthen the general Relax guarantees by removing the tir_vars argument altogether.
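
For illustration only (the kernel below is hypothetical, not the one involved in #17086), a lowered PrimFunc that receives a forwarded tir_var ends up with that variable appended after the output buffer, so params.back() no longer names a buffer at all:

```python
from tvm.script import tir as T

@T.prim_func
def copy_with_var(A: T.Buffer((16,), "float32"),
                  Out: T.Buffer((16,), "float32"),
                  n: T.int64):  # forwarded tir_var, appended after the output buffer
    for i in range(16):
        with T.block("Out"):
            vi = T.axis.remap("S", [i])
            Out[vi] = A[vi]

# buffer_map.Get(params.back()) looks up `n`, which has no buffer_map entry,
# so a pass that assumes "last param == output buffer" silently goes wrong
# whenever tir_vars is non-empty.
```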

The use of an extra R.shape parameter to specify additional tir_vars predates the existence of relax::PrimValue and is no longer required. This commit updates BlockBuilder.call_te to pass symbolic values that cannot be inferred from tensor shapes as additional relax.PrimValue arguments, rather than through tir_vars.
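
A sketch of what the new convention is expected to look like (again with illustrative names), where the symbolic value is carried by an ordinary R.Prim argument instead of a trailing shape:

```python
import tvm
from tvm import relax, te, tir

bb = relax.BlockBuilder()
n = tir.Var("n", "int64")
x = relax.Var("x", relax.TensorStructInfo([1], "float32"))
# `n` is now bound by a PrimValue parameter rather than by a tensor shape.
v = relax.Var("v", relax.PrimStructInfo(value=n))

def full_n(A):
    return te.compute((n,), lambda i: A[0], name="out")

with bb.function("main", [x, v]):
    gv = bb.emit_te(full_n, x)
    bb.emit_func_output(gv)

# With this change, the emitted call is expected to look roughly like
#   R.call_tir(full_n, (x, R.prim_value(n)), out_sinfo=R.Tensor((n,), "float32"))
# with no separate tir_vars argument, so downstream passes no longer need to
# special-case call->args[2].
```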

masahi commented 3 weeks ago

cc @tqchen @Hzfengsy