apache / tvm

Open deep learning compiler stack for cpu, gpu and specialized accelerators
https://tvm.apache.org/
Apache License 2.0

[Bug][Relax] Inconsistent IRs produced when loading IRs with the R.nn.pad operator #17483

Open jikechao opened 1 month ago

jikechao commented 1 month ago

Actual behavior

error: pad() got multiple values for argument 'pad_width'
 --> <str>:8:55
   |  
 8 |          gv: R.Tensor((2, 130, 30), dtype="float32") = R.nn.pad(x, R.const(0, "int32"), pad_width=[0, 0, 1, 1, 1, 1], pad_mode="constant")
   |                                                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Steps to reproduce

import tvm
from tvm import relax
from tvm.script import relax as R

@tvm.script.ir_module
class Module:
    @R.function
    def main(x: R.Tensor((2, 128, 28), "float32")) -> R.Tensor((2, 130, 30), "float32"):
        gv: R.Tensor((2, 130, 30), "float32") = R.nn.pad(x, (0, 0, 1, 1, 1, 1))
        return gv

m = Module
m.show()
m_str = m.script()
m2 = tvm.script.from_source(m_str)
m2.show()  # crash

@junrushao @tqchen @Lunderberg

yongwww commented 1 month ago

Looks like nn.pad is not roundtrip-able with TVMScript: the ordering of the parameters in the parser is different from that in the printer.
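The failure mode can be reproduced in plain Python without TVM. The signature below is illustrative, not TVM's actual parser-side definition: when the printer emits one argument positionally in a slot the parser assigns to `pad_width`, and also emits `pad_width` as a keyword, Python raises exactly the error shown above.

```python
# Hypothetical parser-side signature (for illustration only): pad_width is
# the second positional parameter.
def pad(data, pad_width, pad_value=0, pad_mode="constant"):
    return (data, pad_width, pad_value, pad_mode)

# The printed script passes the pad value positionally (it lands in the
# pad_width slot) AND pad_width as a keyword, so the two collide:
try:
    pad("x", 0, pad_width=[0, 0, 1, 1, 1, 1], pad_mode="constant")
except TypeError as e:
    print(e)  # pad() got multiple values for argument 'pad_width'
```

This is why the re-parsed script crashes even though the printed text "looks" valid: the printer and parser disagree on which parameter occupies the second positional slot.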

jikechao commented 1 month ago

@yongwww Agreed! Should we submit a PR to make nn.pad roundtrip-able?

BTW, R.nn.attention has a similar issue: https://github.com/apache/tvm/issues/17486

yongwww commented 1 month ago

@jikechao I'd appreciate it if you could submit a PR to fix this. No worries if you're busy, please let me know and I'll take care of it.

jikechao commented 1 month ago

@yongwww I am not very familiar with the relevant source code; it would be better to assign this to someone else. Thanks!