chenyang78 opened 8 months ago
It looks like the problem is that `DTYPE_TO_CPP` maps `torch.complex64` to `complex64`, which is not a valid C++ type. I guess we could use `std::complex<float>` instead (a complex64 element is two 32-bit floats). However, I am wondering whether the whole `DTYPE_TO_CPP` mapping is really necessary. It seems to be used in ABI-compatible mode as an alternative to `DTYPE_TO_ATEN`, but the `at::`-namespaced types appear to be just aliases for types defined in libc10. As previously discussed, we can safely use c10 headers as long as we're careful not to link model.so against libc10, so maybe using `DTYPE_TO_ATEN` is safe after all?
cc @desertfire
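For context, a dtype-to-C++-type table of the kind discussed above could look roughly like the sketch below. This is a hypothetical illustration, not PyTorch's actual `DTYPE_TO_CPP` (the names `DTYPE_TO_CPP_SKETCH` and `cpp_type` are invented here); the point is that a bare `complex64` token is not a C++ type, while an explicit template spelling such as `std::complex<float>` is.

```python
# Hypothetical sketch of a dtype -> C++ type table. DTYPE_TO_CPP_SKETCH and
# cpp_type are illustrative names, not part of PyTorch's codegen.
DTYPE_TO_CPP_SKETCH = {
    "float32": "float",
    "float64": "double",
    # "complex64" by itself is not valid C++; spell out the template type.
    "complex64": "std::complex<float>",    # two 32-bit floats
    "complex128": "std::complex<double>",  # two 64-bit floats
}

def cpp_type(dtype_name: str) -> str:
    """Look up the C++ spelling for a dtype name, failing loudly if unknown."""
    try:
        return DTYPE_TO_CPP_SKETCH[dtype_name]
    except KeyError:
        raise ValueError(f"no C++ type registered for dtype {dtype_name!r}")

print(cpp_type("complex64"))  # std::complex<float>
```

Whether the emitted spelling should be `std::complex<float>` or the libc10 alias (per the `DTYPE_TO_ATEN` discussion above) is exactly the open question in this issue.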
Re-opening because #132810 only fixed it partially. The changes in #132347 can fix this, but they broke other tests internally.
🐛 Describe the bug
How to repro:
Comment out the line below:
https://github.com/pytorch/pytorch/blob/57a9a64e10b84fa8d932482ae9417aa3fc3fbf44/test/inductor/test_aot_inductor.py#L2293
and run the following command:
Versions
PyTorch version: 2.3.0a0+git57a9a6
Is debug build: True
CUDA used to build PyTorch: 12.0
ROCM used to build PyTorch: N/A
cc @ezyang @msaroufim @bdhirsh @anijain2305 @zou3519 @chauhang @desertfire