gorkemalgan opened 6 months ago
To temporarily unblock, try passing `exir.EdgeCompileConfig(_check_ir_validity=False)` to `to_edge()` while we work on addressing it.
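Something along these lines should work as a sketch (the module and shapes here are placeholders, not from the original report):

```python
# Minimal sketch of the suggested workaround; model and shapes are placeholders.
import torch
from torch.export import export
from executorch import exir

model = torch.nn.Linear(4, 4).eval()
example_args = (torch.randn(1, 4),)

ep = export(model, example_args)
# Disable IR validity checks so unsupported dtypes don't fail to_edge outright.
edge = exir.to_edge(ep, compile_config=exir.EdgeCompileConfig(_check_ir_validity=False))
```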
cc: @larryliu0820
Thanks for the suggestion, but now it throws the following error:

```
Exception has occurred: SpecViolationError
These operators are taking Tensor inputs with mismatched dtypes: defaultdict(<class 'dict'>, {
```
From which step did you get this error?
I get it in the last step:

```python
import torch
from torch._export import capture_pre_autograd_graph
from torch.export import export, ExportedProgram
from executorch.exir import to_edge, EdgeCompileConfig, EdgeProgramManager

example_args = (torch.randn(1, 4, 1000, 1504),)
pre_autograd_aten_dialect = capture_pre_autograd_graph(model, example_args)
aten_dialect: ExportedProgram = export(pre_autograd_aten_dialect, example_args)
edge_program: EdgeProgramManager = to_edge(aten_dialect, compile_config=EdgeCompileConfig(_check_ir_validity=False))  # was: to_edge(aten_dialect)
```
I think this is a valid error. It is saying the tensor dtype `torch.complex64` is not accepted by the edge dialect (and hence not accepted by ExecuTorch). We do not support this dtype yet and are working on getting it supported.
`torch.fft.rfft` returns a `torch.complex64` tensor, so as I understand it there is no way to run it with ExecuTorch for now? Correct me if I am wrong, but it seems I will need to wait until the `torch.complex64` type is supported by ExecuTorch.
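For reference, a quick check confirming that `torch.fft.rfft` produces a complex tensor from real input:

```python
import torch

x = torch.randn(8)          # real float32 input
y = torch.fft.rfft(x)
print(y.dtype)              # torch.complex64
```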
Yep, I don't think we have complex number support in the ExecuTorch runtime right now.
@guangy10 if we have a triaging and follow-up meeting, I would like to follow up on this issue specifically.
@gorkemalgan note that there are a few issues here: 1) the complex dtype is not supported; 2) implementations of the *fft ops are not available.
I think we already have the complex data type issue filed separately here: https://github.com/pytorch/executorch/issues/886
@gorkemalgan Closing because it's a duplicate of https://github.com/pytorch/executorch/issues/886. If this is a separate issue, please reopen it.
This issue is not a duplicate of #886. In addition to the complex data type support requested in #886, this issue also requires an ExecuTorch implementation of the `torch.fft` operators.
I believe `torch.fft` is not part of core ATen. If it is not, it will be decomposed into core ATen ops. Maybe @larryliu0820 or @SS-JIA can confirm.
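One way to check this, assuming a PyTorch build that exposes operator tags, is to look for the `core` tag on the overload:

```python
import torch

# The FFT overload the export pipeline complains about:
op = torch.ops.aten._fft_r2c.default
print(torch.Tag.core in op.tags)                          # False: not core ATen
# Compare with an op that is part of the core ATen opset:
print(torch.Tag.core in torch.ops.aten.add.Tensor.tags)   # True
```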
With the FFT-related ops, we decided to defer the decision for now. However, seeing that there is a use case for them, we will need to revisit whether those should be core. To my knowledge you can't really decompose FFT.
However, even before considering adding FFT ops, we need to think about how we're going to support complex data types. Is there a plan in place for this currently?
Hi, I'm also eager to have support for FFT functions. In my case, I'm getting:
```
raise UnsupportedOperatorException(func)
torch._subclasses.fake_tensor.UnsupportedOperatorException: aten._fft_c2r.default

The above exception was the direct cause of the following exception:
...
raise UnsupportedOperatorException(func)
RuntimeError: Failed running call_function <built-in method istft of type object at 0x7f3704de8a40>(*(FakeTensor(..., size=(s0, 513, 50), dtype=torch.complex64), 1024, 256, 1024, FakeTensor(..., size=(1024,))), **{'center': True}):
aten._fft_c2r.default

During handling of the above exception, another exception occurred:
....
raise Unsupported(msg)
torch._dynamo.exc.Unsupported: unsupported operator: aten._fft_c2r.default (see https://docs.google.com/document/d/1GgvOe7C8_NVOMLOCwDaYV1mXXyHMXY7ExoewHqooxrs/edit#heading=h.64r4npvq0w0 for how to fix)

The above exception was the direct cause of the following exception:
....
raise UserError(UserErrorType.DYNAMIC_CONTROL_FLOW, str(e)) from e
torch._dynamo.exc.UserError: speculate_subgraph: while introspecting cond, we were unable to trace function `tf_istft` into a single graph. This means that Dynamo was unable to prove safety for this API and will fall back to eager-mode PyTorch, which could lead to a slowdown. Scroll up for the stack trace of the initial exception. The reason was: unsupported operator: aten._fft_c2r.default (see https://docs.google.com/document/d/1GgvOe7C8_NVOMLOCwDaYV1mXXyHMXY7ExoewHqooxrs/edit#heading=h.64r4npvq0w0 for how to fix)
```
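If it helps triage, here is a hypothetical minimal repro of the failure above; the shapes mirror the `FakeTensor` sizes in the trace and are otherwise placeholders:

```python
import torch
from torch.export import export

class ISTFT(torch.nn.Module):
    def forward(self, spec, window):
        return torch.istft(spec, n_fft=1024, hop_length=256,
                           win_length=1024, window=window, center=True)

spec = torch.randn(1, 513, 50, dtype=torch.complex64)  # (batch, n_fft//2 + 1, frames)
window = torch.hann_window(1024)
# Expected to fail while tracing with: unsupported operator aten._fft_c2r.default
export(ISTFT(), (spec, window))
```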
I have a custom model that uses `torch.fft.rfftn` and `torch.fft.irfftn`. I can successfully run `capture_pre_autograd_graph` and `export` (only with static sizes, though). But when I run `to_edge` I get the following error: `Operator torch._ops.aten._fft_r2c.default is not Aten Canonical.`
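A hypothetical minimal sketch that reproduces the same error (the model and shapes are placeholders, not my actual model):

```python
import torch
from torch.export import export
from executorch.exir import to_edge

class FFTModel(torch.nn.Module):
    def forward(self, x):
        return torch.fft.irfftn(torch.fft.rfftn(x))

ep = export(FFTModel().eval(), (torch.randn(1, 4, 64, 64),))
# Expected to raise: Operator torch._ops.aten._fft_r2c.default is not Aten Canonical
to_edge(ep)
```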