microsoft / onnxscript

ONNX Script enables developers to naturally author ONNX functions and models using a subset of Python.
https://onnxscript.ai/
MIT License

Unused sequence in _aten_as_strided_onnx #1310

Open yuanyao-nv opened 5 months ago

yuanyao-nv commented 5 months ago

In the ONNX Script definition of _aten_as_strided_onnx() there is an unused loop-carried dependency called one_seq. It doesn't appear to be used as a return value of the function, and it gives rise to graphs with Sequence ops that cannot be optimized away with ONNX Script's current pattern-matching capability. Should it be removed?
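A hypothetical plain-Python analogue of the pattern being described (not the actual torch_lib code): one_seq is carried through every loop iteration, but only its length feeds the result, and its final value is dead after the loop, so the Sequence ops it requires remain in the exported Loop body.

```python
def as_strided_shape(rank):
    """Illustrative sketch: a loop-carried sequence whose only use is len()."""
    one_seq = []                     # loop-carried sequence
    shape = []
    for _ in range(rank):
        one_seq.append(1)            # mutated each iteration ...
        shape.append(len(one_seq))   # ... but only its length is read
    return shape                     # one_seq itself is never used afterwards
```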

justinchuby commented 5 months ago

Looks like https://github.com/microsoft/onnxscript/blob/e6ea34f2540680d1410039cf27693f490d1d9dfe/onnxscript/function_libs/torch_lib/ops/core.py#L817-L821 is only updated if the loop runs more than once? Ideally the whole function would be replaced by the runtime with a more efficient implementation, but it could also be flattened out and cleaned up. Any suggestions?

yuanyao-nv commented 5 months ago

Is my understanding correct that this loop only needs to keep track of the length of one_seq to update the shape variable? If so, can we simply use a scalar variable to track the length instead of using sequences?
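If that understanding holds, the rewrite could look like this minimal Python sketch (the name is hypothetical; the real code is the ONNX Loop body in core.py): carry a scalar counter through the loop instead of a sequence, so no SequenceInsert/SequenceLength ops are needed.

```python
def shape_via_counter(rank):
    """Sketch: replace the loop-carried sequence with a scalar length counter."""
    length = 0                  # scalar loop-carried value
    shape = []
    for _ in range(rank):
        length += 1             # increment replaces append + len()
        shape.append(length)
    return shape
```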

yuanyao-nv commented 5 months ago

A related issue: _aten_unfold_onnx() seems to have a similar construction, with Sequence ops appearing both inside and outside the loop body, except that there doesn't seem to be an easy way to rewrite the loop-carried dependency seq_result into a non-sequence variable. If we want to remove sequences from the source, I'm wondering whether in this case it's possible to statically determine the unfolded tensor size and fill in the corresponding elements inside the loop body, instead of using sequences to concatenate?
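One possible shape of that rewrite, sketched in plain Python over a 1-D input (names and structure are illustrative, not the actual _aten_unfold_onnx code): the number of windows follows statically from the input length, size, and step, so the output can be preallocated and filled by index rather than built by appending windows to a sequence.

```python
def unfold_with_sequence(x, size, step):
    """Sequence-style sketch: accumulate windows by appending in the loop."""
    windows = []
    i = 0
    while i + size <= len(x):
        windows.append(x[i:i + size])
        i += step
    return windows


def unfold_preallocated(x, size, step):
    """Prealloc sketch: output size is computed before the loop, then filled."""
    n = (len(x) - size) // step + 1          # number of windows, known up front
    out = [[0] * size for _ in range(n)]     # preallocated output, no sequences
    for w in range(n):
        for j in range(size):
            out[w][j] = x[w * step + j]      # write each element in place
    return out
```

Both produce the same windows; the second avoids any growing sequence in the loop body.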

justinchuby commented 5 months ago

I think so, but I need to verify. It would obviously be good if we could remove the usage of sequences here. @BowenBao do you have any comments?