onnx / optimizer — Actively maintained ONNX Optimizer (Apache License 2.0)

Fold LpNormalization #143

Open escorciav opened 1 year ago

escorciav commented 1 year ago

Why? PyTorch's ONNX exporter unfolds L2* normalization into a sequence of primitive operators, instead of emitting ONNX's single LpNormalization node.
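
For context, here is a minimal repro (module name, shapes, and file name are arbitrary). On recent exporters, inspecting the resulting graph shows a chain of primitive ops, e.g., ReduceL2/Clip/Expand/Div, rather than one LpNormalization node:

```python
# Minimal repro; module and file names are arbitrary.
import torch
import torch.nn.functional as F

class L2Norm(torch.nn.Module):
    def forward(self, x):
        # L2-normalize along the channel dimension
        return F.normalize(x, p=2, dim=1)

torch.onnx.export(L2Norm(), torch.randn(1, 8, 4, 4), "l2norm.onnx",
                  opset_version=13)
# Loading l2norm.onnx and printing the graph shows several primitive
# operators instead of a single LpNormalization node.
```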

What? Folding such a multi-operator pattern into a single node seems within the scope of this package, no?

Relevance Some hardware backends, e.g., Qualcomm QNN/SNPE, support LpNormalization natively. In those cases, model performance might suffer because of PyTorch's arbitrary choice of decomposition.

Could you kindly provide an example of how to accomplish such a fold? A rough sketch of what I have in mind is below.

*Possibly also L1 and other Lp norms.
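
Here is a minimal sketch of the kind of fold I mean, using only the onnx Python helper API. The ReduceL2 → Clip → Expand → Div pattern is an assumption about what the exporter emits (it varies across PyTorch and opset versions), and the file names are hypothetical:

```python
# Sketch of a manual fold, assuming the exporter decomposed
#   F.normalize(x, p=2, dim=1)
# into ReduceL2 -> Clip -> Expand -> Div. This pattern and the file
# names are assumptions; other PyTorch/opset versions differ.
import onnx
from onnx import helper

model = onnx.load("l2norm.onnx")
graph = model.graph
# Map each tensor name to the node that produces it.
by_output = {out: n for n in graph.node for out in n.output}

for div in [n for n in graph.node if n.op_type == "Div"]:
    x, denom = div.input
    expand = by_output.get(denom)
    if expand is None or expand.op_type != "Expand":
        continue
    clip = by_output.get(expand.input[0])
    if clip is None or clip.op_type != "Clip":
        continue
    reduce_l2 = by_output.get(clip.input[0])
    if (reduce_l2 is None or reduce_l2.op_type != "ReduceL2"
            or reduce_l2.input[0] != x):
        continue
    # Read the reduction axis (an attribute up to opset 17).
    axes = next((a.ints for a in reduce_l2.attribute if a.name == "axes"),
                None)
    if axes is None or len(axes) != 1:
        continue
    # Replace the four-node subgraph with a single LpNormalization (p=2).
    fused = helper.make_node("LpNormalization", [x], [div.output[0]],
                             axis=int(axes[0]), p=2)
    for n in (reduce_l2, clip, expand, div):
        graph.node.remove(n)
    # Appending preserves topological order only because Div is the last
    # node in this toy graph; a general pass must insert at Div's position.
    graph.node.append(fused)

onnx.save(model, "l2norm_fused.onnx")
```

If something like this fits the project, it could presumably be implemented as a proper fusion pass alongside the existing ones, with the pattern matching generalized across exporter versions.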