Why?
PyTorch's ONNX export unfolds L2* normalization into multiple operators, as opposed to using ONNX's single LpNormalization operator.
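For reproduction, a minimal sketch of the export (the exact operator chain may vary with the PyTorch and opset versions; `l2norm.onnx` is just an illustrative file name):

```python
import torch
import torch.nn.functional as F
import onnx

class L2Norm(torch.nn.Module):
    def forward(self, x):
        # L2-normalize along the channel dimension.
        return F.normalize(x, p=2, dim=1)

torch.onnx.export(L2Norm(), torch.randn(1, 16), "l2norm.onnx", opset_version=13)

graph = onnx.load("l2norm.onnx").graph
print([node.op_type for node in graph.node])
# Typically prints an unfolded chain along the lines of
# ReduceL2 / Clip / Expand / Div (plus helper nodes),
# not a single 'LpNormalization' node.
```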
What?
I feel that folding multiple operators into a single one is within the scope of this package, no?
Relevance
Some hardware partners, e.g., Qualcomm (QNN/SNPE), support LpNormalization natively. In such cases, model performance might suffer because of an arbitrary lowering choice made by PyTorch.
Kindly provide an example of how to accomplish such a task.
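For illustration, here is the kind of rewrite I have in mind, as a rough sketch only: it assumes the graph is exactly the unfolded chain from the export above (a ReduceL2 feeding a final Div, with `axes` stored as an attribute as in opset 13), and it is not a general fusion pass. File names are hypothetical.

```python
import onnx
from onnx import helper

model = onnx.load("l2norm.onnx")
graph = model.graph

# Naive pattern match: assumes the whole graph is the unfolded
# ReduceL2 -> ... -> Div chain produced by the export above.
reduce_l2 = next(n for n in graph.node if n.op_type == "ReduceL2")
div = next(n for n in graph.node if n.op_type == "Div")

# In opset 13, ReduceL2 still carries 'axes' as an attribute.
axes = next((a.ints for a in reduce_l2.attribute if a.name == "axes"), [1])

# Replace the whole chain with a single LpNormalization node.
fused = helper.make_node(
    "LpNormalization",
    inputs=[reduce_l2.input[0]],
    outputs=list(div.output),
    axis=int(axes[0]),
    p=2,
)
del graph.node[:]  # drop the unfolded chain (single-pattern graph only)
graph.node.append(fused)

onnx.checker.check_model(model)
onnx.save(model, "l2norm_fused.onnx")
```

Obviously a real pass would need proper pattern matching against the different chains PyTorch may emit, which is exactly why built-in support in this package would help.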
*Possibly L1 and other Lp norms as well.