Open pwuertz opened 1 year ago
Same issue. Is there any plan to support rewriting to LayerNormalization (opset 17) when using TF's SavedModel format?
Same issue for me.
I have a tentative fix in https://github.com/onnx/tensorflow-onnx/pull/2250. Feel free to take it over.
When converting a `tensorflow.keras.layers.LayerNormalization` layer to ONNX, tf2onnx currently decomposes layer normalizations into rather complex subgraphs built from batch norms and other basic building blocks. Inference engines (like TensorRT in the following example) can hardly deduce the original layer norm op from such a graph and have to follow the instructions to the letter:

[screenshot: decomposed ONNX graph as rendered by TensorRT]

Since ONNX opset 17, however, `tensorflow.keras.layers.LayerNormalization` layers are directly convertible to the ONNX LayerNormalization operator. This produces a much simpler and more expressive ONNX graph and leaves the inference engine more room for optimizations (ONNX-to-TensorRT example again):

[screenshot: graph using the fused LayerNormalization operator]

The current version of tf2onnx doesn't seem to produce those new LayerNormalization operators yet, at least not with `tf2onnx.convert.from_keras(model, opset=17)`.
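For reference, both graph shapes compute the same function, so the rewrite is purely structural. Below is a minimal pure-Python sketch (no tf2onnx involved; the `eps` value and function names are illustrative, not taken from either library) contrasting the single fused op with the chain of primitive ops the current export emits:

```python
import math

EPS = 1e-5  # illustrative epsilon; the real value comes from the layer's config


def layer_norm_fused(x, gamma, beta, eps=EPS):
    """Roughly what a single ONNX LayerNormalization op (opset 17) computes
    over the last axis: normalize, then scale and shift."""
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    inv_std = 1.0 / math.sqrt(var + eps)
    return [(v - mean) * inv_std * g + b for v, g, b in zip(x, gamma, beta)]


def layer_norm_decomposed(x, gamma, beta, eps=EPS):
    """The same math spelled out as a chain of primitive ops, mimicking the
    kind of subgraph (ReduceMean, Sub, Mul, Add, Sqrt, Div, ...) that the
    current tf2onnx export produces instead of one fused node."""
    mean = sum(x) / len(x)                       # ReduceMean
    centered = [v - mean for v in x]             # Sub
    var = sum(c * c for c in centered) / len(x)  # Mul + ReduceMean
    std = math.sqrt(var + eps)                   # Add + Sqrt
    normed = [c / std for c in centered]         # Div
    return [n * g + b for n, g, b in zip(normed, gamma, beta)]  # Mul + Add


x = [1.0, 2.0, 3.0, 4.0]
gamma = [1.0] * len(x)
beta = [0.0] * len(x)
fused = layer_norm_fused(x, gamma, beta)
decomposed = layer_norm_decomposed(x, gamma, beta)
```

Since the two paths are numerically equivalent, an inference engine that only sees the decomposed subgraph gains nothing from the expansion; emitting the fused op simply hands it the pattern it already knows how to optimize.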