Closed charanpool closed 5 years ago
I believe the naming has nothing to do with determining which tensor is which; the order alone decides. According to the ONNX documentation, in the case of convolution, the first tensor is the input activation, the second is the weights, and the third (optional) is the bias. I guess the same is true for all other ops.
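To illustrate the point above, here is a minimal Python sketch (the helper and the role list are hypothetical, not part of the ONNX API) showing how Conv inputs are identified purely by their position in the input list, regardless of what the tensors are named:

```python
# Per the ONNX operator spec, Conv inputs appear in this order;
# B (bias) is optional and may be omitted from the end of the list.
CONV_INPUT_ROLES = ["X", "W", "B"]

def conv_input_roles(input_names):
    """Map each positional input name to its Conv role (hypothetical helper)."""
    return dict(zip(CONV_INPUT_ROLES, input_names))

# Names like "variable_7" carry no meaning; position decides the role:
print(conv_input_roles(["data", "variable_7", "variable_8"]))
# {'X': 'data', 'W': 'variable_7', 'B': 'variable_8'}
```

The same positional convention applies whether the converter emits descriptive names or generated ones such as variable_N.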
But in the case of BatchNormalization, when the inputs are interpreted in the order scale, offset, mean, and variance, the converted graph fails.
What is failing exactly? The NNEF parser? Building the ONNX model? Running the ONNX model? Does something crash, give an error message, or just produce incorrect results?
Even if the order of the variables were swapped, their shapes are the same and only their contents differ, so parsing and building the model should not be a problem. Can you describe the failure in more detail?
We have a unit test case for BatchNormalization that passes fine.
No problem, the issue has been resolved. Thank you. Can you point to the part of the documentation where it is stated that the order is what matters?
I don't think it is explicitly specified, but since inputs are stored as a list (a repeated item), what else could it be? And I guess the documentation for each operation lists the inputs in the order in which they should appear in the list, with optional inputs left out from the end.
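As a concrete sketch of the convention described above (the helper is hypothetical; the role order follows the ONNX BatchNormalization spec: X, scale, B for offset, mean, var), note that dropping optional inputs from the end of the list works naturally:

```python
# BatchNormalization inputs, in the order the ONNX spec lists them.
BN_INPUT_ROLES = ["X", "scale", "B", "mean", "var"]

def bn_input_roles(input_names):
    """Map positional inputs to BatchNormalization roles (hypothetical helper).

    zip() stops at the shorter sequence, so trailing optional inputs
    that were left out of the list simply produce no entry."""
    return dict(zip(BN_INPUT_ROLES, input_names))

full = bn_input_roles(["act", "variable_1", "variable_2", "variable_3", "variable_4"])
print(full["scale"], full["var"])
# variable_1 variable_4
```

This is why swapping the order of the variables would change which tensor is treated as scale versus variance even though their shapes are identical.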
Can I close this issue?
Thank you. I am closing this.
When the nnef_tools/convert.py script is used to convert from NNEF format to ONNX format with the following command:
python nnef_tools/convert.py --input-format nnef --output-format onnx --input-model ../nnefParser/parser/cpp/examples/yolov3/graph.nnef
the weight tensors are named variable_N (where N is a serial number). These names give no standard way to classify the tensors as scale, variance, etc. If this is the case, how can we differentiate between the filter and bias tensors among a convolution's inputs?