YongseopKim opened 2 years ago
cc/ @seanshpark and @mhs4670go
Are there any optimization options that should be excluded for certain models? If you can point me to related issues or code, I'll put the list together.
Hmm.. we can import `tf`, `tflite`, and `onnx` models. And I think there hasn't been an unacceptable one. Conversely, there could be recommended options like `convert_nchw_to_nhwc` for `onnx` models.
But there are disallowed options for each backend. For example, some backends don't support an operator that can be generated by certain optimization passes, which means those passes shouldn't be run for them.

@seanshpark @jinevening What do you think?
Transformations related to NCHW-to-NHWC conversion, transpose, and others meant for models from ONNX can harm TF or tflite models. But we have never tested what may happen.
I skimmed through the options in `circle2circle`.

As @seanshpark mentioned, `convert_nchw_to_nhwc` should be applied only to models from onnx. Otherwise, performance will be severely degraded.

`make_batchnorm_gamma_positive` should not be applied by default to any type of model (I think it would be fine to remove the option altogether). It can change the inference result.

`replace_cw_mul_add_with_depthwise_conv` should not be applied by default to any type of model. It conflicts with `fused_batchnorm`.

`nchw_to_nhwc_input_shape` and `nchw_to_nhwc_output_shape` should be applied selectively to models from onnx. I said 'selectively' because they change the model signature (the shapes of inputs/outputs). A machine-readable restatement of these constraints is sketched below.
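For instance, the constraints above could be captured in a small table like the following. This is a hypothetical restatement for illustration only; `OPTION_CONSTRAINTS` is not an actual ONE/circle2circle data structure.

```python
# Hypothetical, machine-readable restatement of the constraints above.
# NOT an actual ONE data structure; names and layout are illustrative.
OPTION_CONSTRAINTS = {
    "convert_nchw_to_nhwc": {
        "apply_to": ["onnx"],  # only models imported from onnx
        "reason": "severe performance degradation on NHWC (tf/tflite) models",
    },
    "make_batchnorm_gamma_positive": {
        "apply_to": [],        # never by default
        "reason": "can change the inference result",
    },
    "replace_cw_mul_add_with_depthwise_conv": {
        "apply_to": [],        # never by default
        "reason": "conflicts with fused_batchnorm",
    },
    "nchw_to_nhwc_input_shape": {
        "apply_to": ["onnx"],  # selectively: changes model signature
        "reason": "changes the shape of model inputs",
    },
    "nchw_to_nhwc_output_shape": {
        "apply_to": ["onnx"],  # selectively: changes model signature
        "reason": "changes the shape of model outputs",
    },
}
```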
@jinevening @seanshpark @mhs4670go, great comments. Thanks!
To summarize your opinions (if I misunderstand, please let me know):

- `make_batchnorm_gamma_positive`, `replace_cw_mul_add_with_depthwise_conv`, `nchw_to_nhwc_input_shape`, and `nchw_to_nhwc_output_shape` should not be applied by default.
- `convert_nchw_to_nhwc` should be applied only to models from onnx.

Therefore, I can suggest a way like the proposal below, using `convert_nchw_to_nhwc` as the main example.
While investigating, I have a question. The tf/tflite/onnx models can accept all options in the general case. However, tf/tflite models that are transformed/converted from onnx models should accept `convert_nchw_to_nhwc`.
Not true. I think you don't understand what is going on. I recommend you look into `compiler/luci/pass`.

> All options are True as default

The whole point is that we cannot expect `circle2circle` to work as expected with this.
How do we know whether this model is transformed from onnx?
AFAIK, there is no way to check whether a model (pb/tflite/circle) was transformed from onnx. We can guess by looking at the model's layout by eye, but cannot confirm it. (A sketch of such a heuristic guess follows below.)
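For illustration, an automated version of that layout guess might look like the sketch below. `guess_layout` and its channel-count threshold are assumptions made up for this example, and the heuristic can be wrong, which is exactly why provenance cannot be confirmed this way.

```python
def guess_layout(input_shape):
    """Heuristically guess NCHW vs NHWC from a 4-D input shape.

    Mirrors the 'guess by eyes' approach: image-like models usually have a
    small channel count (1..4) and larger spatial dimensions.
    """
    if len(input_shape) != 4:
        return "unknown"
    n, d1, d2, d3 = input_shape
    if d1 <= 4 and d2 > 4 and d3 > 4:
        return "NCHW"  # e.g. [1, 3, 224, 224], typical for onnx
    if d3 <= 4 and d1 > 4 and d2 > 4:
        return "NHWC"  # e.g. [1, 224, 224, 3], typical for tf/tflite
    return "unknown"

print(guess_layout([1, 3, 224, 224]))  # NCHW -> maybe from onnx
print(guess_layout([1, 224, 224, 3]))  # NHWC -> maybe tf/tflite native
```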
If not, it would not be possible to support onnx_to_tf/onnx_to_tflite.

Sorry, I didn't get the point. onnx_to_tf/onnx_to_tflite is used for onnx models.
Therefore, I can suggest a way like the following (a sketch of this flow in code follows after the list):

- All options are True by default.
- Apply the unacceptable list for all models to the options.
  - 'Apply the unacceptable list' means that the options described in the list are set to False.
- Apply the unacceptable list for the model type to the options.
  - tf/tflite/onnx's unacceptable list: `convert_nchw_to_nhwc`
  - onnx_to_tf/onnx_to_tflite's unacceptable list: (x)
- Apply the unacceptable list for the backend to the options.
  - (This can be done in the future.)
- Lastly, let's apply the resulting list.
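A minimal sketch of that flow in Python, using only the list contents proposed above. Names like `resolve_options` are made up for illustration; this mirrors the proposal as written (including the disputed parts), not agreed-upon behavior.

```python
ALL_OPTIONS = [
    "convert_nchw_to_nhwc",
    "nchw_to_nhwc_input_shape",
    "nchw_to_nhwc_output_shape",
    "make_batchnorm_gamma_positive",
    "replace_cw_mul_add_with_depthwise_conv",
    # ... every other circle2circle option
]

# Unacceptable list applied to all models, regardless of type.
UNACCEPTABLE_FOR_ALL = {
    "make_batchnorm_gamma_positive",
    "replace_cw_mul_add_with_depthwise_conv",
}

# Per-model-type unacceptable lists, as in the proposal above.
UNACCEPTABLE_BY_MODEL_TYPE = {
    "tf": {"convert_nchw_to_nhwc"},
    "tflite": {"convert_nchw_to_nhwc"},
    "onnx": {"convert_nchw_to_nhwc"},
    "onnx_to_tf": set(),      # (x): nothing excluded
    "onnx_to_tflite": set(),  # (x): nothing excluded
}

# Per-backend lists could be filled in later.
UNACCEPTABLE_BY_BACKEND = {}

def resolve_options(model_type, backend=None):
    # 1. All options are True by default.
    options = {name: True for name in ALL_OPTIONS}
    # 2-4. 'Applying an unacceptable list' sets those options to False.
    excluded = set(UNACCEPTABLE_FOR_ALL)
    excluded |= UNACCEPTABLE_BY_MODEL_TYPE.get(model_type, set())
    excluded |= UNACCEPTABLE_BY_BACKEND.get(backend, set())
    for name in excluded:
        options[name] = False
    return options

print(resolve_options("onnx_to_tflite"))
```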
I suggest running multiple templates according to the input model type or the target backend type. That simplifies the problem, and the maintenance burden will be lower. See also https://github.com/Samsung/ONE/pull/9259#discussion_r897528685.
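If I read that suggestion right, each model type would get its own complete, hand-maintained option template instead of deriving options by subtracting unacceptable lists. A rough sketch under that assumption; the template contents are placeholders, not real recommendations.

```python
# Sketch: one complete template per model type. Contents are placeholders.
TEMPLATES = {
    "onnx":   {"convert_nchw_to_nhwc": True,  "make_batchnorm_gamma_positive": False},
    "tf":     {"convert_nchw_to_nhwc": False, "make_batchnorm_gamma_positive": False},
    "tflite": {"convert_nchw_to_nhwc": False, "make_batchnorm_gamma_positive": False},
}

def check_templates(all_options):
    # The maintenance work shifts to keeping each template complete and explicit.
    for model_type, template in TEMPLATES.items():
        missing = set(all_options) - set(template)
        assert not missing, f"{model_type} template misses: {sorted(missing)}"

check_templates(["convert_nchw_to_nhwc", "make_batchnorm_gamma_positive"])
```

The trade-off in this design: templates are explicit and easy to review per model type, but every new option must be added to every template, which the completeness check above is meant to catch.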
Hmm... regarding @mhs4670go's comment:

> I think there hasn't been an unacceptable one. Conversely, there could be recommended options like `convert_nchw_to_nhwc` for `onnx` models.

I understood that almost all options would not cause problems... I'll think again.
FYI, @mhs4670go and I discussed this, and his saying that 'there hasn't been an unacceptable one' is based on the fact that `circle2circle`'s passes are tested one by one. (@mhs4670go, if I am wrong, please fix it.)
> If not, it would not be possible to support onnx_to_tf/onnx_to_tflite.
>
> Sorry, I didn't get the point. onnx_to_tf/onnx_to_tflite is used for onnx models.

Ah, my point was that `one-optimizer` doesn't know whether a tf/tflite model comes from onnx or not.

> perform import models of ONNX / TF / TFLITE with each options and see before-after

The case number = num of models * 2 ^ (num of options). Umm... I'll try other ways... (or I could misunderstand.)
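To make that explosion concrete, here is the arithmetic with made-up placeholder counts (not actual circle2circle numbers):

```python
num_models = 3    # placeholder: e.g. one tf, one tflite, one onnx test model
num_options = 30  # placeholder: circle2circle has on the order of tens of options
cases = num_models * 2 ** num_options
print(cases)      # 3221225472 -- clearly not exhaustively testable
```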
> I suggest running multiple templates according to the input model type or the target backend type. That simplifies the problem, and the maintenance burden will be lower. See also https://github.com/Samsung/ONE/pull/9259#discussion_r897528685.

Okay, I'll consider it right now.
There are two modules that test optimization passes with `circle2circle`: `circle2circle-dredd-recipe-test` and `luci-pass-value-test`.

The first one checks whether a pass works at the operator level rather than the value level. For example, say a pass replaces operator A with operator B. Then the module checks that the optimized model contains operator B and not operator A. But it doesn't do a value test.

The second one literally compares the inference results of both models. In spirit, the two checks look like the sketch below.
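A toy sketch of the two kinds of checks. The real modules operate on circle files and dredd rule files, not Python lists, and the function names here are made up for illustration.

```python
import numpy as np

def operator_level_check(optimized_ops, replaced_op, replacement_op):
    # circle2circle-dredd-recipe-test style: after a pass that replaces
    # operator A with operator B, B must be present and A must be gone.
    return replacement_op in optimized_ops and replaced_op not in optimized_ops

def value_level_check(outputs_before, outputs_after, rtol=1e-5, atol=1e-5):
    # luci-pass-value-test style: inference results must stay numerically equal.
    return all(np.allclose(a, b, rtol=rtol, atol=atol)
               for a, b in zip(outputs_before, outputs_after))

assert operator_level_check(["Conv2D", "Relu"],
                            replaced_op="Mul", replacement_op="Conv2D")
assert value_level_check([np.ones(4)], [np.ones(4) + 1e-7])
```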
> `circle2circle`'s passes are tested one by one

I thought all passes were being tested, but they are not. You can see the test list in `compiler/luci-value-test/test.lst`. If you want to know which options can be turned on, this is what I came up with, though there could be a better idea: add `luci-pass-value-test` for all passes.

> The case number = num of models * 2 ^ (num of options). Umm... I'll try other ways... (or I could misunderstand.)
I wouldn't suggest checking that a pass works in this way. I think you totally misunderstood what I am trying to say.
Okay. I'm clearly misunderstanding this. I think I'm not the right person for this task, so I'll quit it.
Parent issue: https://github.com/Samsung/ONE/issues/9191
Let's define an unacceptable-optimization list for each model type. The lists would be produced by some scripts or programs.