Samsung / ONE

On-device Neural Engine

[one-cmds] define each unacceptable optimization list for each model type #9260

Open · YongseopKim opened this issue 2 years ago

YongseopKim commented 2 years ago

Parent issue: https://github.com/Samsung/ONE/issues/9191

For No. 1: one-optimize would have an unacceptable list for each model type and then support it. (Of course, this could be supported by scripts or programs.)

Let's define an unacceptable optimization list for each model type. The lists could be generated by scripts or programs.
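
For illustration only, here is a minimal sketch (not existing one-cmds code) of what such per-model-type lists could look like if kept as plain Python data for a script to consume; every option and model-type entry in it is a placeholder.

```python
# Hypothetical sketch: "unacceptable" optimization lists kept as plain data.
# All option names below are placeholders; defining the real lists is exactly
# what this issue is about.

UNACCEPTABLE_FOR_ALL_MODELS = {"some_risky_pass"}

UNACCEPTABLE_BY_MODEL_TYPE = {
    "tf":     {"another_pass"},
    "tflite": {"another_pass"},
    "onnx":   set(),
}

def allowed_options(all_options, model_type):
    """Drop options that are unacceptable globally or for this model type."""
    banned = UNACCEPTABLE_FOR_ALL_MODELS | UNACCEPTABLE_BY_MODEL_TYPE.get(model_type, set())
    return [opt for opt in all_options if opt not in banned]
```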

YongseopKim commented 2 years ago

cc/ @seanshpark and @mhs4670go

Are there any optimization options that should be excluded for certain models? If you point me to the relevant issues or code, I'll put the list together.

mhs4670go commented 2 years ago

Hmm.. we can import tf, tflite, and onnx models, and I don't think there has been an unacceptable option so far. Conversely, there could be recommended options, like convert_nchw_to_nhwc for onnx models.

But there are disallowed options for each backend. For example, some backends don't support an operator that can be generated by certain optimization passes, which means those passes shouldn't be run.
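
To make the backend case concrete: if we knew which operators a pass can generate and which operators a backend supports, the per-backend disallowed passes could be derived mechanically. A small sketch, with invented backend names and operator mappings (only the pass names come from circle2circle):

```python
# Hypothetical sketch of deriving per-backend disallowed passes.
# Backend names and operator sets are made up for illustration.

OPS_GENERATED_BY_PASS = {
    "replace_cw_mul_add_with_depthwise_conv": {"DEPTHWISE_CONV_2D"},
    "fuse_instnorm": {"INSTANCE_NORM"},
}

OPS_SUPPORTED_BY_BACKEND = {
    "backend_a": {"CONV_2D", "DEPTHWISE_CONV_2D", "INSTANCE_NORM", "ADD", "MUL"},
    "backend_b": {"CONV_2D", "INSTANCE_NORM", "ADD", "MUL"},  # no DEPTHWISE_CONV_2D
}

def disallowed_passes(backend):
    """A pass is disallowed if it may generate an operator the backend lacks."""
    supported = OPS_SUPPORTED_BY_BACKEND[backend]
    return {p for p, ops in OPS_GENERATED_BY_PASS.items() if not ops <= supported}

print(disallowed_passes("backend_b"))  # {'replace_cw_mul_add_with_depthwise_conv'}
```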

@seanshpark @jinevening What do you think?

seanshpark commented 2 years ago

Transformations related to NCHW-to-NHWC conversion, transpose, and others intended for models from ONNX can harm TF or tflite models. But I have never tested what actually happens.

jinevening commented 2 years ago

I skimmed through the options in circle2circle.

As @seanshpark mentioned, convert_nchw_to_nhwc should be applied only to models from onnx. Otherwise, performance will be severely degraded.

make_batchnorm_gamma_positive should not be applied by default to any type of model (I think it would be fine to remove the option altogether). It can change the inference result.

replace_cw_mul_add_with_depthwise_conv should not be applied by default to any type of model. It conflicts with fused_batchnorm.

nchw_to_nhwc_input_shape and nchw_to_nhwc_output_shape should be selectively applied to models from onnx. I say 'selectively' because they change the model signature (the shapes of the inputs/outputs).
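
For overview only, the recommendations above restated as data (this is not an existing file in ONE; the notes paraphrase the comment):

```python
# Summary of the recommendations in this comment, as data (illustrative only).
# "apply_to" lists the model types an option makes sense for; None means the
# option should stay off by default for every model type.

OPTIMIZATION_GUIDANCE = {
    "convert_nchw_to_nhwc":                   {"apply_to": ["onnx"], "note": "severe slowdown for non-onnx models"},
    "nchw_to_nhwc_input_shape":               {"apply_to": ["onnx"], "note": "apply selectively; changes input shapes"},
    "nchw_to_nhwc_output_shape":              {"apply_to": ["onnx"], "note": "apply selectively; changes output shapes"},
    "make_batchnorm_gamma_positive":          {"apply_to": None,     "note": "can change inference results"},
    "replace_cw_mul_add_with_depthwise_conv": {"apply_to": None,     "note": "conflicts with fused batchnorm"},
}
```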

YongseopKim commented 2 years ago

@jinevening @seanshpark @mhs4670go, great comments. Thanks!

To summarize your opinions (please let me know if I have misunderstood): the tf/tflite/onnx models can accept all options in the general case. However, tf/tflite models that are transformed/converted from onnx models should accept convert_nchw_to_nhwc.

Therefore, I can suggest a way like this (a rough sketch of these steps follows the list):

  1. All options are True by default.
  2. Apply the unacceptable list for all models to the options.
    • 'Apply the unacceptable list' means that the options described in the list are set to False.
  3. Apply the unacceptable list for the model type to the options.
    • tf/tflite/onnx's unacceptable list: convert_nchw_to_nhwc
    • onnx_to_tf/onnx_to_tflite's unacceptable list: (x)
  4. Apply the unacceptable list for the backend to the options.
    • (This can be done in the future.)
    • This list is applied last.
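
A rough sketch of steps 1-4, assuming the lists are plain sets; the option subset is illustrative and this is not existing one-cmds code, it just encodes the steps as written above:

```python
# Rough sketch of steps 1-4 above (illustrative; not one-cmds code).

ALL_OPTIONS = ["convert_nchw_to_nhwc", "fuse_batchnorms", "fold_dequantize"]  # illustrative subset

UNACCEPTABLE_FOR_ALL = set()                       # step 2 input
UNACCEPTABLE_BY_MODEL_TYPE = {                     # step 3 input, as listed above
    "tf": {"convert_nchw_to_nhwc"},
    "tflite": {"convert_nchw_to_nhwc"},
    "onnx": {"convert_nchw_to_nhwc"},
    "onnx_to_tf": set(),
    "onnx_to_tflite": set(),
}
UNACCEPTABLE_BY_BACKEND = {}                       # step 4 input, left for the future

def resolve_options(model_type, backend=None):
    options = {opt: True for opt in ALL_OPTIONS}                        # step 1
    for opt in UNACCEPTABLE_FOR_ALL:                                    # step 2
        options[opt] = False
    for opt in UNACCEPTABLE_BY_MODEL_TYPE.get(model_type, set()):       # step 3
        options[opt] = False
    for opt in UNACCEPTABLE_BY_BACKEND.get(backend, set()):             # step 4 (last)
        options[opt] = False
    return options

print(resolve_options("tf"))
# {'convert_nchw_to_nhwc': False, 'fuse_batchnorms': True, 'fold_dequantize': True}
```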

While investigating, I have a question: how do we know whether a model is transformed from onnx? If not, it would not be possible to support onnx_to_tf/onnx_to_tflite.

seanshpark commented 2 years ago

The tf/tflite/onnx models can accept all options in the general case. However, tf/tflite models that are transformed/converted from onnx models should accept convert_nchw_to_nhwc.

Not true. I think you don't understand what is going on. I recommend you to perform imports of ONNX / TF / TFLITE models with each option and see the before-after.

All options are True by default.

The whole point is that we cannot expect circle2circle to work as expected with this.

jinevening commented 2 years ago

How do we know whether this model is transformed from onnx?

AFAIK, there is no way to check whether a model (pb/tflite/circle) was transformed from onnx. We can guess by looking at the model's layout by eye, but we cannot confirm it.

If not, it would not be possible to support onnx_to_tf/onnx_to_tflite.

Sorry, I didn't get the point. onnx_to_tf/onnx_to_tflite is used for onnx models.

lemmaa commented 2 years ago

Therefore, I can suggest a way like this:

  1. All options are True by default.
  2. Apply the unacceptable list for all models to the options.
    • 'Apply the unacceptable list' means that the options described in the list are set to False.
  3. Apply the unacceptable list for the model type to the options.
    • tf/tflite/onnx's unacceptable list: convert_nchw_to_nhwc
    • onnx_to_tf/onnx_to_tflite's unacceptable list: (x)
  4. Apply the unacceptable list for the backend to the options.
    • (This can be done in the future.)
    • This list is applied last.

I suggest running multiple templates according to the input model type or the target backend type. That simplifies the problem, and the maintenance burden will be lower. See also https://github.com/Samsung/ONE/pull/9259#discussion_r897528685.
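
A sketch of the "multiple templates" idea, assuming a template is just a named set of option values selected by model type; in practice these could be separate onecc configuration files, and the values below are illustrative rather than recommended settings:

```python
# Illustrative "one template per model type" sketch; values are made up.
TEMPLATES = {
    "onnx":   {"convert_nchw_to_nhwc": True,  "fuse_batchnorms": True},
    "tflite": {"convert_nchw_to_nhwc": False, "fuse_batchnorms": True},
    "tf":     {"convert_nchw_to_nhwc": False, "fuse_batchnorms": True},
}

def template_for(model_type):
    # Backend-specific templates could later be selected (or layered) the same
    # way, which keeps each template small and easy to maintain.
    return dict(TEMPLATES[model_type])
```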

YongseopKim commented 2 years ago

Hmm... about @mhs4670go's comment:

I don't think there has been an unacceptable option so far. Conversely, there could be recommended options, like convert_nchw_to_nhwc for onnx models.

I understood this to mean that most options would not cause problems... I'll think it over again.

YongseopKim commented 2 years ago

FYI, @mhs4670go and I discussed this, and his comment is based on

(@mhs4670go, if I am wrong, please correct me.)

YongseopKim commented 2 years ago

If not, it would not be possible to support onnx_to_tf/onnx_to_tflite.
Sorry, I didn't get the point. onnx_to_tf/onnx_to_tflite is used for onnx models.

Ah, my point was that

YongseopKim commented 2 years ago

perform imports of ONNX / TF / TFLITE models with each option and see the before-after

The number of cases = (number of models) * 2^(number of options)

Umm... I'll try other ways... (or I could be misunderstanding.)
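
Just to make the scale of that formula concrete (the numbers are illustrative):

```python
# Illustrative only: exhaustive on/off testing of options explodes quickly.
num_models = 3            # e.g. one tf, one tflite, one onnx model
num_options = 30          # circle2circle has on the order of tens of options
print(num_models * 2 ** num_options)   # 3221225472, roughly 3.2 billion runs
```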

YongseopKim commented 2 years ago

I suggest running multiple templates according to the input model type or the target backend type. That simplifies the problem, and the maintenance burden will be less. See also https://github.com/Samsung/ONE/pull/9259#discussion_r897528685 .

Okay, I'll consider right now.

mhs4670go commented 2 years ago

There are two modules that test optimization passes with circle2circle: circle2circle-dredd-recipe-test and luci-pass-value-test.

The first one checks whether a pass works at the operator level rather than the value level. For example, say a pass replaces operator A with operator B. Then the module checks that the optimized model has operator B, not operator A. But it doesn't do a value test.

The second one literally checks whether the inference results of the two models (before and after optimization) match.

  • circle2circle's passes are tested one by one

I thought all passes were being tested, but they are not. You can see the test list in compiler/luci-value-test/test.lst. If you want to know which options can be turned on, this is what I came up with, but there could be a better idea.

  1. Add a single value test to luci-pass-value-test for all passes.
  2. Introduce an integration test whose model resources cover all operators that are being optimized by all passes.
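
As a rough illustration of the value-level check described above (this is not the actual luci-pass-value-test code; run_inference stands in for whatever interpreter driver the test would use):

```python
import numpy as np

def value_test(run_inference, original_circle, optimized_circle, input_shape, atol=1e-5):
    """Compare inference results of a model before and after one optimization pass.

    run_inference(model_path, inputs) -> list of output arrays is supplied by
    the caller (hypothetical stand-in for the real test driver).
    """
    x = np.random.rand(*input_shape).astype(np.float32)   # same input for both
    ref = run_inference(original_circle, [x])
    opt = run_inference(optimized_circle, [x])
    return all(np.allclose(r, o, atol=atol) for r, o in zip(ref, opt))
```
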
seanshpark commented 2 years ago

The number of cases = (number of models) * 2^(number of options)
Umm... I'll try other ways... (or I could be misunderstanding.)

I wouldn't suggest checking whether the passes work in this way. I think you have totally misunderstood what I am trying to say.

YongseopKim commented 2 years ago

Okay. I'm clearly misunderstanding this. I think I'm not the right person for this task. I'll step away from it.