Samsung / ONE

On-device Neural Engine

Compiler FE: Direct ONNX support #6499

Open lucenticus opened 3 years ago

lucenticus commented 3 years ago

As far as I understand, the current version of the ONE compiler supports ONNX models by using onnx-tensorflow. In my understanding, we convert the ONNX model to tflite first and after that convert it to circle. @seanshpark I was just wondering whether we plan to support direct conversion from onnx to circle? If so, then I think we can contribute to this task. Additionally, do you perhaps need some help supporting some operators in onnx-tensorflow?

CC: @lemmaa

seanshpark commented 3 years ago

whether we plan to support direct conversion from onnx to circle?

This is a way to provide faster conversion (compared to onnx -> tensorflow -> tensorflow lite -> circle) :) As we already have the onnx2circle project in compiler from previous years, we can continue on that project. The question is how to schedule providing the operators: (1) for well-known models, (2) for our in-house models, (3) for all the operators in onnx-1.7 and maybe the next onnx-1.8; and how to test them (per operator, with small models, ...) and provide good(?) conversion transforms... As I didn't contribute to this project, I couldn't predict this, so I began with onnx-tensorflow...
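To make the scheduling question above concrete, here is a minimal sketch of how per-operator coverage could be tracked against the three priority buckets (well-known models, in-house models, full onnx-1.7 set). All names and operator sets below are illustrative assumptions, not ONE APIs or actual coverage data.

```python
# Hypothetical sketch: tracking ONNX -> circle operator coverage to help
# schedule the work items discussed above. Op sets are made up for illustration.

# Priority buckets from the discussion: (1) well-known models,
# (2) our in-house models, (3) the full onnx-1.7 operator set.
WELL_KNOWN_OPS = {"Conv", "Relu", "MaxPool", "Gemm", "Softmax"}

# Ops a (hypothetical) direct importer handles so far.
SUPPORTED = {"Conv", "Relu", "MaxPool"}

def coverage(required, supported=SUPPORTED):
    """Return (fraction of required ops covered, sorted list of missing ops)."""
    missing = sorted(required - supported)
    return len(required & supported) / len(required), missing

ratio, missing = coverage(WELL_KNOWN_OPS)
# With the sets above: 3 of 5 ops covered, "Gemm" and "Softmax" missing.
```

A registry like this would also suggest the per-operator testing strategy mentioned above: each entry in the supported set should correspond to a small single-op test model.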

So, do we have a plan? What I can say is that there is NO "there is no plan"; it's a matter of resources and schedule... I personally think it's good to provide multiple solutions for our customers :)

Additionally, do you perhaps need some help supporting some operators in onnx-tensorflow?

Currently known issues with our in-house models are solved (we may have unknown issues...). The next step is to upgrade when onnx-1.8 is released, together with the latest TensorFlow, and to upgrade the TFLite schema. And we may have to solve problems from our in-house clients when they really work with our compiler.

lucenticus commented 3 years ago

Thanks a lot for your feedback. If I understood correctly, there are no known issues with in-house models, but if any issues with these models do come up, please let me know. I had an offline discussion regarding onnx2circle with @binarman and we decided to check whether this project might be useful as a basis for direct ONNX support. One of the main issues is that onnx2circle supports quite an old version of ONNX. Anyway, let's return to this discussion after investigating the current status of onnx2circle. I agree with you that it is good to have multiple solutions for our end users :)

binarman commented 3 years ago

I have looked at the onnx2circle tool and some other projects related to ONNX, and I have several possible options for how we can implement direct ONNX support =)

For the purpose of clear definitions in this text:

What we have

ONE repo

Third party

onnx2mlir transformer link. This is essentially an ONNX compiler capable of producing native code from an ONNX model using LLVM infrastructure.

The interesting part of this project is the ONNX importer, which claims full support of ONNX 1.6 right now and 1.8 in the near future.

Solutions

Common parts of every possible ONNX->Circle transformer:

- List of existing importers: mir_onnx, onnx2mlir
- List of existing exporters: exo, luci
- Only one converter: mir2loco

Using these components I can imagine 4 possible combinations:

effort_diagram (image)

| Color | Status |
| --- | --- |
| Red | Component does not exist yet |
| Yellow | Component has to be updated |
| Green | Component is complete or needs minimal changes |
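The four combinations in the diagram follow mechanically from pairing each importer with each exporter. As a sketch of that enumeration (the glue rule below, that only mir-based pipelines need mir2loco, is my reading of the components listed above, not a ONE API):

```python
# Enumerate the four possible ONNX -> circle pipelines from the
# importer/exporter components listed above. Illustrative sketch only.
from itertools import product

importers = ["mir_onnx", "onnx2mlir"]  # existing ONNX importers
exporters = ["exo", "luci"]            # existing circle exporters

def pipeline(importer, exporter):
    # Assumption: mir-based pipelines need the mir2loco converter as glue;
    # the mlir-based path would need new bridge code (the "red" components).
    glue = ["mir2loco"] if importer == "mir_onnx" else []
    return [importer] + glue + [exporter]

pipelines = [pipeline(i, e) for i, e in product(importers, exporters)]
# Four candidate pipelines, e.g. ["onnx2mlir", "luci"] for the mlir -> luci path.
```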

Conclusion

In my opinion mlir -> luci is the most promising solution so far, since it combines two well-tested components. As a bonus, MLIR in theory opens some interesting possibilities for integration with other projects.

P.S. We need to investigate the onnx2mlir project more to estimate how much effort it takes to work with it. These estimates are relative and say nothing about "absolute" time.

P.P.S. There is also the possibility of implementing a brand new importer or exporter, but I cannot think of obvious benefits that this would bring. Some restrictions could revive this approach though (for example, if we cannot bring third-party code into the compiler).