openvinotoolkit / openvino

OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference
https://docs.openvino.ai
Apache License 2.0

[Good First Issue]: Align behavior of ONNX Frontend function ReduceLogSumExp-11, 13, 18 with original framework #20562

Open gkrivor opened 1 year ago

gkrivor commented 1 year ago

Context

Neural networks are graphs consisting of nodes called operators. Each operator corresponds to a mathematical function, usually described in the framework's documentation or in an AI standard such as ONNX. The OpenVINO ONNX Frontend is the component responsible for working with ONNX graphs, and it requires implementations of the various ONNX operators in order to use ONNX models. This task requires aligning the OpenVINO ONNX Frontend with the original framework's implementation of ReduceLogSumExp for the following opsets: opset 11, opset 13, and opset 18. Necessary help will be provided by the ONNX Frontend team.
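
For reference, ReduceLogSumExp reduces the input along the selected axes by taking the logarithm of the sum of exponentials (with keepdims controlling whether the reduced dimensions are retained):

```math
\operatorname{ReduceLogSumExp}(x, \text{axes}) = \log\left(\sum_{\text{axes}} e^{x}\right)
```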

What needs to be done?

First of all, please take a look at the ReduceMax PR for reference.

Operator details can be found in ONNX Operators. More details can be found in the ONNX Changelog: opset 11, opset 13, opset 18.

  1. The function already has a common implementation in OpenVINO. First of all, review the documentation and prepare a table of the differences between versions. These could be, for instance, a missing property, extended/reduced coverage of an existing property, etc.
  2. Copy the existing implementation here and align it with the original framework (extend or reduce the coverage of the common implementation). The modified copy should live in the corresponding opset, or in opset 1 if it implements the oldest version. See the example of a multi-opset operation, and the sketch after this list.
  3. Register the function in ops_bridge.cpp, keeping alphabetical order.
  4. Create test model(s) in the ONNX models directory. The OpenVINO test infrastructure then converts the prototxt files to ONNX models; you will use those models later in tests.
  5. Add tests covering all use cases here.
  6. Check the Python xfailed tests to find any tests marked as xfailed for the added functionality. If any exist, remove the corresponding lines and verify by running "python -m pytest -k name_of_test". More details are in the adding operators to ONNX Frontend guide.
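
To show what the common composition boils down to, here is a minimal, standalone sketch that builds ReduceLogSumExp from OpenVINO core ops as Log(ReduceSum(Exp(data))). The helper name and signature below are illustrative only; the real per-opset wiring of ONNX attributes and inputs belongs in src/frontends/onnx/frontend/src/op/reduce.cpp and should follow the ReduceMax PR.

```cpp
// Illustrative sketch only: ReduceLogSumExp as Log(ReduceSum(Exp(data), axes)).
// The function name and signature are hypothetical; the actual ONNX frontend
// code maps the per-opset attributes/inputs onto this composition.
#include <cstdint>
#include <memory>
#include <vector>

#include "openvino/op/constant.hpp"
#include "openvino/op/exp.hpp"
#include "openvino/op/log.hpp"
#include "openvino/op/reduce_sum.hpp"

ov::Output<ov::Node> make_reduce_log_sum_exp(const ov::Output<ov::Node>& data,
                                              const std::vector<int64_t>& axes,
                                              bool keep_dims) {
    // Reduction axes as a constant node.
    const auto axes_const =
        ov::op::v0::Constant::create(ov::element::i64, ov::Shape{axes.size()}, axes);
    // exp(x)
    const auto exp = std::make_shared<ov::op::v0::Exp>(data);
    // sum(exp(x)) along 'axes', honouring keepdims
    const auto sum = std::make_shared<ov::op::v1::ReduceSum>(exp, axes_const, keep_dims);
    // log(sum(exp(x)))
    return std::make_shared<ov::op::v0::Log>(sum)->output(0);
}
```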

Example Pull Requests

No response

Resources

Contact points

@gkrivor

Ticket

No response

kshitij01042002 commented 9 months ago

Hi @gkrivor I would like to take up this issue, if it's available! Thanks.

p-wysocki commented 9 months ago

Hello @kshitij01042002, are you still working on that issue?

mwilczy commented 8 months ago

.take

github-actions[bot] commented 8 months ago

Thank you for looking into this issue! Please let us know if you have any questions or require any help.

mwilczy commented 8 months ago

Hi, I've started working on this.


|  | ReduceLogSumExp-11 | ReduceLogSumExp-13 | ReduceLogSumExp-18 |
| -- | -- | -- | -- |
| Description | Computes the log sum exponent of the input tensor's elements along the provided axes. The resulting tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, the resulting tensor has the reduced dimensions pruned. | + Input tensors of rank zero are valid. Reduction over an empty set of values yields minus infinity (if supported by the datatype) or undefined otherwise. | NO CHANGE |
| Attributes | axes: list of ints. A list of integers along which to reduce. The default is to reduce over all the dimensions of the input tensor. Accepted range is [-r, r-1] where r = rank(data).<br>keepdims: int (default is 1). Keep the reduced dimension or not; default 1 means keep the reduced dimension. | NO CHANGE | + noop_with_empty_axes: int (default is 0). Defines behavior if 'axes' is empty. Default behavior with 'false' is to reduce all axes. When axes is empty and this attribute is set to true, the input tensor will not be reduced, and the output tensor will be equivalent to the input tensor.<br>- axes: list of ints. A list of integers along which to reduce. The default is to reduce over all the dimensions of the input tensor. Accepted range is [-r, r-1] where r = rank(data). |
| Inputs | data: T. An input tensor. | data (differentiable): T. An input tensor. | + axes (optional, non-differentiable): tensor(int64). Optional input list of integers along which to reduce. The default is to reduce over all the dimensions of the input tensor if 'noop_with_empty_axes' is false, else act as an Identity op when 'noop_with_empty_axes' is true. Accepted range is [-r, r-1] where r = rank(data). |
| Outputs | reduced: T. Reduced output tensor. | reduced (differentiable): T. Reduced output tensor. | NO CHANGE |
| Type constraints | T: tensor(uint32), tensor(uint64), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(bfloat16). Constrain input and output types to numeric tensors. | NO CHANGE | NO CHANGE |
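
To illustrate the ReduceLogSumExp-18 column above (axes moves from an attribute to an optional input, and noop_with_empty_axes is added), here is a rough sketch of the resulting decision logic. make_reduce_log_sum_exp is the hypothetical helper from the sketch earlier in this issue, not frontend API, and a static input rank is assumed for brevity.

```cpp
// Rough sketch of the opset-18 behaviour only; not the ONNX frontend API.
#include <cstdint>
#include <numeric>
#include <optional>
#include <vector>

ov::Output<ov::Node> reduce_log_sum_exp_18(const ov::Output<ov::Node>& data,
                                           const std::optional<std::vector<int64_t>>& axes,
                                           bool keep_dims,
                                           bool noop_with_empty_axes) {
    if (!axes.has_value() || axes->empty()) {
        if (noop_with_empty_axes) {
            return data;  // 'axes' empty and noop_with_empty_axes=1: act as Identity
        }
        // 'axes' empty and noop_with_empty_axes=0: reduce over all dimensions.
        // Assumes the input rank is static for simplicity.
        std::vector<int64_t> all_axes(data.get_partial_shape().rank().get_length());
        std::iota(all_axes.begin(), all_axes.end(), 0);
        return make_reduce_log_sum_exp(data, all_axes, keep_dims);
    }
    return make_reduce_log_sum_exp(data, *axes, keep_dims);
}
```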

I have created the table with the differences and am now making the code changes in the relevant files, namely src/frontends/onnx/frontend/src/op/reduce.cpp and src/frontends/onnx/tests/onnx_import.in.cpp. I'm also creating a model for the tests.

However, I'm going on vacation for a week starting tomorrow, so there will be no updates during that time.

gkrivor commented 8 months ago

Hi @mwilczy, thanks for your contribution! I only disagree about the types: bfloat16 was introduced in opset-13; opset-1 and opset-11 have the same list of supported types.

inbasperu commented 8 months ago

Hi @mwilczy, just wanted to check if you're still working on this issue. If not, @gkrivor, I'd be keen to take it on if it's available! Thanks

mlukasze commented 8 months ago

@inbasperu you just took another ticket, please follow that assignment first, if I may ask ;)

mwilczy commented 8 months ago

Hi, if you guys are on a short deadline, by all means go ahead and do this. If not, I would still like to complete it. I was just busy with my vacation and a new job.


mlukasze commented 8 months ago

please continue, we can wait a little bit longer ;)

p-wysocki commented 7 months ago

Hello @mwilczy, are you still working on that issue? Do you need any help?

mwilczy commented 7 months ago

Sure, help would be fine. I'm struggling to find time to complete this, so if someone wants to pick it up, no problem.


gkrivor commented 4 months ago

.take

github-actions[bot] commented 4 months ago

Thank you for looking into this issue! Please let us know if you have any questions or require any help.