Open gkrivor opened 1 year ago
Hi @gkrivor I would like to take up this issue, if it's available! Thanks.
Hello @kshitij01042002, are you still working on that issue?
.take
Thank you for looking into this issue! Please let us know if you have any questions or require any help.
Hi, I've started working on this.
| | ReduceLogSumExp-11 | ReduceLogSumExp-13 | ReduceLogSumExp-18 |
| -- | -- | -- | -- |
| Description | Computes the log sum exponent of the input tensor's elements along the provided axes. The resulting tensor has the same rank as the input if keepdims equals 1. If keepdims equals 0, then the resulting tensor has the reduced dimension pruned. | +Input tensors of rank zero are valid. Reduction over an empty set of values yields minus infinity (if supported by the datatype) or undefined otherwise. | NO CHANGE |
| Attributes | axes : list of ints. A list of integers, along which to reduce. The default is to reduce over all the dimensions of the input tensor. Accepted range is [-r, r-1] where r = rank(data).<br>keepdims : int (default is 1). Keep the reduced dimension or not; 1 means keep the reduced dimension. | NO CHANGE | +noop_with_empty_axes : int (default is 0). Defines behavior if 'axes' is empty. Default behavior with 'false' is to reduce all axes. When axes is empty and this attribute is set to true, the input tensor will not be reduced, and the output tensor would be equivalent to the input tensor.<br>-axes : list of ints. A list of integers, along which to reduce. The default is to reduce over all the dimensions of the input tensor. Accepted range is [-r, r-1] where r = rank(data). |
| Inputs | data : T. An input tensor. | data (differentiable) : T. An input tensor. | axes (optional, non-differentiable) : tensor(int64). Optional input list of integers, along which to reduce. The default is to reduce over all the dimensions of the input tensor if 'noop_with_empty_axes' is false, else act as an Identity op when 'noop_with_empty_axes' is true. Accepted range is [-r, r-1] where r = rank(data). |
| Outputs | reduced : T. Reduced output tensor. | reduced (differentiable) : T. Reduced output tensor. | NO CHANGE |
| Type constraints | T : tensor(uint32), tensor(uint64), tensor(int32), tensor(int64), tensor(float16), tensor(float), tensor(double), tensor(bfloat16). Constrain input and output types to numeric tensors. | NO CHANGE | NO CHANGE |

I created a table with the differences and am now working on the code changes in the relevant files, namely src/frontends/onnx/frontend/src/op/reduce.cpp and src/frontends/onnx/tests/onnx_import.in.cpp. I'm also creating a model for the tests.
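For the core logic I'm thinking of something like the sketch below, built only from the core OpenVINO opset (Exp, ReduceSum, Log). The function signature and the way `data`/`axes` are obtained are my own placeholders, not the exact helpers used in reduce.cpp:

```cpp
#include <memory>

#include "openvino/op/exp.hpp"
#include "openvino/op/log.hpp"
#include "openvino/op/reduce_sum.hpp"

// Sketch only: decompose ReduceLogSumExp as Log(ReduceSum(Exp(data), axes)).
// "data" and "axes" stand for the ONNX node inputs; how they are fetched from
// the frontend Node object is approximated here, not the real reduce.cpp API.
std::shared_ptr<ov::Node> reduce_log_sum_exp(const ov::Output<ov::Node>& data,
                                              const ov::Output<ov::Node>& axes,
                                              bool keep_dims) {
    const auto exp = std::make_shared<ov::op::v0::Exp>(data);
    const auto sum = std::make_shared<ov::op::v1::ReduceSum>(exp, axes, keep_dims);
    return std::make_shared<ov::op::v0::Log>(sum);
}
```

The opset-18 specifics (axes as an optional input and noop_with_empty_axes) would still need to be handled around this decomposition.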
However, I'm going on vacation for a week starting tomorrow, so there will be no updates for a week.
Hi @mwilczy, thanks for your contribution! I only disagree about the types: bfloat16 was introduced in opset-13; opset-1 and opset-11 have the same list of supported types.
Hi @mwilczy, just wanted to check if you're still working on this issue. If not, @gkrivor, I'd be keen to take it on if it's available! Thanks
@inbasperu you just took another ticket, please follow that assignment first, if I may ask ;)
Hi, if you're on a short deadline, by all means go ahead and do this. If not, I'd still like to complete it. I was just busy with my vacation and a new job.
please continue, we can wait a little bit longer ;)
Hello @mwilczy, are you still working on that issue? Do you need any help?
Sure, help would be fine. I'm struggling to find time to complete this, so if someone wants to pick it up, no problem.
.take
Thank you for looking into this issue! Please let us know if you have any questions or require any help.
Context
Neural networks are graphs consisting of nodes called operators. Each operator corresponds to a mathematical function, usually described in the framework's documentation or in an AI standard such as ONNX. The OpenVINO ONNX Frontend is the component responsible for working with ONNX graphs and requires implementations of the various ONNX operators in order to use ONNX models. This task requires aligning the OpenVINO ONNX Frontend with the original framework implementations of ReduceLogSumExp for the following opsets: opset 11, opset 13, opset 18. Necessary help will be provided by the ONNX Frontend team.
What needs to be done?
First of all, please take a look at the ReduceMax PR for a reference.
Operator details can be found in ONNX Operators. More details can be found in the ONNX Changelog: opset 11, opset 13, opset 18.
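For cross-checking test reference values, the math itself is just log(sum(exp(x))). A minimal standalone sketch in plain C++, reducing over all elements only, so axes/keepdims handling is intentionally left out:

```cpp
#include <cmath>
#include <vector>

// Reference-only sketch: ReduceLogSumExp over all elements of a flat buffer,
// i.e. log(sum(exp(x))). Real tests reduce along specific axes and honor
// keepdims / noop_with_empty_axes, which this deliberately omits.
double reduce_log_sum_exp(const std::vector<double>& values) {
    double sum = 0.0;
    for (double v : values) {
        sum += std::exp(v);
    }
    return std::log(sum);
}
```

For example, reduce_log_sum_exp({1.0, 2.0, 3.0}) is approximately 3.4076.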
Example Pull Requests
No response
Resources
Contact points
@gkrivor
Ticket
No response