.take
Thank you for looking into this issue! Please let us know if you have any questions or require any help.
Why did I get this error, and what actions can I take to resolve it? Please guide me.
Hello @dev-seek, does this happen when running `git submodule update --init`?
A user here says that it can be fixed by switching from HTTPS to SSH. Overall, this seems to me like some limitation of your network; another user says that it might be due to a misbehaving proxy or a timeout.
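If you want to try the SSH route without editing `.gitmodules`, a URL rewrite in your git config is a common trick. This is just a sketch, assuming the submodules are hosted on GitHub and you already have an SSH key registered with your account:

```sh
# Rewrite HTTPS GitHub URLs to SSH for all repos on this machine
# (assumes GitHub-hosted submodules and a registered SSH key):
git config --global url."git@github.com:".insteadOf "https://github.com/"
git submodule update --init --recursive
```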
Yep, it occurs when I am using `git submodule update --init`, and SSH didn't work either. What to do now? Is there any other way?
Finally, issue resolved ✌
Great! Could you share what the solution was?
It's just an issue with the college internet; using my own mobile data solved the problem 😅
I didn't completely get point 2 of what needs to be done. Is it asking for changes in the files pointed out above, or am I wrong somewhere? And what does it mean by copying the existing implementation? It's already there, and if it is asking for making a new file, then what's the need of doing that? A little bit confused, I need some explanation please.
Could you please take a look @gkrivor?
@p-wysocki @gkrivor sorry for the disturbance, but could you please clear my doubts as soon as possible?
"This task requires alignment between OpenVINO ONNX Frontend and original framework implementations of BatchNormalization"
How do I find the original framework implementation? A little bit confused, please help.
Does it mean that until now we didn't have an implementation of BatchNormalization which aligns with the original framework implementations?
And also, "frameworks" here means PyTorch, TensorFlow, etc., doesn't it?
@p-wysocki can you please re-send the invite to that Discord server, as the old invite has expired?
Of course. It has already been corrected in the template, but existing issues still have the old link in them. The permanent link is https://discord.gg/7pVRxUwdWG
@dev-seek hi! I'll add a link to an issue with a similar discussion: https://github.com/openvinotoolkit/openvino/issues/20553
Yes, you need to touch both files. You should achieve something like this: https://github.com/openvinotoolkit/openvino/blob/master/src/frontends/onnx/frontend/src/op/softmax.cpp, which implements the differences found.
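Roughly, the shape of it is one function per opset where the behaviour changed, like softmax.cpp does with its `set_N` namespaces. A simplified sketch only, not the actual file; the namespace names and the fallthrough to the older implementation are assumptions based on how other operators in the frontend are laid out:

```cpp
// Sketch: per-opset variants of BatchNormalization, modelled on softmax.cpp.
namespace set_1 {
ov::OutputVector batch_norm(const ov::frontend::onnx::Node& node) {
    ov::OutputVector inputs{node.get_ov_inputs()};
    double epsilon{node.get_attribute_value<double>("epsilon", 1e-5)};
    // opsets 1-13: legacy attributes (consumed_inputs, is_test, spatial)
    // would be validated or ignored here
    CHECK_VALID_NODE(node, node.get_outputs_size() == 1,
                     "Training mode of BatchNormalization is not supported.");
    return {std::make_shared<v5::BatchNormInference>(inputs.at(0),  // X
                                                     inputs.at(1),  // scale
                                                     inputs.at(2),  // B
                                                     inputs.at(3),  // mean
                                                     inputs.at(4),  // var
                                                     epsilon)};
}
}  // namespace set_1

namespace set_14 {
ov::OutputVector batch_norm(const ov::frontend::onnx::Node& node) {
    // opsets 14-15: training_mode attribute added, mean/var inputs renamed to
    // input_mean/input_var; inference semantics stay the same
    std::int64_t training_mode{node.get_attribute_value<std::int64_t>("training_mode", 0)};
    CHECK_VALID_NODE(node, training_mode == 0,
                     "Training mode of BatchNormalization is not supported.");
    return set_1::batch_norm(node);
}
}  // namespace set_14
```

Each variant then gets registered for its opset range in ops_bridge.cpp, the same way the softmax variants are.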
Also, you will need to extend the tests for the exact implementations, as in this PR: https://github.com/openvinotoolkit/openvino/pull/21825
I mean you need to add a prototxt file and a corresponding unit test to verify correctness.
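For illustration, the unit test side might look roughly like this. The macro, helpers, and file name below are assumptions based on the conventions visible in the existing ONNX frontend tests and in the PR linked above, so check those for the exact form; the referenced model would be a one-node BatchNormalization graph stored as a prototxt next to the other test models, with `opset_import { version: 14 }`:

```cpp
// Sketch of a TestCase-based unit test; "batch_norm_opset14.onnx" is a
// hypothetical model built from a new prototxt file in the tests models dir.
OPENVINO_TEST(${BACKEND_NAME}, onnx_model_batch_norm_opset14) {
    const auto model = convert_model("batch_norm_opset14.onnx");

    auto test_case = ov::test::TestCase(model, s_device);
    test_case.add_input<float>({-1.f, 0.f, 1.f, 2.f, 3.f, 4.f});  // X, shape {1, 2, 1, 3}
    test_case.add_input<float>({1.f, 1.5f});                      // scale
    test_case.add_input<float>({0.f, 1.f});                       // B
    test_case.add_input<float>({0.f, 3.f});                       // input_mean
    test_case.add_input<float>({1.f, 1.5f});                      // input_var
    // y = scale * (x - mean) / sqrt(var + epsilon) + B, per channel
    test_case.add_expected_output<float>(ov::Shape{1, 2, 1, 3},
                                         {-1.f, 0.f, 1.f, -0.2247f, 1.f, 2.2247f});
    test_case.run();
}
```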
Hi @dev-seek, are you still working on that issue?
Yep, need some time.
Hello @dev-seek, are there any updates?
Hey @p-wysocki, really sorry for the delay. Please make this issue open again... I won't be getting sufficient time for this, as I am busy with something else.
.take
Thank you for looking into this issue! Please let us know if you have any questions or require any help.
Here is a list of changes across the versions:

| Feature / Version | BatchNormalization-1 | BatchNormalization-6 | BatchNormalization-7 | BatchNormalization-9 | BatchNormalization-14 | BatchNormalization-15 |
|---|---|---|---|---|---|---|
| Attributes | consumed_inputs, epsilon, is_test, momentum, spatial | epsilon, is_test, momentum, spatial | epsilon, momentum, spatial | epsilon, momentum | epsilon, momentum, training_mode | epsilon, momentum, training_mode |
| Outputs | Y, mean, var, saved_mean, saved_var (training); Y (test) | Y, mean, var, saved_mean, saved_var (training); Y (test) | Y, mean, var, saved_mean, saved_var (training); Y (test) | Y, mean, var, saved_mean, saved_var (training); Y (test) | Y, running_mean, running_var (training); Y (test) | Y, running_mean, running_var (training); Y (test) |
| Inputs | X, scale, B, mean, var | X, scale, B, mean, var | X, scale, B, mean, var | X, scale, B, mean, var | X, scale, B, input_mean, input_var | X, scale, B, input_mean, input_var |
| Type Constraints | T: tensor(float16), tensor(float), tensor(double) | T: tensor(float16), tensor(float), tensor(double) | T: tensor(float16), tensor(float), tensor(double) | T: tensor(float16), tensor(float), tensor(double) | T: tensor(float16), tensor(float), tensor(double), tensor(bfloat16); U: tensor(float16), tensor(float), tensor(double), tensor(bfloat16) | T: tensor(float16), tensor(float), tensor(double), tensor(bfloat16); T1: tensor(float16), tensor(float), tensor(double), tensor(bfloat16); T2: tensor(float16), tensor(float), tensor(double), tensor(bfloat16) |
| Changes | Attrib: consumed_inputs -> list of ints, legacy purposes | Attrib: removed consumed_inputs | Attrib: removed is_test | Attrib: removed spatial | Attrib: added training_mode; changed the name of the type constraint (not the type itself) for mean and var; changed the name of the output variables in training | Changed the name of the type constraint (not the type itself) for mean and var, and also for scale and B (bias) |
Also, I have a question: should I add logic in the code splitting training and inference, even though training is not implemented? Like:
```cpp
ov::OutputVector batch_norm(const ov::frontend::onnx::Node& node) {
    ov::OutputVector inputs{node.get_ov_inputs()};
    auto x = inputs.at(0);
    auto scale = inputs.at(1);
    auto bias = inputs.at(2);
    auto mean = inputs.at(3);
    auto var = inputs.at(4);
    double epsilon{node.get_attribute_value<double>("epsilon", 1e-5)};
    double momentum{node.get_attribute_value<double>("momentum", 0.9)};
    // ONNX defines training_mode as an int: 0 = inference, 1 = training
    std::int64_t training_mode{node.get_attribute_value<std::int64_t>("training_mode", 0)};
    if (training_mode == 1) {
        CHECK_VALID_NODE(node, node.get_outputs_size() == 1, "Training mode of BatchNormalization is not supported.");
        // return {std::make_shared<v5::BatchNormTraining>(x, scale, bias, mean, var, epsilon, momentum)}; // not implemented
    } else {
        CHECK_VALID_NODE(node, node.get_outputs_size() == 1, "Inference is set via training_mode, but more than a single output was requested.");
    }
    // fall through to inference behaviour until training is implemented
    return {std::make_shared<v5::BatchNormInference>(x, scale, bias, mean, var, epsilon)};
}
```
Or something without that much logic, like:
```cpp
ov::OutputVector batch_norm(const ov::frontend::onnx::Node& node) {
    ov::OutputVector inputs{node.get_ov_inputs()};
    auto x = inputs.at(0);
    auto scale = inputs.at(1);
    auto bias = inputs.at(2);
    auto mean = inputs.at(3);
    auto var = inputs.at(4);
    double epsilon{node.get_attribute_value<double>("epsilon", 1e-5)};
    // Attribute "spatial" is ignored, as we only support inference mode of
    // BatchNormalization
    CHECK_VALID_NODE(node, node.get_outputs_size() == 1, "Training mode of BatchNormalization is not supported.");
    return {std::make_shared<v5::BatchNormInference>(x, scale, bias, mean, var, epsilon)};
}
```
Hi @Bepitic,
I suggest extending the check in the second option: `CHECK_VALID_NODE(node, training_mode == false && node.get_outputs_size() == 1, "Training mode of BatchNormalization is not supported.");`
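Applied to the second option, that would look roughly like this (a sketch only; note the `training_mode` retrieval has to be added, since the second option does not read that attribute at all):

```cpp
ov::OutputVector batch_norm(const ov::frontend::onnx::Node& node) {
    ov::OutputVector inputs{node.get_ov_inputs()};
    auto x = inputs.at(0);
    auto scale = inputs.at(1);
    auto bias = inputs.at(2);
    auto mean = inputs.at(3);
    auto var = inputs.at(4);
    double epsilon{node.get_attribute_value<double>("epsilon", 1e-5)};
    // training_mode defaults to 0 (inference); one check rejects both the
    // training flag and any request for the extra training outputs
    std::int64_t training_mode{node.get_attribute_value<std::int64_t>("training_mode", 0)};
    CHECK_VALID_NODE(node,
                     training_mode == 0 && node.get_outputs_size() == 1,
                     "Training mode of BatchNormalization is not supported.");
    return {std::make_shared<v5::BatchNormInference>(x, scale, bias, mean, var, epsilon)};
}
```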
Context
Neural networks are graphs consisting of nodes called operators. Each operator corresponds to a mathematical function, usually described in the framework's documentation or in an AI standard such as ONNX. The OpenVINO ONNX Frontend is a component responsible for working with ONNX graphs, and it requires implementations of the different ONNX operators in order to use ONNX models. This task requires alignment between the OpenVINO ONNX Frontend and the original framework implementations of BatchNormalization for the following opsets: opset 6, opset 9, opset 14, opset 15. Necessary help will be provided by the ONNX Frontend team.
What needs to be done?
Operator details can be found in ONNX Operators. More details can be found in the ONNX Changelog: opset 6, opset 9, opset 14, opset 15.
Example Pull Requests
No response
Resources
Contact points
@gkrivor
Ticket
No response