openvinotoolkit / openvino

OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference
https://docs.openvino.ai
Apache License 2.0

[Good First Issue][TF FE]: Support complex tensors for Rsqrt operations #23238

Open rkazants opened 6 months ago

rkazants commented 6 months ago

Context

The OpenVINO component responsible for supporting TensorFlow models is called the TensorFlow Frontend (TF FE). TF FE converts a model represented in the TensorFlow opset into a model in the OpenVINO opset. Some audio models use tensors of complex type. A complex-type tensor is a tensor whose elements are of complex type, for example, a 1D tensor with three elements x = [1+2j, 2, -2j].

To support the Rsqrt operation on complex-type tensors, you need to extend the corresponding loader for Rsqrt.

What needs to be done?

The existing loader for Rsqrt needs to be extended to propagate ComplexTypeMark from input to output and to represent the output complex-type tensor as a floating-point tensor with an auxiliary dimension that concatenates the real and imaginary parts of the complex tensor. To validate the extension, the corresponding layer test needs to be updated with complex tensor cases.
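
For reference, the elementwise computation behind a complex Rsqrt follows from standard complex arithmetic (background math, not an OpenVINO API). For $z = a + ib$, the principal square root is

$$\sqrt{z} = \sqrt{\frac{|z| + a}{2}} + i\,\operatorname{sign}(b)\sqrt{\frac{|z| - a}{2}}, \qquad |z| = \sqrt{a^2 + b^2},$$

with $\operatorname{sign}(0)$ taken as $+1$ for the principal branch, and the reciprocal then follows from

$$\operatorname{rsqrt}(z) = \frac{1}{\sqrt{z}} = \frac{\overline{\sqrt{z}}}{\left|\sqrt{z}\right|^{2}}.$$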

Here is an example of how to extend the Reshape loader to support complex-type tensors:

OutputVector translate_reshape_op(const NodeContext& node) {
    default_op_checks(node, 2, {"Reshape"}, true);
    auto tensor = node.get_input(0);
    auto complex_type_mark = as_type_ptr<ComplexTypeMark>(tensor.get_node_shared_ptr());
    auto shape = node.get_input(1);
    if (complex_type_mark) {
        element::Type complex_part_type = complex_type_mark->get_complex_part_type();
        tensor = complex_type_mark->input_value(0);

        OutputVector concat_inputs;
        concat_inputs.push_back(shape);
        concat_inputs.push_back(make_shared<v0::Constant>(shape.get_element_type(), Shape{1}, 2));

        auto concat = make_shared<v0::Concat>(concat_inputs, 0);
        auto reshape = make_shared<v1::Reshape>(tensor, concat, false);
        set_node_name(node.get_name(), reshape);
        auto complex_reshape = make_shared<ComplexTypeMark>(reshape, complex_part_type);
        return {complex_reshape->output(0)};
    }

    auto reshape = make_shared<v1::Reshape>(tensor, shape, false);
    set_node_name(node.get_name(), reshape);
    return {reshape};
}

Since OpenVINO does not have native support for complex tensors, we handle the complex type in intermediate layers by representing it as a floating-point tensor with an additional (specially created) dimension that stores the real and imaginary parts of the original complex tensor. Slicing by the last dimension then gives either the real or the imaginary part: x[..., 0] is the real part and x[..., 1] is the imaginary part.
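
As a small illustration of this representation, here is a minimal standalone sketch using std::complex (plain C++, no OpenVINO), packing the example tensor from the Context section:

#include <complex>
#include <iostream>
#include <vector>

int main() {
    // x = [1+2j, 2, -2j], a complex tensor of shape [3]
    std::vector<std::complex<float>> x = {{1, 2}, {2, 0}, {0, -2}};

    // simulated floating-point tensor of shape [3, 2]:
    // the last dimension stores (real, imaginary) pairs
    std::vector<float> sim;
    for (const auto& v : x) {
        sim.push_back(v.real());  // x[..., 0] - real part
        sim.push_back(v.imag());  // x[..., 1] - imaginary part
    }

    for (std::size_t i = 0; i < x.size(); ++i)
        std::cout << "[" << sim[2 * i] << ", " << sim[2 * i + 1] << "]\n";
    // prints: [1, 2]  [2, 0]  [0, -2]
}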

In the first step, we pass the true flag to default_op_checks to indicate that the loader for the Reshape operation now handles complex tensors:

default_op_checks(node, 2, {"Reshape"}, true);

Second, we check whether a ComplexTypeMark exists on the anticipated input; this mark indicates that the input tensor is of complex type:

auto complex_type_mark = as_type_ptr<ComplexTypeMark>(tensor.get_node_shared_ptr());

Third, we retrieve the floating-point tensor (with the additional dimension storing the real and imaginary parts) that simulates the complex tensor:

tensor = complex_type_mark->input_value(0);

After that, we implement the Reshape conversion for this particular case. Since the floating-point tensor simulating the complex tensor has an additional dimension of size 2, we update the input target shape by appending the value 2 and perform the reshape on the simulating tensor:
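
OutputVector concat_inputs;
concat_inputs.push_back(shape);
concat_inputs.push_back(make_shared<v0::Constant>(shape.get_element_type(), Shape{1}, 2));

auto concat = make_shared<v0::Concat>(concat_inputs, 0);
auto reshape = make_shared<v1::Reshape>(tensor, concat, false);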

Finally, since Reshape should produce a complex tensor on its output, we insert a new ComplexTypeMark into the output:
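
auto complex_reshape = make_shared<ComplexTypeMark>(reshape, complex_part_type);
return {complex_reshape->output(0)};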

To validate the support of complex tensors for Reshape, a new layer test, TestComplexReshape, was added.

Example of how to run the layer test:

export TEST_DEVICE=CPU
cd openvino/tests/layer_tests/tensorflow_tests
pytest test_tf_Reshape.py

Example Pull Requests

Resources

Contact points

Ticket

No response

dyogaharshitha commented 6 months ago

.take

github-actions[bot] commented 6 months ago

Thank you for looking into this issue! Please let us know if you have any questions or require any help.

hub-bla commented 2 weeks ago

.take

github-actions[bot] commented 2 weeks ago

Thank you for looking into this issue! Please let us know if you have any questions or require any help.

hub-bla commented 2 weeks ago

Hi @rkazants, I'm trying to implement this support based on a closed PR. After some modifications it almost works, but I encountered two problems.

Here is the implementation so far:

if (complex_type_mark) {
    element::Type complex_part_type = complex_type_mark->get_complex_part_type();
    input = complex_type_mark->input_value(0);
    // input is the complex tensor representation in the form [N1, N2, ..., Nk, 2],
    // where slice [N1, N2, ..., Nk, 0] contains the real part and
    // slice [N1, N2, ..., Nk, 1] contains the imaginary part of the complex tensor

    auto gather_index_real = make_shared<v0::Constant>(element::i64, Shape{}, 0);
    auto gather_index_imag = make_shared<v0::Constant>(element::i64, Shape{}, 1);
    auto minus_one = make_shared<v0::Constant>(element::i32, Shape{1}, -1);

    // complex number: z = a + ib; real_part = a; imag_part = b
    auto real_part = make_shared<v8::Gather>(input, gather_index_real, minus_one);
    auto imag_part = make_shared<v8::Gather>(input, gather_index_imag, minus_one);

    auto const_half = create_same_type_const_scalar<float>(real_part, 0.5f);
    auto const_two = create_same_type_const_scalar<float>(real_part, 2.0f);
    auto const_zero = create_same_type_const_scalar<float>(real_part, 0.0f);
    auto const_one = create_same_type_const_scalar<float>(real_part, 1.0f);
    auto const_minus_one = create_same_type_const_scalar<float>(real_part, -1.0f);

    // a^2 + b^2
    auto sum_sq = make_shared<v1::Add>(make_shared<v1::Power>(real_part, const_two),
                                       make_shared<v1::Power>(imag_part, const_two));
    // |z| = sqrt(a^2 + b^2)
    auto norm = make_shared<v1::Power>(sum_sq, const_half);

    // principal square root, real part: new_real = sqrt((a + |z|) / 2)
    auto new_real = make_shared<v1::Power>(
        make_shared<v1::Divide>(make_shared<v1::Add>(real_part, norm), const_two),
        const_half);

    // imaginary part: new_imag = sign(b) * sqrt((|z| - a) / 2), with sign(0) taken as +1
    auto is_imag_neg = make_shared<v1::Less>(imag_part, const_zero);
    auto sign = make_shared<v1::Select>(is_imag_neg, const_minus_one, const_one);
    auto new_imag = make_shared<v1::Multiply>(
        sign,
        make_shared<v1::Power>(
            make_shared<v1::Divide>(make_shared<v1::Add>(make_shared<v0::Negative>(real_part), norm), const_two),
            const_half));

    // reciprocal: 1/sqrt(z) = conj(sqrt(z)) / |sqrt(z)|^2, so
    // rsqrt_real = new_real / (new_real^2 + new_imag^2)
    // rsqrt_imag = -new_imag / (new_real^2 + new_imag^2)
    auto new_sum_sq = make_shared<v1::Add>(make_shared<v1::Power>(new_real, const_two),
                                           make_shared<v1::Power>(new_imag, const_two));
    auto rsqrt_real = make_shared<v1::Divide>(new_real, new_sum_sq);
    auto rsqrt_imag = make_shared<v0::Negative>(make_shared<v1::Divide>(new_imag, new_sum_sq));

    // pack real and imaginary parts back into the [..., 2] representation
    auto real_unsqueeze = make_shared<v0::Unsqueeze>(rsqrt_real, minus_one);
    auto imag_unsqueeze = make_shared<v0::Unsqueeze>(rsqrt_imag, minus_one);
    auto concat_result = make_shared<v0::Concat>(OutputVector{real_unsqueeze, imag_unsqueeze}, -1);
    set_node_name(node.get_name(), concat_result);

    auto complex_result = make_shared<ComplexTypeMark>(concat_result, complex_part_type);
    return {complex_result};
}
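
For reference, the elementwise formulas above can be sanity-checked in isolation against std::complex; a minimal standalone sketch (plain C++, independent of OpenVINO):

#include <complex>
#include <cmath>
#include <iostream>

int main() {
    std::complex<double> z(1.0, 2.0);  // sample input z = 1 + 2i
    double a = z.real(), b = z.imag();

    // principal square root via the formulas used in the loader
    double norm = std::sqrt(a * a + b * b);  // |z|
    double new_real = std::sqrt((a + norm) / 2.0);
    double new_imag = (b < 0 ? -1.0 : 1.0) * std::sqrt((norm - a) / 2.0);

    // reciprocal: conj(sqrt(z)) / |sqrt(z)|^2
    double denom = new_real * new_real + new_imag * new_imag;
    std::complex<double> manual(new_real / denom, -new_imag / denom);

    std::complex<double> reference = 1.0 / std::sqrt(z);  // library result
    std::cout << "manual:    " << manual << "\n"
              << "reference: " << reference << "\n";  // the two should match
}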

Could you provide some hint?

Thanks