intel / inference-engine-node

Bringing hardware-accelerated deep learning inference to Node.js and Electron.js apps.

[Example] use preprocess to optimize hello_classification_node example #24

Closed: huningxin closed this issue 3 years ago

huningxin commented 4 years ago

I'll try to use the PreProcess interface to optimize the image resizing and color conversion in the Node.js image classification example.

huningxin commented 4 years ago

The current PreProcessInfo interface supports image resizing and color conversion. However, to run MobileNets, it also needs PreProcessChannel support, which is missing. I filed issue #26 about that.
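
For context, a minimal sketch of what this could look like with the binding is below; the method and enum names (getPreProcess, setResizeAlgorithm, setColorFormat and the resizeAlgorithm/colorFormat enums) are assumptions modeled on the C++ InferenceEngine::PreProcessInfo API, not confirmed binding APIs.

    // Sketch only: names modeled on the C++ InferenceEngine::PreProcessInfo API,
    // not verified against inference-engine-node.
    const ie = require('inference-engine-node');

    async function configureInputPreProcess(modelXml, modelBin) {
      const core = ie.createCore();
      const net = await core.readNetwork(modelXml, modelBin);
      const inputInfo = net.getInputsInfo()[0];

      // Let the engine resize the user image and do the color conversion,
      // instead of doing both in JavaScript before filling the input blob.
      const preProcess = inputInfo.getPreProcess();
      preProcess.setResizeAlgorithm(ie.resizeAlgorithm.RESIZE_BILINEAR);
      preProcess.setColorFormat(ie.colorFormat.BGR); // color format of the user-supplied data
      return net;
    }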

artyomtugaryov commented 3 years ago

@lionkunonly @huningxin do you have evidence that PreProcessChannel works? I created PR #43 using this API, but the results with and without mean and std are the same. When I run the sample from master with the parameters --mean [10,10,10] --std [10,20,40], I get this output:

    5         0.188252       electric ray, crampfish, numbfish, torpedo
    850       0.075777       teddy, teddy bear
    904       0.033831       window screen
    6         0.030697       stingray
    552       0.022188       feather boa, boa

Without these parameters, the output is:

    387       0.999207       lesser panda, red panda, panda, bear cat, cat bear, Ailurus fulgens
    277       0.000159       red fox, Vulpes vulpes
    294       0.000155       brown bear, bruin, Ursus arctos
    278       0.000130       kit fox, Vulpes macrotis
    298       0.000057       mongoose

huningxin commented 3 years ago

@lionkunonly, could you please help look into this issue?

lionkunonly commented 3 years ago

It is verified by the Mocha unit tests, but I have not verified it in an example. I will check it soon.

lionkunonly commented 3 years ago

@artyomtugaryov Hi, I have rerun the unit tests for PreProcessChannel, and I think it is fine. I also tried to use it in the classification sample the same way you did. The result of my experiment shows that PreProcessChannel works.

Here is the code: #44

I added PreProcessChannel API usage to hello_classification_node in this PR. Please check the file example/hello_classification_node/main.js. The OpenVINO version is 2021.1.
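
For illustration, a per-channel mean/std setup could look roughly like the sketch below. This is not the exact code from the PR; init, getPreProcessChannel, meanValue and stdScale are assumed names modeled on the C++ PreProcessInfo/PreProcessChannel API.

    // Sketch only: init(), getPreProcessChannel(), meanValue and stdScale are
    // assumed names modeled on InferenceEngine::PreProcessInfo / PreProcessChannel.
    function applyMeanStd(inputInfo, mean /* e.g. [10, 10, 10] */, std /* e.g. [10, 20, 40] */) {
      const preProcess = inputInfo.getPreProcess();
      preProcess.init(mean.length); // one PreProcessChannel per input channel
      for (let c = 0; c < mean.length; ++c) {
        const channel = preProcess.getPreProcessChannel(c);
        channel.meanValue = mean[c];
        channel.stdScale = std[c];
      }
    }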

With the command: node main.js -m ../../models/squeezenet1.1/FP16/squeezenet1.1.xml -i test.png -d CPU -n 10, the result is:

    id of class   probability    label
    -----------   -----------    -----
    387           0.998859       lesser panda, red panda, panda, bear cat, cat bear, Ailurus fulgens
    294           0.000253       brown bear, bruin, Ursus arctos
    277           0.000243       red fox, Vulpes vulpes
    278           0.000180       kit fox, Vulpes macrotis
    298           0.000084       mongoose

With the command: node main.js -m ../../models/squeezenet1.1/FP16/squeezenet1.1.xml -i test.png --mean [10,10,10] --std [10,20,40] -d CPU -n 10, the result is:

    id of class   probability    label
    -----------   -----------    -----
    5             0.230962       electric ray, crampfish, numbfish, torpedo
    850           0.061423       teddy, teddy bear
    904           0.035352       window screen
    6             0.030383       stingray
    78            0.026189       tick

artyomtugaryov commented 3 years ago

@lionkunonly You didn't remove the use of mean and std in the image processing in the sample. See the comment.

lionkunonly commented 3 years ago

@artyomtugaryov Sorry for the mistake. I will recheck it.

lionkunonly commented 3 years ago

@artyomtugaryov I have a question about the usage of PreProcessChannel. In the classification sample, we get the input Blob with infer_req.getBlob(input_info.name()); and fill its memory. I do not know at what point the PreProcessChannel settings can take effect. In the current implementation, the PreProcessChannel class lets users modify the private variable _preProcessInfo in the InputInfo class, but automatically applying that _preProcessInfo to preprocess the input Blob data is not implemented.

From my investigation, it seems that we need to use SetBlob() in the InferRequest class to apply the info from PreProcessChannel, but this API is not implemented yet.
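
To make the intended flow concrete, here is a hypothetical sketch of how the sample might pass a user blob once a setBlob() method exists in the binding; setBlob and makeImageBlob are illustrative names only, and whether the engine then applies the configured preprocessing is exactly the open question here.

    // Hypothetical sketch: setBlob() and makeImageBlob() do not exist in the
    // binding yet. The idea is that the engine would apply the configured
    // PreProcessInfo (resize, color conversion, per-channel mean/std) to the
    // user blob at inference time, instead of the app preprocessing the image
    // itself and writing the result into getBlob()'s memory.
    async function inferWithUserBlob(inferRequest, inputInfo, imageData) {
      const userBlob = makeImageBlob(imageData, { layout: 'NHWC', precision: 'U8' });
      inferRequest.setBlob(inputInfo.name(), userBlob);
      await inferRequest.startAsync();
    }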

artyomtugaryov commented 3 years ago

I will create a simple network that has no effect on the input data and test mean and std with this network.

lionkunonly commented 3 years ago

@artyomtugaryov Have you found anything new? Do you think PreProcessChannel works? If you do not think it works, please tell me your requirements and I will try to implement them.

artyomtugaryov commented 3 years ago

I didn't find anything new, but I suggest implementing some tests for PreProcessChannel that cover mean and std. You can find tests like this at the following link: https://github.com/openvinotoolkit/openvino/blob/2495eaf56fa0a5f431e5845a36ed6c419de37bd2/inference-engine/tests/functional/plugin/shared/include/behavior/set_preprocess.hpp#L132

I prepared a simple network: input -> ReLU -> output. This network does not change non-negative inputs, so we can set PreProcessChannel and check whether it works or not. The network is XML only (it needs no weights file):

<?xml version="1.0"?>
<net name="mynet" version="10">
    <layers>
        <layer id="0" name="Parameter_0" type="Parameter" version="opset1">
            <data shape="1,3,3,3" element_type="f32" />
            <output>
                <port id="0" precision="FP32">
                    <dim>1</dim>
                    <dim>3</dim>
                    <dim>3</dim>
                    <dim>3</dim>
                </port>
            </output>
        </layer>
        <layer id="1" name="Relu_1" type="ReLU" version="opset1">
            <input>
                <port id="0">
                    <dim>1</dim>
                    <dim>3</dim>
                    <dim>3</dim>
                    <dim>3</dim>
                </port>
            </input>
            <output>
                <port id="1" precision="FP32">
                    <dim>1</dim>
                    <dim>3</dim>
                    <dim>3</dim>
                    <dim>3</dim>
                </port>
            </output>
        </layer>
        <layer id="2" name="Result_2" type="Result" version="opset1">
            <input>
                <port id="0">
                    <dim>1</dim>
                    <dim>3</dim>
                    <dim>3</dim>
                    <dim>3</dim>
                </port>
            </input>
        </layer>
    </layers>
    <edges>
        <edge from-layer="0" from-port="0" to-layer="1" to-port="0" />
        <edge from-layer="1" from-port="1" to-layer="2" to-port="0" />
    </edges>
</net>
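
A rough Mocha-style sketch of such a mean/std test against this network could look like the code below. The core/blob method names and the assumption that preprocessing computes (x - mean[c]) / std[c] per channel are unverified, and, per the discussion above, a setBlob-style path may be needed instead of writing into the default input blob for the preprocessing to take effect.

    // Sketch only: API names (createCore, readNetwork, loadNetwork,
    // createInferRequest, getBlob, wmap/rmap/unmap, startAsync) and the
    // (x - mean) / std semantics are assumptions, not confirmed binding APIs.
    const { expect } = require('chai');
    const ie = require('inference-engine-node');

    describe('PreProcessChannel mean/std', () => {
      it('scales a constant input through the ReLU identity network', async () => {
        const core = ie.createCore();
        const net = await core.readNetwork('relu_identity.xml'); // the XML above, no weights
        const inputInfo = net.getInputsInfo()[0];
        // applyMeanStd is the per-channel helper sketched in an earlier comment.
        applyMeanStd(inputInfo, [10, 10, 10], [10, 20, 40]);

        const execNet = await core.loadNetwork(net, 'CPU');
        const req = execNet.createInferRequest();

        const input = req.getBlob(inputInfo.name());
        new Float32Array(input.wmap()).fill(50); // constant input, stays positive after ReLU
        input.unmap();

        await req.startAsync();

        const out = new Float32Array(req.getBlob(net.getOutputsInfo()[0].name()).rmap());
        const perChannel = 3 * 3; // 3x3 spatial elements per channel (NCHW layout)
        [10, 20, 40].forEach((stdScale, c) => {
          // Every element of channel c should come out as (50 - 10) / stdScale.
          expect(out[c * perChannel]).to.be.closeTo((50 - 10) / stdScale, 1e-4);
        });
      });
    });
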
artyomtugaryov commented 3 years ago

Inference Engine has a bug for stdScale on GPU and CPU devices, but meanValue works fine. I created a PR with a new API, please check #47.

huningxin commented 3 years ago

With #47 merged, I think we can close this issue. @artyomtugaryov?

artyomtugaryov commented 3 years ago

@huningxin sure