openvinotoolkit / openvino

OpenVINO™ is an open-source toolkit for optimizing and deploying AI inference
https://docs.openvino.ai
Apache License 2.0

why is the AVAILABLE_DEVICES result empty when I run hello_query_device.exe on my laptop? #9535

Closed: SummerZ2020 closed this issue 2 years ago

SummerZ2020 commented 2 years ago
System information (version)

xxxxx\Intel\OpenVINO\inference_engine_cpp_samples_build\intel64\Release>hello_query_device.exe
Loading Inference Engine
[E:] [BSL] found 0 ioexpander device
Available devices:
CPU
        SUPPORTED_METRICS:
                AVAILABLE_DEVICES : [ ]
                FULL_DEVICE_NAME : Intel(R) Core(TM) i7-6820HQ CPU @ 2.70GHz
                OPTIMIZATION_CAPABILITIES : [ FP32 FP16 INT8 BIN ]
                RANGE_FOR_ASYNC_INFER_REQUESTS : { 1, 1, 1 }
                RANGE_FOR_STREAMS : { 1, 8 }
        SUPPORTED_CONFIG_KEYS (default values):
                CPU_BIND_THREAD : NUMA
                CPU_THREADS_NUM : 0
                CPU_THROUGHPUT_STREAMS : 1
                DUMP_EXEC_GRAPH_AS_DOT : ""
                DYN_BATCH_ENABLED : NO
                DYN_BATCH_LIMIT : 0
                ENFORCE_BF16 : NO
                EXCLUSIVE_ASYNC_REQUESTS : NO
                PERF_COUNT : NO

GNA
        SUPPORTED_METRICS:
                AVAILABLE_DEVICES : [ GNA_SW ]
                OPTIMAL_NUMBER_OF_INFER_REQU

So, how can I get a correct result from the GetMetric function with the AVAILABLE_DEVICES key? Any help would be appreciated.
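
For reference, this is roughly the call in question, shown as a minimal standalone sketch (assuming the OpenVINO 2021.x Inference Engine C++ API; error handling omitted):

#include <inference_engine.hpp>
#include <iostream>
#include <string>
#include <vector>

int main() {
    InferenceEngine::Core ie;
    // Query the CPU plugin for the AVAILABLE_DEVICES metric. As shown in the
    // console output above, on this machine the metric comes back empty.
    auto devices = ie.GetMetric("CPU", METRIC_KEY(AVAILABLE_DEVICES))
                       .as<std::vector<std::string>>();
    std::cout << "AVAILABLE_DEVICES entries: " << devices.size() << std::endl;
    for (const auto& d : devices)
        std::cout << "  CPU." << d << std::endl;
    return 0;
}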

SummerZ2020 commented 2 years ago

@Munesh-Intel could you help me get the correct result for the AVAILABLE_DEVICES key from the GetMetric function? Thanks in advance.

SummerZ2020 commented 2 years ago

@Iffa-Meah could you help me get the correct result for the AVAILABLE_DEVICES key from the GetMetric function? Thanks in advance.

brmarkus commented 2 years ago

With a quick test on MS-Win10, using OpenVINO 2021.4.752 on my laptop with the Python version of "hello_query_device.py", I get the following console output (excerpt, showing the CPU only):

C:\Program Files (x86)\IntelSWTools\openvino_2021.4.752\deployment_tools\inference_engine\samples\python\hello_query_device>python hello_query_device.py
[ INFO ] Creating Inference Engine
[ INFO ] Available devices:
[E:] [BSL] found 0 ioexpander device
[ INFO ] CPU :
[ INFO ]        SUPPORTED_METRICS:
[ INFO ]                AVAILABLE_DEVICES:
[ INFO ]                FULL_DEVICE_NAME: Intel(R) Core(TM) i7-8665U CPU @ 1.90GHz
[ INFO ]                OPTIMIZATION_CAPABILITIES: FP32, FP16, INT8, BIN
[ INFO ]                RANGE_FOR_ASYNC_INFER_REQUESTS: 1, 1, 1
[ INFO ]                RANGE_FOR_STREAMS: 1, 8
[ INFO ]
[ INFO ]        SUPPORTED_CONFIG_KEYS (default values):
[ INFO ]                CPU_BIND_THREAD: NUMA
[ INFO ]                CPU_THREADS_NUM: 0
[ INFO ]                CPU_THROUGHPUT_STREAMS: 1
[ INFO ]                DUMP_EXEC_GRAPH_AS_DOT:
[ INFO ]                DYN_BATCH_ENABLED: NO
[ INFO ]                DYN_BATCH_LIMIT: 0
[ INFO ]                ENFORCE_BF16: NO
[ INFO ]                EXCLUSIVE_ASYNC_REQUESTS: NO
[ INFO ]                PERF_COUNT: NO
[...]

The device name to use in this case is just "CPU" (as in -d CPU, or -d MULTI:CPU,GPU); only one CPU is detected because my machine is not a multi-CPU, multi-socket system.
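
As an illustration of how such a device string would be used (a sketch only, assuming the 2021.x C++ API; "sample.xml" is a placeholder IR file name):

#include <inference_engine.hpp>

int main() {
    InferenceEngine::Core ie;
    // The plain device name "CPU" targets the whole CPU plugin; a combined
    // target such as "MULTI:CPU,GPU" only makes sense when a GPU device and
    // its plugin are also available.
    auto network = ie.ReadNetwork("sample.xml");
    auto execOnCpu = ie.LoadNetwork(network, "CPU");
    // auto execMulti = ie.LoadNetwork(network, "MULTI:CPU,GPU");
    return 0;
}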

SummerZ2020 commented 2 years ago

AVAILABLE_DEVICES is empty in your output too. Thanks for your test.

SummerZ2020 commented 2 years ago

Here is my test on a computer with 2 CPUs. I can get only one CPU's info from the GetMetric API, and still no result in AVAILABLE_DEVICES. How can I get the detailed info of the two CPUs, so I can pass them to the LoadNetwork function? Any hint would be appreciated.

Loading Inference Engine
Available devices:
CPU
        SUPPORTED_METRICS:
                AVAILABLE_DEVICES : [  ]
                FULL_DEVICE_NAME : Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz
                OPTIMIZATION_CAPABILITIES : [ FP32 FP16 INT8 BIN ]
                RANGE_FOR_ASYNC_INFER_REQUESTS : { 1, 1, 1 }
                RANGE_FOR_STREAMS : { 1, 32 }
        SUPPORTED_CONFIG_KEYS (default values):
                CPU_BIND_THREAD : NUMA
                CPU_THREADS_NUM : 0
                CPU_THROUGHPUT_STREAMS : 1
                DUMP_EXEC_GRAPH_AS_DOT : ""
                DYN_BATCH_ENABLED : NO
                DYN_BATCH_LIMIT : 0
                ENFORCE_BF16 : NO
                EXCLUSIVE_ASYNC_REQUESTS : NO
                PERF_COUNT : NO

Result of wmic:root\cli>cpu list full:

AddressWidth=64
Architecture=9
Availability=3
Caption=Intel64 Family 6 Model 79 Stepping 1
ConfigManagerErrorCode=
ConfigManagerUserConfig=
CpuStatus=1
CreationClassName=Win32_Processor
CurrentClockSpeed=2101
CurrentVoltage=7
DataWidth=64
Description=Intel64 Family 6 Model 79 Stepping 1
DeviceID=CPU0
ErrorCleared=
ErrorDescription=
ExtClock=100
Family=179
InstallDate=
L2CacheSize=2048
L2CacheSpeed=
LastErrorCode=
Level=6
LoadPercentage=20
Manufacturer=GenuineIntel
MaxClockSpeed=2101
Name=Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz
OtherFamilyDescription=
PNPDeviceID=
PowerManagementCapabilities=
PowerManagementSupported=FALSE
ProcessorId=BFEBFBFF000406F1
ProcessorType=3
Revision=20225
Role=CPU
SocketDesignation=CPU0
Status=OK
StatusInfo=3
Stepping=
SystemCreationClassName=Win32_ComputerSystem
SystemName=xxx
UniqueId=
UpgradeMethod=43
Version=
VoltageCaps=

AddressWidth=64
Architecture=9
Availability=3
Caption=Intel64 Family 6 Model 79 Stepping 1
ConfigManagerErrorCode=
ConfigManagerUserConfig=
CpuStatus=1
CreationClassName=Win32_Processor
CurrentClockSpeed=2101
CurrentVoltage=7
DataWidth=64
Description=Intel64 Family 6 Model 79 Stepping 1
DeviceID=CPU1
ErrorCleared=
ErrorDescription=
ExtClock=100
Family=179
InstallDate=
L2CacheSize=2048
L2CacheSpeed=
LastErrorCode=
Level=6
LoadPercentage=4
Manufacturer=GenuineIntel
MaxClockSpeed=2101
Name=Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz
OtherFamilyDescription=
PNPDeviceID=
PowerManagementCapabilities=
PowerManagementSupported=FALSE
ProcessorId=BFEBFBFF000406F1
ProcessorType=3
Revision=20225
Role=CPU
SocketDesignation=CPU1
Status=OK
StatusInfo=3
Stepping=
SystemCreationClassName=Win32_ComputerSystem
SystemName=xxx
UniqueId=
UpgradeMethod=43
Version=
VoltageCaps=
Iffa-Intel commented 2 years ago

@SummerZ2020 if your system has 2 working CPUs (2 physical CPU packages; perhaps you are using some external CPU?), then hello_query_device.py should return both CPU names.

For example, when 2 working NCS2 devices are used simultaneously, the Python script returns 2 entries:

[ INFO ] MYRIAD :
[ INFO ]        SUPPORTED_METRICS:
[ INFO ]                AVAILABLE_DEVICES: 1.2-ma2480
[ INFO ]                FULL_DEVICE_NAME: Intel Movidius Myriad X VPU
[ INFO ]                DEVICE_THERMAL: UNSUPPORTED TYPE
[ INFO ]                OPTIMIZATION_CAPABILITIES: FP16
[ INFO ]                DEVICE_ARCHITECTURE: MYRIAD
[ INFO ]                IMPORT_EXPORT_SUPPORT: True

[ INFO ] MYRIAD :
[ INFO ]        SUPPORTED_METRICS:
[ INFO ]                AVAILABLE_DEVICES: 2.2-ma2480
[ INFO ]                FULL_DEVICE_NAME: Intel Movidius Myriad X VPU
[ INFO ]                DEVICE_THERMAL: UNSUPPORTED TYPE
[ INFO ]                OPTIMIZATION_CAPABILITIES: FP16
[ INFO ]                DEVICE_ARCHITECTURE: MYRIAD
[ INFO ]                IMPORT_EXPORT_SUPPORT: True

Another thing to consider is the device plugin. However, since we can see your CPU (Name=Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz), the plugin should be fine.

If you are doing some programming through a serial communication channel (USB), which probably means you are connecting an MCU to your desktop/laptop, then you are actually accessing the CPU of the MCU instead of both of the host's CPUs. I hope this is clear.

SummerZ2020 commented 2 years ago

@Iffa-Meah Thanks for your reply. Actually, there is no MCU on my machine; my program works on the CPU directly. I have three questions now.

  1. Why is the AVAILABLE_DEVICES result for CPU empty, both on the laptop with one CPU and on the machine with two?
  2. Why is there only one CPU entry in the hello_query_device output?
  3. Is there any other way to make OpenVINO work on two CPUs? Any help would be appreciated; the earlier the better.
Iffa-Intel commented 2 years ago

@SummerZ2020

  1. AVAILABLE_DEVICES should come with the device name. This means the plugin is properly installed and the device works. For example, I have the CPU plugin installed and working CPU hardware:
    [ INFO ] CPU :
    [ INFO ]        SUPPORTED_METRICS:
    [ INFO ]                AVAILABLE_DEVICES:
    [ INFO ]                FULL_DEVICE_NAME: Intel(R) Core(TM) i7-8650U CPU @ 1.90GHz
    [ INFO ]                OPTIMIZATION_CAPABILITIES: FP32, FP16, INT8, BIN
    [ INFO ]                RANGE_FOR_ASYNC_INFER_REQUESTS: 1, 1, 1
    [ INFO ]                RANGE_FOR_STREAMS: 1, 8

If no proper plugin is installed and the hardware itself is not working, then none would appear.

  2. May I know how your system comes to have two physical CPUs? Laptops/desktops usually have only one CPU.

  3. To answer this, I need to know the answer to question 2 above.

alalek commented 2 years ago

AFAIK, there is no "devices" subdivision support in the CPU plugin. The CPU plugin utilizes all available CPU resources. The number of cores/threads used can be configured through the CPU_THREADS_NUM parameter.

On Linux, the taskset utility may help configure the CPU utilization of an application. Windows may have a similar external tool.

/cc @dmitry-gorokhov
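
A minimal sketch of adjusting the CPU_THREADS_NUM parameter through the API (assuming the OpenVINO 2021.x C++ API; the key string matches the SUPPORTED_CONFIG_KEYS output above, and 40 is just an example value):

#include <inference_engine.hpp>
#include <iostream>
#include <string>

int main() {
    InferenceEngine::Core ie;
    // 0 (the default) lets the CPU plugin choose the thread count itself;
    // an explicit value caps the number of inference threads.
    ie.SetConfig({{ "CPU_THREADS_NUM", "40" }}, "CPU");
    // Read the value back to confirm it was applied.
    std::cout << "CPU_THREADS_NUM = "
              << ie.GetConfig("CPU", "CPU_THREADS_NUM").as<std::string>()
              << std::endl;
    return 0;
}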

SummerZ2020 commented 2 years ago

@alalek Thanks for your reply. Let me describe my inference setup in more detail. At first, I implemented the inference with OpenVINO on my laptop's single CPU as usual; all cores were utilized while the inference was running. Then I tried to run inference on the computer with two CPUs. Only half of the cores, 20 out of the 40 across the 2 CPUs, were utilized when the inference ran with OpenVINO, whereas all 40 cores were utilized when the same inference ran with TensorFlow. To get higher CPU utilization, I searched for a way to run OpenVINO inference on 2 CPUs, and I found here that multiple devices or multiple instances are supported. I then tried to follow the Myriad example below (copied from here):

Beyond the trivial “CPU”, “GPU”, “HDDL” and so on, when multiple instances of a device are available the names are more qualified. For example, this is how two Intel® Movidius™ Myriad™ X sticks are listed with the hello_query_sample:

...
    Device: MYRIAD.1.2-ma2480
...
    Device: MYRIAD.1.4-ma2480

So the explicit configuration to use both would be “MULTI:MYRIAD.1.2-ma2480,MYRIAD.1.4-ma2480”. Accordingly, the code that loops over all available devices of “MYRIAD” type only is below:

InferenceEngine::Core ie;
auto cnnNetwork = ie.ReadNetwork("sample.xml");
std::string allDevices = "MULTI:";
std::vector<std::string> myriadDevices = ie.GetMetric("MYRIAD", METRIC_KEY(AVAILABLE_DEVICES));
for (size_t i = 0; i < myriadDevices.size(); ++i) {
    allDevices += std::string("MYRIAD.")
                            + myriadDevices[i]
                            + std::string(i < (myriadDevices.size() -1) ? "," : "");
}
InferenceEngine::ExecutableNetwork exeNetwork = ie.LoadNetwork(cnnNetwork, allDevices, {});

Unfortunately, the second CPU cannot be found with GetMetric, nor with the GetAvailableDevices API.
So, can OpenVINO work with 2 CPUs, and if so, how do I implement it? Thanks a lot, any help would be appreciated. @dmitry-gorokhov, could you provide some hints? Thanks in advance.
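
For context, a sketch of the enumeration attempt described above (assuming the 2021.x C++ API); GetAvailableDevices() lists logical device names such as "CPU", "GPU" or "MYRIAD.<id>":

#include <inference_engine.hpp>
#include <iostream>
#include <string>

int main() {
    InferenceEngine::Core ie;
    // On a dual-socket Xeon this still prints a single "CPU" entry, since the
    // CPU plugin exposes all sockets as one device (as noted above).
    for (const auto& name : ie.GetAvailableDevices())
        std::cout << name << std::endl;
    return 0;
}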

SummerZ2020 commented 2 years ago

@SummerZ2020

  1. AVAILABLE_DEVICES should come with the device name. This means the plugin is properly installed and the device works. For example, I have the CPU plugin installed and working CPU hardware:
[ INFO ] CPU :
[ INFO ]        SUPPORTED_METRICS:
[ INFO ]                AVAILABLE_DEVICES:
[ INFO ]                FULL_DEVICE_NAME: Intel(R) Core(TM) i7-8650U CPU @ 1.90GHz
[ INFO ]                OPTIMIZATION_CAPABILITIES: FP32, FP16, INT8, BIN
[ INFO ]                RANGE_FOR_ASYNC_INFER_REQUESTS: 1, 1, 1
[ INFO ]                RANGE_FOR_STREAMS: 1, 8

Meanwhile, this is my GPU. The hardware works but I didn't install the proper plugin for it:

[ INFO ] GPU :
[ INFO ]        SUPPORTED_METRICS:
[ INFO ]                AVAILABLE_DEVICES: 0
[ INFO ]                FULL_DEVICE_NAME: Intel(R) UHD Graphics 620 (iGPU)
[ INFO ]                OPTIMIZATION_CAPABILITIES: FP32, BIN, BATCHED_BLOB, FP16
[ INFO ]                RANGE_FOR_ASYNC_INFER_REQUESTS: 1, 2, 1
[ INFO ]                RANGE_FOR_STREAMS: 1, 2
[ INFO ]                DEVICE_TYPE: integrated
[ INFO ]                DEVICE_GOPS: {'FP32': 441.5999755859375, 'FP16': 883.199951171875, 'U8': 441.5999755859375, 'I8': 441.5999755859375}

If no proper plugin is installed and the hardware itself is not working, then none would appear.

  2. May I know how your system comes to have two physical CPUs? Laptops/desktops usually have only one CPU.
  3. To answer this, I need to know the answer to question 2 above.

Thanks for your detailed answer.

  1. What is the CPU plugin? How can I check whether it is installed or not?
  2. Actually, this computer was provided by HP, so I don't know how it was configured to have 2 CPUs in one system.
brmarkus commented 2 years ago

When "hello_query_device" prints "[ INFO ] CPU :" then this means the "CPU plugin" has been loaded and it was able to find "CPU resources" successfully. If "hello_query_device" prints "[ INFO ] GPU :" then the "GPU plugin" has been loaded successfully as well and it was able to find at least one "Intel GPU" (by using required packages&drivers).

SummerZ2020 commented 2 years ago

When "hello_query_device" prints "[ INFO ] CPU :" then this means the "CPU plugin" has been loaded and it was able to find "CPU resources" successfully. If "hello_query_device" prints "[ INFO ] GPU :" then the "GPU plugin" has been loaded successfully as well and it was able to find at least one "Intel GPU" (by using required packages&drivers).

Thanks a lot. Then I don't need to worry about the CPU plugin now.

brmarkus commented 2 years ago

Usually the OpenVINO/Open-Model-Zoo samples and demos use the CPU for inference by default, i.e. the CPU plugin is used by default. I use a multi-socket CPU system as well, and the sockets get used together "as one big CPU". Do you see both CPUs and all of the CPU cores in "System Resource Load" viewers like "top", "htop" or "Task Manager"?

SummerZ2020 commented 2 years ago

U-cores with "System Resource Load" viewers like "top

yes, Here is the "Task Manager" screenshot. This is OpenVINO inference, only 20 cores utilized. openvino This is Tensorflow inference, all 40 cores utilized. tensorflow

brmarkus commented 2 years ago

Are you sure that doing inference with OpenVINO on your dual-socket system needs all cores of both CPUs? What if you start your application twice in parallel? Will still only the first 20 cores be used (fully used, at 100%, whereas in your screenshots they are occupied up to 80-90%)?

Can you try to find out what value "CPU_THREADS_NUM" currently has, as @alalek mentioned above? Can you change the value, e.g. set it to 40, and run one instance of your application?

alalek commented 2 years ago

@SummerZ2020 Thank you for updates!

Intel® Xeon® Silver 4114 Processor has 10 cores. 2 sockets have 20 cores (40 threads with HT).

There is documentation about the KEY_CPU_BIND_THREAD option which has a note about HT:

Binds inference threads to CPU cores. 'YES' (default) binding option maps threads to cores - this works best for static/synthetic scenarios like benchmarks. The 'NUMA' binding is more relaxed, binding inference threads only to NUMA nodes, leaving further scheduling to specific cores to the OS. This option might perform better in the real-life/contended scenarios. Note that for the latency-oriented cases (single execution stream, see below) both YES and NUMA options limit number of inference threads to the number of hardware cores (ignoring hyper-threading) on the multi-socket machines.

SummerZ2020 commented 2 years ago

Thanks @brmarkus. Your experience is very valuable for my task. Here are my answers:

Are you sure doing inference with OpenVINO on your dual-socket system needs all cores of both CPUs?
Ans: Sure, the application performance is still not up to standard; the faster the better.

What if you start your application twice in parallel? Will still only the first 20 cores be used (fully used, 100%, where in your screenshots they are occupied up to 80-90%)?
Ans: I have not tried this yet, as the application already occupies too many resources.

Can you try to find out what value "CPU_THREADS_NUM" currently has, as @alalek mentioned above?
Ans: In the hello_query_device result, the "CPU_THREADS_NUM" value is 0; I had not set this value in the code before. I also tried the GetConfig API in my code, and it returns 0 too. After setting CPU_THREADS_NUM (with SetConfig), the configured value can be read back. For example, when "CPU_THREADS_NUM" is set to 1, GetConfig("CPU","CPU_THREADS_NUM") returns 1.

Can you change the value, e.g. set it to 40, and run one instance of your application?
Ans: I tried setting it to 40, and still only 20 cores were utilized. But when I set it to 1, only 1 core was occupied while running. Does this mean OpenVINO only gets one CPU, not two? Here is the code I used:

InferenceEngine::Core ie;
std::vector<std::string> devices = ie.GetAvailableDevices();
//int cpu_thd_num = ie.GetConfig("CPU", "CPU_THREADS_NUM").as<int>();
ie.SetConfig({ { "CPU_THREADS_NUM", "1" } }, "CPU");

std::string cpuName = ie.GetMetric("CPU", METRIC_KEY(FULL_DEVICE_NAME)).as<std::string>();

std::string xml = "./model.h5.xml";
std::string bin = "./model.h5.bin";
InferenceEngine::CNNNetwork network = ie.ReadNetwork(xml, bin);
InferenceEngine::InputsDataMap inputs = network.getInputsInfo();
InferenceEngine::OutputsDataMap outputs = network.getOutputsInfo();

//std::string input_name = "";
for (auto item : inputs) {
    strInput_name = item.first;
    auto input_data = item.second;
    input_data->setPrecision(InferenceEngine::Precision::FP32);
    /*input_data->setLayout(Layout::NCDHW);*/
    //input_data->getPreProcess().setColorFormat(ColorFormat::RAW);
    //std::cout << "input name: " << input_name << std::endl;
}

//std::string output_name = "";
for (auto item : outputs) {
    strOutput_name = item.first;
    auto output_data = item.second;
    output_data->setPrecision(InferenceEngine::Precision::FP32);
    //std::cout << "output name: " << output_name << std::endl;
}

auto executable_network = ie.LoadNetwork(network, "CPU");
//auto executable_network = ie.LoadNetwork(network, allDevices, {});
infer_request = executable_network.CreateInferRequest();

Looking forward to your help again. Thanks a lot.

SummerZ2020 commented 2 years ago

@alalek Thanks for your info. Where I said "core" I should have said "thread": the program currently utilizes only half of the threads (20). Sorry for the confusion between cores and threads. According to the description above ("Note that for the latency-oriented cases (single execution stream, see below) both YES and NUMA options limit number of inference threads to the number of hardware cores (ignoring hyper-threading) on the multi-socket machines."), my application can only occupy 20 threads because the hardware core count is 20. But here is another test I did on a different PC with 2 CPUs. The CPU is a Silver 4114 too, but hyper-threading is disabled (20 cores and 20 threads in the screenshot). Only 10 of the 20 threads are occupied (screenshot: image). From this picture, we can see that only half of the cores of the 2 CPUs are occupied, not half of the threads.

@brmarkus How many threads does your application occupy? Does it occupy all the threads of both CPUs? Is that consistent with the description above ("both YES and NUMA options limit number of inference threads to the number of hardware cores (ignoring hyper-threading) on the multi-socket machines")? Thanks a lot.

SummerZ2020 commented 2 years ago

I have tried running inference asynchronously, and both CPUs are occupied this time. benchmark_app (%installationPath%Intel\openvino_2021\inference_engine\samples\cpp) is a great tool to test performance before implementation.
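
For completeness, a sketch of an asynchronous, throughput-oriented setup along these lines (assuming the 2021.x C++ API; "model.xml"/"model.bin" are placeholders, the request count is arbitrary, and the throughput-streams setting is an assumption in the spirit of benchmark_app's throughput mode):

#include <inference_engine.hpp>
#include <vector>

int main() {
    InferenceEngine::Core ie;
    auto network = ie.ReadNetwork("model.xml", "model.bin");
    // CPU_THROUGHPUT_AUTO lets the CPU plugin create multiple execution streams.
    auto exec = ie.LoadNetwork(network, "CPU",
                               {{ "CPU_THROUGHPUT_STREAMS", "CPU_THROUGHPUT_AUTO" }});
    // Several in-flight asynchronous requests are needed to keep all streams busy;
    // real code would fill each request's input blobs before starting it.
    std::vector<InferenceEngine::InferRequest> requests;
    for (int i = 0; i < 4; ++i)
        requests.push_back(exec.CreateInferRequest());
    for (auto& r : requests)
        r.StartAsync();
    for (auto& r : requests)
        r.Wait(InferenceEngine::InferRequest::WaitMode::RESULT_READY);
    return 0;
}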

Iffa-Intel commented 2 years ago

Closing issue, feel free to re-open or start a new issue if additional assistance is needed.