@alvoron Can someone from your team help take a look? Thanks!
Hi @marcjasner
I noticed you are building OpenVINO from the 2021.4 branch. Is it possible to upgrade to the master branch? I just built OpenVINO with the Arm Plugin on an RPi4 with Ubuntu 20.04 and didn't see any issues when testing benchmark_app with the yolo-v3-tiny-tf model.
Could you test with the yolo-v3-tiny-tf model on your current build? Please note you will need to download the model and convert it to IR on a separate system. Here are the instructions to set up a virtual environment to quickly convert the model; I did the following on my Windows system:
python -m venv openvino_env_2021.4.0
openvino_env_2021.4.0\Scripts\activate
python -m pip install --upgrade pip
pip install openvino-dev[tensorflow2]==2021.4.0
omz_downloader --name yolo-v3-tiny-tf
omz_converter --name yolo-v3-tiny-tf
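After the conversion, copy the generated IR over to the Pi. As a rough example (the hostname and destination path here are just placeholders for your setup; by default the converted files should land under public/yolo-v3-tiny-tf/FP32 and FP16):
scp -r public/yolo-v3-tiny-tf/FP32 pi@raspberrypi:/home/pi/yolo-v3-tiny-tf-FP32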
If you still experience the same error message, could you build from master?
Create a virtual environment on an x86 system to download and convert the model.
python -m venv openvino_env_2022.1.0
openvino_env_2022.1.0\Scripts\activate
python -m pip install --upgrade pip
pip install openvino-dev[tensorflow2]==2022.1.0.dev20220302
omz_downloader --name yolo-v3-tiny-tf
omz_converter --name yolo-v3-tiny-tf
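Note that the activation command above is Windows-style; on a Linux host the equivalent is:
source openvino_env_2022.1.0/bin/activate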
Run the following on your RPi to build OpenVINO with the Arm Plugin from the master branch.
git clone https://github.com/openvinotoolkit/openvino.git
git clone --recurse-submodules --single-branch --branch=master https://github.com/openvinotoolkit/openvino_contrib.git
cd openvino
git submodule update --init --recursive
chmod +x install_build_dependencies.sh
./install_build_dependencies.sh
mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release -DIE_EXTRA_MODULES=/home/ubuntu/openvino_contrib/modules -DBUILD_java_api=OFF ..
make
cd ../bin/aarch64/Release
cp -r /mnt/usb/yolo-v3-tiny-tf-FP32 .
./benchmark_app -m yolo-v3-tiny-tf-FP32/yolo-v3-tiny-tf.xml -d CPU
[Step 1/11] Parsing and validating input arguments
[ INFO ] Parsing input parameters
[Step 2/11] Loading Inference Engine
[ INFO ] OpenVINO: OpenVINO Runtime version ......... 2022.1.0
[ INFO ] Build ........... custom_master_1bbd92a8f816c3befde78dc1d5aa41645fd0db80
[ INFO ]
[ INFO ] Device info:
[ INFO ] CPU
[ INFO ] openvino_arm_cpu_plugin version ......... 2022.1.0
[ INFO ] Build ........... custom_master_1bbd92a8f816c3befde78dc1d5aa41645fd0db80
[ INFO ]
[ INFO ]
[Step 3/11] Setting device configuration
[ WARNING ] -nstreams default value is determined automatically for CPU device. Although the automatic selection usually provides a reasonable performance, but it still may be non-optimal for some cases, for more information look at README.
[Step 4/11] Reading network files
[ INFO ] Loading network files
[ INFO ] Read network took 806.80 ms
[Step 5/11] Resizing network to match image sizes and given batch
[Step 6/11] Configuring input of the model
[ INFO ] Network batch size: 1
Network inputs:
image_input (node: image_input) : u8 / [N,H,W,C]
Network outputs:
conv2d_12/Conv2D/YoloRegion (node: conv2d_12/Conv2D/YoloRegion) : f32 / [...]
conv2d_9/Conv2D/YoloRegion (node: conv2d_9/Conv2D/YoloRegion) : f32 / [...]
[Step 7/11] Loading the model to the device
[ INFO ] Load network took 30.17 ms
[Step 8/11] Setting optimal runtime parameters
[ INFO ] Device: CPU
[ INFO ] { NETWORK_NAME , yolo-v3-tiny-tf }
[ INFO ] { OPTIMAL_NUMBER_OF_INFER_REQUESTS , 4 }
[ INFO ] { NUM_STREAMS , 4 }
[ INFO ] { INFERENCE_NUM_THREADS , 0 }
[Step 9/11] Creating infer requests and preparing input blobs with data
[ WARNING ] No input files were given: all inputs will be filled with random values!
[ INFO ] Test Config 0
[ INFO ] image_input ([N,H,W,C], u8, {1, 416, 416, 3}, static): random (image is expected)
[Step 10/11] Measuring performance (Start inference asynchronously, 4 inference requests, limits: 60000 ms duration)
[ INFO ] BENCHMARK IS IN INFERENCE ONLY MODE.
[ INFO ] Input blobs will be filled once before performance measurements.
[ INFO ] First inference took 2995.30 ms
[Step 11/11] Dumping statistics report
[ INFO ] Count: 164 iterations
[ INFO ] Duration: 62151.71 ms
[ INFO ] Latency:
[ INFO ] Median: 1416.22 ms
[ INFO ] Average: 1501.49 ms
[ INFO ] Min: 1150.11 ms
[ INFO ] Max: 6200.08 ms
[ INFO ] Throughput: 2.64 FPS
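If you later want a more direct comparison against another device, you can pin the values benchmark_app otherwise picks automatically (streams, infer requests, duration). A sketch, reusing the model path and the values reported above:
./benchmark_app -m yolo-v3-tiny-tf-FP32/yolo-v3-tiny-tf.xml -d CPU -api async -nstreams 4 -nireq 4 -t 60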
Regards, Jesus
Thanks for the reply. I'll give it a try and let you know how it goes. Thanks again
I am unable to run the converter on my Linux box. I get the following output:
$ omz_converter --name yolo-v3-tiny-tf
========== Converting yolo-v3-tiny-tf to IR (FP16)
Conversion command: /home/marc/projects/accompaneed/openvino_10776/openvino_env_2021.4.0/bin/python -m mo --framework=tf --data_type=FP16 --output_dir=/home/marc/projects/accompaneed/openvino_10776/public/yolo-v3-tiny-tf/FP16 --model_name=yolo-v3-tiny-tf '--input_shape=[1,416,416,3]' --input=image_input '--scale_values=image_input[255]' --reverse_input_channels --transformations_config=/home/marc/projects/accompaneed/openvino_10776/public/yolo-v3-tiny-tf/yolo-v3-tiny-tf/yolo-v3-tiny-tf.json --input_model=/home/marc/projects/accompaneed/openvino_10776/public/yolo-v3-tiny-tf/yolo-v3-tiny-tf/yolo-v3-tiny-tf.pb
Model Optimizer arguments: Common parameters:
I am unable to build the arm plugin. During build I get the following error:
[ 91%] Building CXX object build-modules/arm_plugin/src/CMakeFiles/openvino_arm_cpu_plugin.dir/transformations/store_result_name.cpp.o
[ 91%] Linking CXX shared module /home/pi/openvino/bin/armv7l/Release/lib/libopenvino_arm_cpu_plugin.so
/usr/bin/ld: ../thirdparty/libarm_compute-static.a(CPPScheduler.o): in function `arm_compute::(anonymous namespace)::Thread::Thread(int)':
CPPScheduler.cpp:(.text._ZN11arm_compute12_GLOBAL__N_16ThreadC2Ei+0xf8): undefined reference to `pthread_create'
collect2: error: ld returned 1 exit status
make[2]: *** [build-modules/arm_plugin/src/CMakeFiles/openvino_arm_cpu_plugin.dir/build.make:2001: /home/pi/openvino/bin/armv7l/Release/lib/libopenvino_arm_cpu_plugin.so] Error 1
make[1]: *** [CMakeFiles/Makefile2:5457: build-modules/arm_plugin/src/CMakeFiles/openvino_arm_cpu_plugin.dir/all] Error 2
make: *** [Makefile:182: all] Error 2
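Could this just be libpthread not making it onto the link line for the plugin module? If that is the cause, would passing -pthread through the CMake linker flags help? Something like this (the openvino_contrib path is a placeholder for wherever it's cloned):
cmake -DCMAKE_BUILD_TYPE=Release -DIE_EXTRA_MODULES=<path-to>/openvino_contrib/modules -DCMAKE_SHARED_LINKER_FLAGS="-pthread" -DCMAKE_MODULE_LINKER_FLAGS="-pthread" ..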
@marcjasner could you please provide the command you are using to build the plugin? If you don't mind using Docker, could you try the first build approach? https://github.com/openvinotoolkit/openvino_contrib/wiki/How-to-build-ARM-CPU-plugin#approach-1-build-opencv-openvino-and-the-plugin-using-pre-configured-dockerfile-cross-compiling-the-preferred-way
I am following jgespino's instructions above.
I'm doing it on a Raspbian Legacy (Buster 32-bit) image.
I will attempt the Docker method you mentioned and let you know how it works out.
UPDATE: Fixed this. Please ignore this comment
This command:
docker image build -t arm-plugin -f Dockerfile.RPi32_buster .
failed with the following error:
Step 15/18 : COPY ../scripts/arm_cpu_plugin_build.sh /arm_cpu_plugin_build.sh
COPY failed: forbidden path outside the build context: ../scripts/arm_cpu_plugin_build.sh ()
The open_model_zoo demos failed to build, killing the build process. I'm running it again now.
Here is the error message:
CMake Error at CMakeLists.txt:141 (find_package): By not providing "FindOpenVINO.cmake" in CMAKE_MODULE_PATH this project has asked CMake to find a package configuration file provided by "OpenVINO", but CMake did not find one.
Could not find a package configuration file provided by "OpenVINO" with any of the following names:
OpenVINOConfig.cmake
openvino-config.cmake
Add the installation prefix of "OpenVINO" to CMAKE_PREFIX_PATH or set "OpenVINO_DIR" to a directory containing one of the above files. If "OpenVINO" provides a separate development package or SDK, be sure it has been installed.
-- Configuring incomplete, errors occurred! See also "/arm_cpu_plugin/open_model_zoo/build/CMakeFiles/CMakeOutput.log".
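From the message it sounds like the demos' CMake just needs to be pointed at the OpenVINO build, i.e. something along these lines (I don't know where OpenVINOConfig.cmake actually ends up inside the container, so the path below is a guess):
cmake -DCMAKE_PREFIX_PATH=/arm_cpu_plugin/openvino/build ..
or equivalently:
cmake -DOpenVINO_DIR=/arm_cpu_plugin/openvino/build ..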
@marcjasner I've created and merged PR to fix this compilation issue: https://github.com/openvinotoolkit/openvino_contrib/pull/334
Could you please pull the latest changes and repeat the build procedure?
The same error occurred:
CMake Error at CMakeLists.txt:141 (find_package): By not providing "FindOpenVINO.cmake" in CMAKE_MODULE_PATH this project has asked CMake to find a package configuration file provided by "OpenVINO", but CMake did not find one.
Could not find a package configuration file provided by "OpenVINO" with any of the following names:
OpenVINOConfig.cmake
openvino-config.cmake
Add the installation prefix of "OpenVINO" to CMAKE_PREFIX_PATH or set "OpenVINO_DIR" to a directory containing one of the above files. If "OpenVINO" provides a separate development package or SDK, be sure it has been installed.
-- Configuring incomplete, errors occurred! See also "/arm_cpu_plugin/open_model_zoo/build/CMakeFiles/CMakeOutput.log".
@marcjasner it seems you didn't get the latest changes from the contrib repository.
Did you do git pull for the openvino_contrib repository?
If so, could you please delete the openvino_contrib folder and clone the repository one more time:
git clone --recurse-submodules --single-branch --branch=master https://github.com/openvinotoolkit/openvino_contrib.git
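For example (assuming openvino_contrib sits in your home directory):
cd ~
rm -rf openvino_contrib
git clone --recurse-submodules --single-branch --branch=master https://github.com/openvinotoolkit/openvino_contrib.git
cd openvino_contrib
git log --oneline -3
The last command just lets you confirm the fresh checkout actually contains the recent commits, including the fix from PR 334.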
The build completed successfully! I will install it on my Pi, see if the ARM CPU plugin works, and report the results here.
Excellent! I was able to follow the initial instructions and get the yolo-v3-tiny-tf model converted and running against the CPU device on my Pi.
Many thanks to all involved in helping me for their patience and assistance!
Glad to hear that. Are you going to close the ticket?
Closing it now. Thanks again!
When attempting to use benchmark_app to compare performance between the CPU and MYRIAD devices, benchmark_app always fails with a "bad cast" error when using the CPU device.
I built OpenVINO on an 8GB RPi4 following these instructions: https://www.intel.com/content/www/us/en/support/articles/000057005/boards-and-kits.html
I then built the libarmPlugin.so CPU plugin following these instructions: https://github.com/openvinotoolkit/openvino_contrib/wiki/How-to-build-ARM-CPU-plugin
I then downloaded and updated the open_model_zoo models following these instructions: https://www.intel.com/content/www/us/en/support/articles/000055510/boards-and-kits/neural-compute-sticks.html
When I attempt to run some of the demos on either the CPU or MYRIAD device it works fine, but if I attempt to use benchmark_app I get the following error every time I try to use the CPU device:
~/openvino/bin/armv7l/Release $ ./benchmark_app -i ~/Pictures/test/ -m ~/modelShare/projects/accompaneed/open_model_zoo/models/intel/face-detection-adas-0001/FP16/face-detection-adas-0001.xml -api async -d CPU
[Step 1/11] Parsing and validating input arguments
[ INFO ] Parsing input parameters
[ INFO ] Files were added: 2
[ INFO ] /home/pi/Pictures/test//MomAndDad.jpg
[ INFO ] /home/pi/Pictures/test//walk.jpg
[Step 2/11] Loading Inference Engine
[ INFO ] InferenceEngine: IE version ......... 2021.4.0
Build ........... custom_HEAD_5cee8bbf29797f4544b343e803de957e9f041f92
[ INFO ] Device info:
CPU
armPlugin version ......... 2021.4.0
Build ........... custom_HEAD_db62079e2fd248eb2b2a121dec43160164cd5f79
[Step 3/11] Setting device configuration
[ WARNING ] -nstreams default value is determined automatically for CPU device. Although the automatic selection usually provides a reasonable performance, but it still may be non-optimal for some cases, for more information look at README.
[Step 4/11] Reading network files
[ INFO ] Loading network files
[ INFO ] Read network took 322.21 ms
[Step 5/11] Resizing network to match image sizes and given batch
[ INFO ] Network batch size: 1
[Step 6/11] Configuring input of the model
Network inputs:
data : U8 / NCHW
Network outputs:
detection_out : FP32 / NCHW
[Step 7/11] Loading the model to the device
[ INFO ] Load network took 187.28 ms
[Step 8/11] Setting optimal runtime parameters
[ ERROR ] std::bad_cast
Can someone explain what causes this and if it's something I'm doing wrong? Thanks