TensorFlow Lite seems to be going through lots of changes. I had a similar issue, whereby I did a checkout literally two days later and could not get it to compile again. Try to check out the version from the 25th of April; that worked for me.
Try commit 7051274e6ba1da5eb6c237d981c589c37b382047.
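In case it helps, here is a minimal sketch of how that can be done with standard git commands, run inside an existing clone (the date is just the one mentioned above):
```
# Pin the working tree to the suggested commit (a detached HEAD is fine for building):
git fetch origin
git checkout 7051274e6ba1da5eb6c237d981c589c37b382047

# Or, to find the last commit on master before a given date:
git rev-list -1 --before="2019-04-25" origin/master
```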
I'm not an expert at GitHub, but I'm not finding any way to select a "branch" based on either a date or the hash you just posted.
Can you give me a clue?
Is this it? https://github.com/tensorflow/tensorflow/tree/7051274e6ba1da5eb6c237d981c589c37b382047 Not sure how I got there, but if it's right I'll grab it and try again.
Thanks.
Try git reset --hard 7051274e6ba1da5eb6c237d981c589c37b382047 in the root of your TensorFlow clone.
I'm lost in the weeds and stuck.
I could not git clone https://github.com/tensorflow/tensorflow/tree/7051274e6ba1da5eb6c237d981c589c37b382047
But I could "Download Zip" to my desktop, scp it to the Odroid and unzip. Unfortunatley doing the build threw the exact same error :(
Permuting variations of the git reset --hard command in my root dir (where I downloaded the zip file) or in the unzipped directory give the error: fatal: Not a git repository (or any of the parent directories): .git
In each location I tried: git reset --hard 7051274e6ba1da5eb6c237d981c589c37b382047
and git reset --hard tensorflow-7051274e6ba1da5eb6c237d981c589c37b382047
I did get the reset command to apparently work by running it in the directory of my original git clone.
Unfortunately the build again stops with the flatbuffers.h error. :(
Ahh, sorry!! I forgot that you need to run tools/make/download_dependencies.sh before tools/make/build_rpi_lib.sh.
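Roughly, assuming the scripts sit where the paths in this thread suggest (under tensorflow/lite relative to the repo root), the order would be:
```
# run from the directory that contains tools/make (tensorflow/lite in this checkout)
./tools/make/download_dependencies.sh   # fetches flatbuffers and the other third-party sources
./tools/make/build_rpi_lib.sh           # then builds the armv7l static library
```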
To revert to that specific commit on the xu4 do:
git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow
git reset --hard 7051274e6ba1da5eb6c237d981c589c37b382047
Ah, one other thing I remember: it will fail on the benchmark tools, so you might want to edit tools/make/Makefile and change the all target like this:
all: $(LIB_PATH) $(MINIMAL_BINARY)
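If editing by hand is a pain, a one-liner along these lines should do the same thing; it assumes a single all: target line in the Makefile and replaces whatever follows it with the two prerequisites quoted above (path shown relative to the repo root):
```
# Drop the benchmark binaries from the default target so the build stops after the static library
sed -i 's/^all: .*/all: $(LIB_PATH) $(MINIMAL_BINARY)/' tensorflow/lite/tools/make/Makefile
```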
BINGO sort of. It compiled for quite some time and finally stopped with:
collect2: error: ld returned 1 exit status
tensorflow/lite/tools/make/Makefile:244: recipe for target '/home/odroid/tensorflow.git/tensorflow/lite/tools/make/gen/rpi_armv7l/bin/benchmark_model' failed
make: *** [/home/odroid/tensorflow.git/tensorflow/lite/tools/make/gen/rpi_armv7l/bin/benchmark_model] Error 1
Presumably it's trying to build a library your code uses.
No, I don't need the benchmark tools, only the tools/make/gen/rpi_armv7l/lib/libtensorflow-lite.a static library; see the comment above. You need to copy that into the project as lib/libtensorflow-lite_armv7l.a, or adjust the Makefile accordingly - if you don't want to use the lib I already included in this repo.
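For reference, a sketch of that copy step, assuming the project checkout is the ~/codrive/hw_sampling/google_coral directory seen later in this thread and that you run it from the TensorFlow repo root:
```
# Copy the freshly built static library into the project under the name its Makefile expects
cp tensorflow/lite/tools/make/gen/rpi_armv7l/lib/libtensorflow-lite.a \
   ~/codrive/hw_sampling/google_coral/lib/libtensorflow-lite_armv7l.a
```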
Do you expect that library to work on Odroid Mate16? Edit: It did appear to create tools/make/gen/rpi_armv7l/lib/libtensorflow-lite.a, so I guess enough of the build completed.
If so, I'll give it a try, but running make dies horribly:
~/codrive/hw_sampling/google_coral$ make
Package opencv4 was not found in the pkg-config search path.
Perhaps you should add the directory containing `opencv4.pc' to the PKG_CONFIG_PATH environment variable
No package 'opencv4' found
arm-linux-gnueabihf-g++ -o obj/tpu_obj_detect.o -O3 --std=c++11 -march=armv7-a -mfpu=neon-vfpv4 -funsafe-math-optimizations -ftree-vectorize -fPIC -I. -I/root/kits/tensorflow/ -I./lib/ -I/root/kits/tensorflow//tensorflow/lite/tools/make/downloads/flatbuffers/include/ -c tpu_obj_detect.cpp
cc1plus: error: /root/kits/tensorflow/: Permission denied
cc1plus: error: /root/kits/tensorflow//tensorflow/lite/tools/make/downloads/flatbuffers/include/: Permission denied
Makefile:41: recipe for target 'obj/tpu_obj_detect.o' failed
make: *** [obj/tpu_obj_detect.o] Error 1
Looks like my fairly minimal Odroid Mate16 (and Mate18) systems are missing a lot of what you are using. But they do compile and run the OpenVINO C++ Movidius sample code which is what got me here after reporting the Mate18 vs Mate16 performance decrement.
Thanks for your help.
True, I will need to improve the building process. You need to edit the Makefile in ~/codrive/hw_sampling/google_coral and change TF_PATH=/root/kits/tensorflow/ to point to your TensorFlow path.
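A one-liner sketch of that edit; the pattern assumes TF_PATH is assigned on its own line, and the example path (/home/odroid/tensorflow.git/) is only a guess based on the earlier benchmark_model error - substitute wherever your readable TensorFlow clone actually lives:
```
# Repoint TF_PATH in the project's Makefile at your own TensorFlow checkout
sed -i 's|^TF_PATH[ ]*=.*|TF_PATH=/home/odroid/tensorflow.git/|' ~/codrive/hw_sampling/google_coral/Makefile
```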
I've gotten further, but looks like linking is failing:
odroid@XU16:~/codrive/hw_sampling/google_coral$ make
arm-linux-gnueabihf-g++ -o tpu_obj_detect obj/tpu_obj_detect.o obj/tpu_worker.o ./lib/libtensorflow-lite_armv7l.a -lpthread -lm -ldl -L/home/odroid/edgetpu_api/libedgetpu/libedgetpu_arm32.so -L/home/odroid/inference_engine_vpu_arm/opencv/lib/libopencv_videoio.so.4.1.0 -L/home/odroid/inference_engine_vpu_arm/opencv/lib/libopencv_imgcodecs.so.4.1.0 -L/home/odroid/inference_engine_vpu_arm/opencv/lib/libopencv_imgproc.so.4.1.0 -L/home/odroid/inference_engine_vpu_arm/opencv/lib/libopencv_core.so.4.1.0
obj/tpu_obj_detect.o: In function `annotateImage(cv::Mat&, std::vector<networkResult, std::allocator<networkResult> >&, ...)':
undefined reference to `cv::putText(cv::_InputOutputArray const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, cv::Point_<int>, int, double, cv::Scalar_<double>, int, int, bool)'
tpu_obj_detect.cpp:(.text+0x1d4): undefined reference to `cv::rectangle(cv::InputOutputArray const&, cv::Rect...
I'm trying to use the OpenCV 4.1.0 that was installed as part of OpenVINO
Here is my attempt at editing your Makefile (markdown is not helping!):
Maybe sharing the results of:
pkg-config opencv --cflags
pkg-config opencv4 --libs
would give me a clue.
Of course it's possible your version of OpenCV and mine are hopelessly incompatible, in which case I'm done.
You need to link -lopencv_imgproc -lopencv_highgui -lopencv_core -lopencv_videoio -lopencv_imgcodecs
My pkg-config will be different since I compiled OpenCV with the world module (I don't recall whether OpenVINO includes it):
pkg-config opencv4 --libs
-L/usr/local/lib -lopencv_world
pkg-config opencv4 --cflags
-I/usr/local/include/opencv4/opencv -I/usr/local/include/opencv4
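For what it's worth, here is a sketch of how the link line might look once the library directory and the -l flags are separated out (the OpenVINO directory is taken from the paths above; whether unversioned .so symlinks exist there, and the exact Edge TPU library location, are assumptions):
```
arm-linux-gnueabihf-g++ -o tpu_obj_detect obj/tpu_obj_detect.o obj/tpu_worker.o \
    ./lib/libtensorflow-lite_armv7l.a -lpthread -lm -ldl \
    -L/home/odroid/inference_engine_vpu_arm/opencv/lib \
    -lopencv_core -lopencv_imgproc -lopencv_highgui -lopencv_imgcodecs -lopencv_videoio \
    -L/home/odroid/edgetpu_api/libedgetpu -l:libedgetpu_arm32.so
# If only versioned files such as libopencv_core.so.4.1.0 are present, either create the
# .so symlinks or use the GNU ld form -l:libopencv_core.so.4.1.0 instead.
# At run time the same directory also needs to be on LD_LIBRARY_PATH (or in ld.so.conf).
```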
If OpenCV is not compatible - entirely possible as well; I didn't check what OpenVINO includes exactly, as I usually insist on compiling my own - here is the build/cmake (note you need to change the paths for opencv_contrib).
First get the releases of opencv-4.1.0 and opencv_contrib-4.1.0, then:
cd opencv-4.1.0/
mkdir build
cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D OPENCV_GENERATE_PKGCONFIG=YES -D WITH_V4L=ON -D INSTALL_C_EXAMPLES=ON -DBUILD_opencv_world=ON -D INSTALL_PYTHON_EXAMPLES=OFF -D OPENCV_EXTRA_MODULES_PATH=/root/kits/opencv_contrib-4.1.0/modules -D WITH_OPENCL=ON -D WITH_GTK=ON -D WITH_TBB=ON -D WITH_OPENGL=ON -D WITH_OPENMP=ON -D BUILD_EXAMPLES=ON -D ENABLE_NEON=ON -D OPENCV_ENABLE_NONFREE=ON -D WITH_FFMPEG=ON -D WITH_GTK_2_X=ON -D WITH_LIBV4L=ON -D WITH_XINE=ON -D WITH_GSTREAMER=ON -D WITH_GDAL=ON -D WITH_HALIDE=ON -D ENABLE_PRECOMPILED_HEADERS=OFF -D BUILD_PERF_TESTS=OFF -D BUILD_TESTS=OFF -D ENABLE_CXX11=ON -DEXTRA_C_FLAGS="-O3 -march=native -mcpu=cortex-a15.cortex-a7 -mtune=cortex-a15.cortex-a7 -mfpu=neon-vfpv4 -mfloat-abi=hard -fomit-frame-pointer -ffast-math" -DEXTRA_CXX_FLAGS="-O3 -std=c++11 -march=native -mcpu=cortex-a15.cortex-a7 -mtune=cortex-a15.cortex-a7 -mfpu=neon-vfpv4 -mfloat-abi=hard -fomit-frame-pointer -ffast-math" -DWITH_JPEG=ON ..
make -j5
make install
I'm not in any position to build OpenCV right now, but I think you gave me enough clues to get the Makefile past the OpenCV linking. It looks like it compiled using your ./lib/libtensorflow-lite_armv7l.a.
Thanks!
FYI, here is the Makefile that appears to have worked using the Raspbian OpenVINO OpenCV (l_openvino_toolkit_raspbi_p_2019.1.094.tgz): Makefile.txt
Code appears to be running:
odroid@XU16:~/codrive/hw_sampling/google_coral$ ./tpu_obj_detect --camera_device=0
open camera: 0
INFO: Initialized TensorFlow Lite runtime.
W0520 22:36:10.166466 21990 package_registry.cc:65] Minimum runtime version required by package (5) is lower than expected (10).
FPS/I: 11.00 / 67.59 (15ms) : time : 1.00 s :: OBJECT(0.42) :83: book :: (315,412,327,465)
FPS/I: 13.00 / 66.36 (15ms) : time : 2.00 s :: OBJECT(0.58) :83: book :: (345,419,355,471)
FPS/I: 13.67 / 66.56 (15ms) : time : 3.00 s :: OBJECT(0.42) :81: refrigerator :: (102,200,264,476)
FPS/I: 17.75 / 65.76 (15ms) : time : 4.00 s :: OBJECT(0.34) :81: refrigerator :: (110,193,314,475)
FPS/I: 20.20 / 70.33 (14ms) : time : 5.00 s :: OBJECT(0.34) :83: book :: (526,153,534,208)
FPS/I: 21.83 / 65.98 (15ms) : time : 6.00 s :: OBJECT(0.42) :81: refrigerator :: (100,194,238,475)
FPS/I: 23.00 / 67.26 (15ms) : time : 7.00 s :: OBJECT(0.42) :81: refrigerator :: (100,194,238,475)
FPS/I: 23.88 / 67.52 (15ms) : time : 8.00 s :: OBJECT(0.42) :83: book :: (345,416,355,471)
FPS/I: 24.56 / 67.75 (15ms) : time : 9.00 s :: OBJECT(0.42) :83: book :: (344,417,354,472)
FPS/I: 25.10 / 67.20 (15ms) : time : 10.00 s :: OBJECT(0.42) :83: book :: (344,416,355,471)
FPS/I: 25.55 / 66.93 (15ms) : time : 11.00 s :: OBJECT(0.42) :83: book :: (318,410,328,466)
FPS/I: 25.92 / 67.37 (15ms) : time : 12.00 s :: OBJECT(0.27) :83: book :: (343,416,355,471)
FPS/I: 26.23 / 67.38 (15ms) : time : 13.00 s :: OBJECT(0.66) :0: person :: (121,139,393,479)
FPS/I: 26.50 / 67.17 (15ms) : time : 14.00 s :: OBJECT(0.73) :0: person :: (154,131,427,477)
FPS/I: 26.73 / 66.53 (15ms) : time : 15.00 s :: OBJECT(0.58) :0: person :: (81,93,267,480)
FPS/I: 26.94 / 66.44 (15ms) : time : 16.00 s :: OBJECT(0.34) :81: refrigerator :: (104,198,266,474)
FPS/I: 27.12 / 68.35 (15ms) : time : 17.00 s :: OBJECT(0.42) :83: book :: (344,416,355,471)
FPS/I: 27.28 / 67.36 (15ms) : time : 18.00 s :: OBJECT(0.34) :81: refrigerator :: (106,194,277,479)
FPS/I: 27.42 / 66.94 (15ms) : time : 19.00 s :: OBJECT(0.42) :81: refrigerator :: (96,194,249,479)
FPS/I: 27.55 / 67.31 (15ms) : time : 20.00 s :: OBJECT(0.42) :83: book :: (345,415,355,471)
FPS/I: 27.67 / 66.52 (15ms) : time : 21.00 s :: OBJECT(0.27) :81: refrigerator :: (104,192,275,477)
FPS/I: 27.77 / 67.82 (15ms) : time : 22.00 s :: OBJECT(0.42) :83: book :: (345,416,355,471)
FPS/I: 27.87 / 67.26 (15ms) : time : 23.00 s :: OBJECT(0.34) :81: refrigerator :: (106,196,277,477)
FPS/I: 27.96 / 67.20 (15ms) : time : 24.00 s :: OBJECT(0.34) :83: book :: (343,415,355,471)
FPS/I: 28.04 / 67.59 (15ms) : time : 25.00 s :: OBJECT(0.34) :81: refrigerator :: (103,204,234,479)
FPS/I: 28.12 / 67.68 (15ms) : time : 26.00 s :: OBJECT(0.42) :83: book :: (344,415,355,471)
FPS/I: 28.19 / 67.76 (15ms) : time : 27.00 s :: OBJECT(0.34) :81: refrigerator :: (104,196,238,477)
FPS/I: 28.25 / 67.11 (15ms) : time : 28.00 s :: OBJECT(0.42) :83: book :: (345,416,355,471)
FPS/I: 28.31 / 66.22 (15ms) : time : 29.00 s :: OBJECT(0.34) :81: refrigerator :: (104,194,278,479)
FPS/I: 28.37 / 67.28 (15ms) : time : 30.00 s :: OBJECT(0.42) :83: book :: (315,411,327,467)
FPS/I: 28.42 / 66.69 (15ms) : time : 31.00 s :: OBJECT(0.34) :81: refrigerator :: (102,196,238,477)
qFPS/I: 28.47 / 69.47 (14ms) : time : 32.00 s :: OBJECT(0.42) :83: book :: (344,417,355,471)
FPS/I: 28.52 / 67.95 (15ms) : time : 33.00 s :: OBJECT(0.42) :83: book :: (344,415,355,471)
FPS/I: 28.56 / 66.25 (15ms) : time : 34.00 s :: OBJECT(0.27) :83: book :: (525,154,534,209)
FPS/I: 28.60 / 66.04 (15ms) : time : 35.00 s :: OBJECT(0.34) :81: refrigerator :: (103,194,280,479)
^C
But I had to stop it with Ctrl-C. I'm not sure how it compares to what you report for Ubuntu 18.04:
**odroid xu4 + TPU + Logitech 920 WEBCAM**: 6.1 W - 24.89 Real FPS / 42.90 Inference FPS (23ms) - load average: 0.52
My entire purpose is to get results on Ubuntu 16.04 to compare with your 18.04.
Actually the average you got, 15ms / 67 FPS inference time, is slightly better than what I got (17ms). I see you are using the TPU in direct, not throttled, mode, so your result should be compared against that figure. Then again, I have the NCS2 hanging off the USB port and that might interfere - I have not tried with a clean slate. In any case performance looks very good, almost on par with the Google Coral dev board, where the TPU is integrated into the SoC - with that you get 12-13ms inference time. I have to say that the overall performance for Ubuntu 18.04 / 4.14 kernel is better than 16.04: better compatibility with various devices and more stable. In the case of the NCS2, the regression you see might be due to Intel's reluctance to offer proper ARM support, and by that I mean open-source drivers like the original NCSDK - it might as well be the way they compiled libmyriad.so, which works better against 16.04 than 18.04.
@larrylart
Thanks for all your help. You've answered my question.
It's been a very long time since I had to hack on a Makefile; I'd have given up without your assistance.
Having followed the OpenVINO versions for almost a year now, I can see from the evolution of the install scripts that they are working on 18.04 support, but none is official yet. I'd guess the performance regression is not intentional or malicious on their part.
I'm trying to build this sample code on my XU-4 running Mate 16.04 and Mate 18.04 to help evaluate a potential performance regression in Mate18 vs. Mate16 that I uncovered running an OpenVINO C++ example code for the Movidius NCS2.
I get the same error on both systems. Doing:
./tools/make/build_rpi_lib.sh
I get this error:
In file included from ./tensorflow/lite/core/api/op_resolver.h:20:0,
                 from ./tensorflow/lite/core/api/flatbuffer_conversions.h:24,
                 from tensorflow/lite/core/api/flatbuffer_conversions.cc:16:
./tensorflow/lite/schema/schema_generated.h:21:37: fatal error: flatbuffers/flatbuffers.h: No such file or directory
I've no idea what package flatbuffers.h belongs to :( but I doubt it's the only missing dependency.
The idea is that if the Coral code doesn't have the performance decrement, the problem is likely in OpenVINO, as it's not "officially" supported on 18.04 at present.
Python samples using Coral and Movidius show differences within the run-to-run variance of the code; Mate16 is on average ~0.5 fps higher, although I won't claim statistical significance.
Gist of the performance decrement: there appears to be a performance regression where Mate18 is significantly worse than the Mate16 system; remember this is C++ code, not Python. I get the following results from the sample code:
Odroid XU-4 Mate16: NCS: 8.22 fps, NCS2: 11.5 fps
Odroid XU-4 Mate18: NCS: 6.56 fps, NCS2: 8.36 fps (looks to be a performance regression on Mate18 vs. Mate16!)
Raspberry Pi3B: NCS: 6.93 fps, NCS2: 8.58 fps
Basically Mate18 is a bit worse than a Pi3B here!