t-kuha / zynq-library

Various Linux library files for Zynq-7000 series

Deploying quantized Tensorflow lite model using zynq-library on Zynq-7000 #2

Afef00 opened this issue 3 years ago

Afef00 commented 3 years ago

Hello, I want to deploy a quantized TensorFlow Lite model (Inception v4) on a Zedboard using the ARM Compute Library, but I don't know how to go about it. Any help would be appreciated. Thank you.

t-kuha commented 3 years ago

How about this document from ARM?
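That document runs a quantized .tflite model through ArmNN's TfLiteParser. For orientation, the core of that flow looks roughly like the sketch below; the model file name and the "input"/"output" tensor names are placeholders (use your model's actual tensors), and the exact API varies a little between ArmNN releases:

```cpp
#include <armnn/ArmNN.hpp>
#include <armnnTfLiteParser/ITfLiteParser.hpp>

#include <cstdint>
#include <iostream>
#include <vector>

int main()
{
    // Parse the quantized .tflite model into an ArmNN network graph.
    // File name and tensor names below are placeholders.
    auto parser = armnnTfLiteParser::ITfLiteParser::Create();
    armnn::INetworkPtr network =
        parser->CreateNetworkFromBinaryFile("inception_v4_quant.tflite");

    // Binding info (layer id + TensorInfo) for the model's input and output tensors.
    auto inputBinding  = parser->GetNetworkInputBindingInfo(0, "input");
    auto outputBinding = parser->GetNetworkOutputBindingInfo(0, "output");

    // Create a runtime and optimize for CpuAcc (the NEON-accelerated
    // Compute Library backend), falling back to the reference backend.
    armnn::IRuntime::CreationOptions options;
    armnn::IRuntimePtr runtime = armnn::IRuntime::Create(options);
    std::vector<armnn::BackendId> backends = { armnn::Compute::CpuAcc, armnn::Compute::CpuRef };
    armnn::IOptimizedNetworkPtr optNet =
        armnn::Optimize(*network, backends, runtime->GetDeviceSpec());

    armnn::NetworkId netId;
    runtime->LoadNetwork(netId, std::move(optNet));

    // Quantized (uint8) buffers sized from the parsed tensor shapes.
    std::vector<uint8_t> inputData(inputBinding.second.GetNumElements());
    std::vector<uint8_t> outputData(outputBinding.second.GetNumElements());
    // ... fill inputData with a resized, quantized image here ...

    armnn::InputTensors inputTensors {
        { inputBinding.first, armnn::ConstTensor(inputBinding.second, inputData.data()) } };
    armnn::OutputTensors outputTensors {
        { outputBinding.first, armnn::Tensor(outputBinding.second, outputData.data()) } };

    // Run inference; outputData then holds the quantized class scores.
    runtime->EnqueueWorkload(netId, inputTensors, outputTensors);

    std::cout << "first output value: " << static_cast<int>(outputData[0]) << std::endl;
    return 0;
}
```

The Makefile in the document then just cross-compiles a program of this kind and links it against libarmnn and libarmnnTfLiteParser.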

Afef00 commented 3 years ago

Hi, I tried to use ArmNN by following this document, and at the step of building the mobilenetv1_quant_tflite program I get this error:

arm-linux-gnueabihf-g++: error: -E or -x required when input is from standard input
arm-linux-gnueabihf-g++: error: -E or -x required when input is from standard input
Makefile:14: recipe for target 'mobilenetv1_quant_tflite' failed
make: *** [mobilenetv1_quant_tflite] Error 1

This is the Makefile:

ARMNN_ROOT = /home/lsa/armnn-pi/armnn
ARMNN_BUILD = /home/lsa/armnn-pi/armnn/build
BOOST_ROOT = /home/lsa/armnn-pi/boost

CXX= arm-linux-gnueabihf-g++
CPPFLAGS=-DARMNN_TF_LITE_PARSER -I$(ARMNN_ROOT)/include -I$(ARMNN_ROOT)/src/backends -I$(ARMNN_ROOT)/src/armnnUtils -I$(ARMNN_ROOT)/tests -I$(BOOST_ROOT)/include
CFLAGS=-Wall -O3 -std=c++14 -fPIE
LDFLAGS=-pie -L$(ARMNN_BUILD) -L$(ARMNN_BUILD)/tests -L$(BOOST_ROOT)/lib
LDLIBS=-larmnn -larmnnTfLiteParser -lboost_system -lboost_filesystem -lboost_program_options

all: mobilenetv1_quant_tflite

mobilenetv1_quant_tflite: mobilenetv1_quant_tflite.cpp inference_test_image.cpp utils.cpp
	$(CXX) $(CPPFLAGS) $(CFLAGS) $^ -o $@ $(LDFLAGS) $(LDLIBS)

clean:
	-rm -f mobilenetv1_quant_tflite
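For reference, with the variables above the mobilenetv1_quant_tflite recipe is expected to expand into a single cross-compilation command along the lines of the sketch below; running `make -n mobilenetv1_quant_tflite` prints the command make would actually execute, which makes it easy to check whether the three .cpp files really reach the compiler (the error above says the compiler is reading from standard input instead):

```sh
arm-linux-gnueabihf-g++ -DARMNN_TF_LITE_PARSER \
    -I/home/lsa/armnn-pi/armnn/include -I/home/lsa/armnn-pi/armnn/src/backends \
    -I/home/lsa/armnn-pi/armnn/src/armnnUtils -I/home/lsa/armnn-pi/armnn/tests \
    -I/home/lsa/armnn-pi/boost/include \
    -Wall -O3 -std=c++14 -fPIE \
    mobilenetv1_quant_tflite.cpp inference_test_image.cpp utils.cpp \
    -o mobilenetv1_quant_tflite \
    -pie -L/home/lsa/armnn-pi/armnn/build -L/home/lsa/armnn-pi/armnn/build/tests \
    -L/home/lsa/armnn-pi/boost/lib \
    -larmnn -larmnnTfLiteParser -lboost_system -lboost_filesystem -lboost_program_options
```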

Any suggestions please?