This example demonstrates loading an Object Detection model on the iOS platform and using it to perform object detection. The currently supported models are ssd_mobilenet_v1_coco, ssd_inception_v2_coco, and faster_rcnn_resnet101_coco.
First, open a terminal and enter the following command:
export TF_ROOT=/your/tensorflow/root/
Then cd to the example folder and check your TensorFlow version and the correctness of your TensorFlow root path:
bash config.sh
The config.sh script automatically checks your TensorFlow version and copies some files that are necessary for the compile process. After running config.sh, if the terminal shows the following result, you are ready for the next step:
ok=> current version: # Release 1.4.0
ok=> Ready!
Otherwise, please go to the TensorFlow official website and download the latest version of TensorFlow.
Compile the iOS dependencies:
cd $TF_ROOT
tensorflow/contrib/makefile/build_all_ios_ssd.sh
Open the project in Xcode. Then, in "tf_root.xcconfig", replace TF_ROOT with the absolute path of your TensorFlow root. Finally, add "op_inference_graph.pb" to your project folder (a loading sketch follows the notes below).
Note: If you'd like to run the other two models, download them from the above links and add the .pb file to your project.
For other model files, please check my other repo.
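For reference, here is a minimal sketch of loading a frozen graph such as "op_inference_graph.pb" into a TensorFlow session with the C++ API. The helper name LoadGraph and the surrounding structure are illustrative, not part of this repo:

#include <memory>
#include <string>
#include "tensorflow/core/framework/graph.pb.h"
#include "tensorflow/core/lib/core/errors.h"
#include "tensorflow/core/platform/env.h"
#include "tensorflow/core/public/session.h"

// Illustrative helper: load a frozen .pb graph into a new session.
// On iOS, graph_path would point at the .pb file inside the app bundle.
tensorflow::Status LoadGraph(const std::string& graph_path,
                             std::unique_ptr<tensorflow::Session>* session) {
  tensorflow::GraphDef graph_def;
  TF_RETURN_IF_ERROR(tensorflow::ReadBinaryProto(
      tensorflow::Env::Default(), graph_path, &graph_def));
  session->reset(tensorflow::NewSession(tensorflow::SessionOptions()));
  return (*session)->Create(graph_def);
}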
After updating TensorFlow to version 1.4.0, I made the following changes to make sure the example could run successfully.
Clean the previously generated libraries and objects, then recompile the TensorFlow library:
cd $TF_ROOT
rm -r tensorflow/contrib/makefile/gen/lib/ tensorflow/contrib/makefile/gen/obj/
tensorflow/contrib/makefile/build_tflib_ssd.sh
Add the following paths to the project's Header Search Paths in Xcode:
$(TF_ROOT)/tensorflow/contrib/makefile/downloads/nsync/public/
$(TF_ROOT)/tensorflow/contrib/makefile/gen/nsync
Remove the following line from the Makefile if it is present, since gpu_tracer.cc no longer exists in TensorFlow 1.4.0:
TF_CC_SRCS += tensorflow/core/platform/default/gpu_tracer.cc
Recently, Google released the TensorFlow Object Detection API, which includes a selection of multiple models. However, the API does not include an iOS implementation. Therefore, in this example, I wrote an iOS implementation of the Object Detection API, including the SSDMobilenet model. The example maintains the same functionality as the Python version of the Object Detection API. The iOS code is derived from Google's TensorFlow ios_camera_example.
You’ll need Xcode 7.3 or later.
Download the Google TensorFlow repository locally: https://github.com/tensorflow/tensorflow
If you don't have Bazel, please follow Bazel's official installation instructions: https://docs.bazel.build/versions/master/install.html
Download this repository locally and put its directory inside the tensorflow directory you just downloaded.
Follow the instructions below to download the model you want: https://github.com/tensorflow/models/blob/master/object_detection/g3doc/detection_model_zoo.md We only need the graph file, i.e. the .pb file (we chose SSDMobilenet as the example):
frozen_inference_graph.pb
Then download the label file for the model you chose: https://github.com/tensorflow/models/tree/master/object_detection/data
mscoco_label_map.pbtxt
Before you can run the project, you need to build some Bazel dependencies by following Google's instructions. If this is your first time building with Bazel, please follow the link below to configure the installation: https://www.tensorflow.org/install/install_sources#configure_the_installation
If you'd like to get the graph's input/output names, use the following commands:
bazel build tensorflow/tools/graph_transforms:summarize_graph
bazel-bin/tensorflow/tools/graph_transforms/summarize_graph --in_graph=YOUR_GRAPH_PATH/example_graph.pb
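The frozen graphs from the Object Detection API typically report image_tensor as the input and detection_boxes, detection_scores, detection_classes, and num_detections as the outputs. Here is a minimal sketch of running the graph with those names (substitute whatever summarize_graph prints for your graph; the helper name RunDetection is illustrative):

#include <vector>
#include "tensorflow/core/framework/tensor.h"
#include "tensorflow/core/public/session.h"

// Illustrative helper: run a frozen Object Detection API graph.
// image_tensor is a uint8 tensor of shape {1, height, width, 3}.
tensorflow::Status RunDetection(tensorflow::Session* session,
                                const tensorflow::Tensor& image_tensor,
                                std::vector<tensorflow::Tensor>* outputs) {
  return session->Run({{"image_tensor:0", image_tensor}},
                      {"detection_boxes:0", "detection_scores:0",
                       "detection_classes:0", "num_detections:0"},
                      {}, outputs);
}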
The Makefile is under "tensorflow/contrib/makefile/".
One of the biggest issues when building TensorFlow for iOS is missing OpKernels. You may get errors similar to the one below:
Invalid argument: No OpKernel was registered to support Op 'Equal' with these attrs. Registered devices: [CPU], Registered kernels:
<no registered kernels>
To solve these problems all at once, we use Bazel to generate an ops_to_register.h that contains all the Ops needed to load the given graph into the project. An example of command-line usage is:
bazel build tensorflow/python/tools:print_selective_registration_header
bazel-bin/tensorflow/python/tools/print_selective_registration_header \
--graphs=path/to/graph.pb > ops_to_register.h
This will generate an ops_to_register.h file in the current directory. Copy the file to "tensorflow/core/framework/". Then, when compiling TensorFlow, pass -DSELECTIVE_REGISTRATION and -DSUPPORT_SELECTIVE_REGISTRATION. See tensorflow/core/framework/selective_registration.h for more details.
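For a rough idea of what these flags do, here is a paraphrased sketch (not the verbatim source) of how tensorflow/core/framework/selective_registration.h consumes the generated header:

// Paraphrased sketch: with -DSELECTIVE_REGISTRATION the registration
// macros consult the generated header, so kernels the graph never uses
// are compiled out; without it, every kernel is registered.
#ifdef SELECTIVE_REGISTRATION
#include "ops_to_register.h"  // generated; must define the SHOULD_REGISTER_* macros
#else
#define SHOULD_REGISTER_OP(op) true
#define SHOULD_REGISTER_OP_KERNEL(clz) true
#define SHOULD_REGISTER_OP_GRADIENT true
#endif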
Each model needs an ops_to_register.h that fits it. Therefore, if you'd like to include several models in one project, first generate an ops_to_register.h for each model, then merge them all into one file. This way, you can use different models in one project without compiling the TensorFlow lib separately for each.
In this example, we provide a combined ops_to_register.h that is compatible with ssd_mobilenet_v1_coco, ssd_inception_v2_coco, and faster_rcnn_resnet101_coco.
Instead of using build_all_ios for the building process, we divide the process into several steps. First, in compile_ios_protobuf.sh, add
export MACOSX_DEPLOYMENT_TARGET="10.10"
after
set -x
set -e
Then download the dependencies and compile the protobuf library for iOS:
tensorflow/contrib/makefile/download_dependencies.sh
tensorflow/contrib/makefile/compile_ios_protobuf.sh
Then create the libtensorflow-core.a:
tensorflow/contrib/makefile/compile_ios_tensorflow.sh "-O3 -DANDROID_TYPES=ANDROID_TYPES_FULL -DSELECTIVE_REGISTRATION -DSUPPORT_SELECTIVE_REGISTRATION"
If you'd like to shorten the build time, you can instead use the "compile_ios_tensorflow_s.sh" script provided in the repository. "compile_ios_tensorflow_s.sh" only compiles two IOS_ARCHs, ARM64 and x86_64, which makes the building process much shorter. Make sure to copy the file to the "tensorflow/contrib/makefile/" directory before building. The build command then becomes:
tensorflow/contrib/makefile/compile_ios_tensorflow_s.sh "-O3 -DANDROID_TYPES=ANDROID_TYPES_FULL -DSELECTIVE_REGISTRATION -DSUPPORT_SELECTIVE_REGISTRATION"
Make sure the script has generated the following .a files:
tensorflow/contrib/makefile/gen/lib/libtensorflow-core.a
tensorflow/contrib/makefile/gen/protobuf_ios/lib/libprotobuf.a
tensorflow/contrib/makefile/gen/protobuf_ios/lib/libprotobuf-lite.a
Before you run the app, make sure to recompile libtensorflow-core.a according to the modified Makefile. Otherwise, the following error may occur at runtime:
Error adding graph to session:
No OpKernel was registered to support Op 'Less' with these attrs.
Registered devices: [CPU], Registered kernels: device='CPU';
T in [DT_FLOAT]......
Once you finish the above steps, you can run the project by clicking the build button in Xcode.
To get the label name for each detected box, you have to use the protocol buffer data structure. In the SSDMobilenet model, the label file is stored as a protocol buffer message, so you need protobuf's own functions to extract the data.
To use protocol buffers, first install protobuf:
brew install protobuf
Then follow https://developers.google.com/protocol-buffers/docs/cpptutorial to compile the proto definition. After compiling, you'll get a .h and a .cc file that contain the declarations and implementations of your classes:
example.pb.h
example.pb.cc
Finally, you can use the functions in these files to extract your label data.
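As an illustration, here is a minimal sketch of parsing the label map, assuming the classes were generated from the Object Detection API's string_int_label_map.proto (the helper name LabelForClass and the file-reading step, omitted here, are illustrative):

#include <string>
#include <google/protobuf/text_format.h>
#include "string_int_label_map.pb.h"  // generated by protoc from the API's proto

// Illustrative helper: parse the .pbtxt label map and look up a class id.
// pbtxt holds the full text of mscoco_label_map.pbtxt.
std::string LabelForClass(const std::string& pbtxt, int class_id) {
  object_detection::protos::StringIntLabelMap label_map;
  if (!google::protobuf::TextFormat::ParseFromString(pbtxt, &label_map)) {
    return "";  // not a valid label map
  }
  for (const auto& item : label_map.item()) {
    if (item.id() == class_id) {
      return item.display_name();  // e.g. "person" for class id 1
    }
  }
  return "";  // id not found
}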
If you still get errors like the following after finishing the above instructions:
Invalid argument: No OpKernel was registered to support Op 'xxx' with these attrs. Registered devices: [CPU], Registered kernels:
<no registered kernels>
Make sure you've added "-O3 -DANDROID_TYPES=ANDROID_TYPES_FULL -DSELECTIVE_REGISTRATION -DSUPPORT_SELECTIVE_REGISTRATION" when running "compile_ios_tensorflow_s.sh".
Another example of this kind of error:
Invalid argument: No OpKernel was registered to support Op 'Conv2D' with these attrs. Registered devices: [CPU], Registered kernels:
[[Node: FeatureExtractor/InceptionV2/InceptionV2/Conv2d_1a_7x7/separable_conv2d = Conv2D[T=DT_FLOAT, data_format="NHWC", padding="VALID", strides=[1, 1, 1, 1], use_cudnn_on_gpu=true](FeatureExtractor/InceptionV2/InceptionV2/Conv2d_1a_7x7/separable_conv2d/depthwise, FeatureExtractor/InceptionV2/Conv2d_1a_7x7/pointwise_weights/read)]]