This is an implementation of object detection using the Tiny YOLO v1 model on Apple's CoreML framework.
The app captures frames from your camera and performs object detection at an average of 17.8 FPS.
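For context on what the detector produces: Tiny YOLO v1 outputs a flat vector of 1470 values per frame, which the app must decode into bounding-box detections. The sketch below illustrates that decoding in plain Python, assuming the model's standard layout (a 7x7 grid, 20 VOC classes, 2 boxes per cell); it is an illustrative sketch, not the app's actual Swift post-processing code.

```python
# Sketch of decoding Tiny YOLO v1's raw 1470-value output into detections.
# Assumed layout (standard for this model):
#   [980 class probabilities | 98 box confidences | 392 box coordinates]
GRID, CLASSES, BOXES = 7, 20, 2

def decode(output, threshold=0.2):
    """Return (cell_row, cell_col, box_idx, class_idx, score) tuples."""
    assert len(output) == GRID * GRID * (CLASSES + BOXES * 5)  # 1470
    probs = output[:980]        # per-cell class probabilities (7*7*20)
    confs = output[980:1078]    # per-box objectness confidences (7*7*2)
    detections = []
    for cell in range(GRID * GRID):
        for b in range(BOXES):
            conf = confs[cell * BOXES + b]
            for c in range(CLASSES):
                # Final score = class probability * box confidence
                score = probs[cell * CLASSES + c] * conf
                if score > threshold:
                    detections.append((cell // GRID, cell % GRID, b, c, score))
    return detections
```

The last 392 values (box centre, size) would then be used to draw the surviving boxes, typically after non-maximum suppression.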
To use this app, open iOS-CoreML-MNIST.xcodeproj in Xcode 9 and run it on a device with iOS 11. (You can also use the simulator.)
In this project, I am not training YOLO from scratch but converting an already existing model to a CoreML model. If you want to create the model on your own, first set up a Python environment with the required dependencies:
$ conda create -n coreml python=2.7
$ source activate coreml
(coreml) $ conda install pandas matplotlib jupyter notebook scipy scikit-learn opencv
(coreml) $ pip install tensorflow==1.1
(coreml) $ pip install keras==1.2.2
(coreml) $ pip install h5py
(coreml) $ pip install coremltools
After installing the dependencies, run the conversion script from the ./nnet directory, using ./nnet as the master directory:
(coreml) $ sudo python convert.py
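For reference, the core of such a conversion typically looks like the following. This is a hedged sketch, not the repository's actual convert.py; the file names, input/output names, and metadata are placeholders.

```python
# Sketch of a Keras -> CoreML conversion, roughly what convert.py does.
# 'tiny_yolo_v1.h5' and the input/output names below are placeholders,
# not the repository's actual paths.
import coremltools

coreml_model = coremltools.converters.keras.convert(
    'tiny_yolo_v1.h5',          # placeholder: pretrained Keras model file
    input_names='image',
    image_input_names='image',  # treat the input tensor as an image
    image_scale=1 / 255.0,      # normalise pixel values to [0, 1]
    output_names='grid')

coreml_model.short_description = 'Tiny YOLO v1 object detector'
coreml_model.save('TinyYOLO.mlmodel')
```

The saved .mlmodel file can then be dragged into the Xcode project, where Xcode generates a Swift class for it automatically.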
I have also included a Jupyter notebook for a better understanding of the above code; it mainly covers converting the Keras model to a CoreML model and must be run with root permissions. Initialise the Jupyter notebook instance with the following command:
(coreml) $ jupyter notebook --allow-root
The converted CoreML model can be downloaded here:
If you are interested in creating the Tiny YOLO v1 model on your own, a step-by-step tutorial is available here: Link
These are the results of the app when tested on iPhone 7.
Sri Raghu Malireddi / @r4ghu