This repo was forked and modified from hollance/YOLO-CoreML-MPSNNGraph. Some changes I made:
YOLO is an object detection network. It detects multiple objects in an image and draws bounding boxes around them. Read hollance's blog post about YOLO to learn more about how it works.
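To make the idea concrete, here is a minimal sketch of what a single YOLO detection amounts to: a bounding box, a class label, and a confidence score, with low-confidence boxes filtered out. The type and field names are illustrative, not the repo's actual Swift types.

```python
from dataclasses import dataclass

# Hypothetical shape of one YOLO detection (names are illustrative,
# not the types used in this repo).
@dataclass
class Detection:
    x: float          # box center x, normalized to [0, 1]
    y: float          # box center y, normalized to [0, 1]
    width: float
    height: float
    label: str
    confidence: float

def keep_confident(detections, threshold=0.5):
    """Discard detections whose confidence falls below the threshold."""
    return [d for d in detections if d.confidence >= threshold]

detections = [
    Detection(0.5, 0.5, 0.2, 0.3, "dog", 0.92),
    Detection(0.1, 0.8, 0.1, 0.1, "cat", 0.31),
]
print([d.label for d in keep_confident(detections)])  # ['dog']
```

The real app does this plus non-maximum suppression to merge overlapping boxes for the same object.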
In this repo you'll find:
To run the app:
The reported "elapsed" time is how long it takes the YOLO neural net to process a single image. The FPS is the actual throughput achieved by the app.
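The relationship between the two numbers can be sketched as follows: the elapsed time per inference puts a hard ceiling on throughput, and the measured FPS sits below that ceiling because the app also spends time capturing frames and drawing boxes.

```python
def max_fps(elapsed_seconds):
    """Upper bound on throughput if inference were the only cost."""
    return 1.0 / elapsed_seconds

# Example: 25 ms per inference caps throughput at 40 FPS; the FPS the
# app reports will be lower, since frame capture and rendering the
# bounding boxes also take time each frame.
print(max_fps(0.025))  # 40.0
```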
NOTE: Running these kinds of neural networks eats up a lot of battery power. The app can put a limit on the number of times per second it runs the neural net. You can change this in `setUpCamera()` by changing the line `videoCapture.fps = 50` to a smaller number.
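The throttling idea behind that setting can be sketched in isolation: run the neural net at most `fps` times per second by skipping frames that arrive too soon after the previous inference. The gating in the repo lives in its Swift `VideoCapture` class; this is a standalone Python illustration, not the app's code.

```python
class FrameThrottle:
    """Admit at most `fps` frames per second; skip the rest."""

    def __init__(self, fps):
        self.min_interval = 1.0 / fps
        self.last_run = None

    def should_process(self, timestamp):
        """True if enough time has passed since the last admitted frame."""
        if self.last_run is None or timestamp - self.last_run >= self.min_interval:
            self.last_run = timestamp
            return True
        return False

throttle = FrameThrottle(fps=10)           # run the net at most 10x per second
frames = [i * 0.033 for i in range(10)]    # timestamps from a ~30 FPS camera
processed = [t for t in frames if throttle.should_process(t)]
print(len(processed))                      # only a third of the frames pass
```

Lowering the fps this way trades detection latency for battery life: the camera keeps running at full rate, but most frames never reach the neural net.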
NOTE: You don't need to convert the models yourself. Everything you need to run the demo apps is included in the Xcode projects already.
The model is converted from a Keras .h5 model. Follow the Quick Start guide of keras-yolo3 to get a YOLOv3 Keras .h5 model, then use coreml.py to convert the .h5 model to a Core ML model.