A Flutter plugin for integrating Ultralytics YOLO computer vision models into your mobile apps. The plugin supports both Android and iOS platforms, and provides APIs for object detection and image classification.
| Feature | Android | iOS |
| --- | --- | --- |
| Detection | ✅ | ✅ |
| Classification | ✅ | ✅ |
| Pose Estimation | ❌ | ❌ |
| Segmentation | ❌ | ❌ |
| OBB Detection | ❌ | ❌ |
Before proceeding further or reporting new issues, please ensure you read this documentation thoroughly.
Ultralytics YOLO is designed specifically for mobile platforms, targeting iOS and Android apps. The plugin leverages Flutter Platform Channels for communication between the client (app/plugin) and host (platform), ensuring seamless integration and responsiveness. All processing related to the Ultralytics YOLO APIs is handled natively on each platform, with the plugin serving as a bridge between your app and Ultralytics YOLO.
Before you can use Ultralytics YOLO in your app, you must export the required models. The exported models come as `.tflite` (Android) and `.mlmodel` (iOS) files, which you then include in your app. Use the Ultralytics YOLO CLI to export the models.
IMPORTANT: The parameters in the export commands are mandatory. The Ultralytics YOLO plugin for Flutter only supports models exported using these commands; if you use different parameters, the plugin will not work as expected. We're working on adding support for more models and parameters in the future.

The following commands are used to export the models:
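As a sketch of what these exports look like with the standard Ultralytics CLI (the `yolov8n.pt` model name and image size are illustrative, and classification models use the `-cls` variants such as `yolov8n-cls.pt`; confirm the exact parameters against the plugin's documentation before exporting):

```bash
# Android: export a TFLite model (illustrative parameters)
yolo export format=tflite model=yolov8n.pt imgsz=320 int8=True

# iOS: export a Core ML model (illustrative parameters)
yolo export format=coreml model=yolov8n.pt imgsz=320 half=True nms=True
```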
After exporting the models, you will get the `.tflite` and `.mlmodel` files. Include these files in your app's `assets` folder and declare them in your `pubspec.yaml` so they are bundled with the app.
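For example, assuming the exported files sit in an `assets/` folder at the project root (the file names below are hypothetical), the `pubspec.yaml` declaration would look like this:

```yaml
flutter:
  assets:
    - assets/yolov8n_int8.tflite # hypothetical exported model file
    - assets/metadata.yaml       # hypothetical metadata file
```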
Ensure that you have the necessary permissions to access the camera and storage: on Android, declare the camera (and, if required, storage) permissions in `AndroidManifest.xml`; on iOS, add an `NSCameraUsageDescription` entry (and, if you read images from the photo library, `NSPhotoLibraryUsageDescription`) to `Info.plist`.
Create a predictor object using the `LocalYoloModel` class. This class requires the following parameters:
```dart
final model = LocalYoloModel(
  id: id,
  task: Task.detect /* or Task.classify */,
  format: Format.tflite /* or Format.coreml */,
  modelPath: modelPath,
  metadataPath: metadataPath,
);
```
For object detection:

```dart
final objectDetector = ObjectDetector(model: model);
await objectDetector.loadModel();
```

For image classification:

```dart
final imageClassifier = ImageClassifier(model: model);
await imageClassifier.loadModel();
```
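`modelPath` and `metadataPath` must point to files on the device's filesystem rather than to Flutter asset keys. The helper below is a minimal sketch of one way to obtain such paths, assuming the `path_provider` package is added to the project; the function name and asset names are hypothetical:

```dart
import 'dart:io';

import 'package:flutter/services.dart' show rootBundle;
import 'package:path_provider/path_provider.dart';

/// Copies a bundled asset to the app's support directory and returns the
/// resulting file path (hypothetical helper, not part of the plugin).
Future<String> copyAssetToFile(String assetKey) async {
  final byteData = await rootBundle.load(assetKey);
  final directory = await getApplicationSupportDirectory();
  final file = File('${directory.path}/${assetKey.split('/').last}');
  await file.writeAsBytes(
    byteData.buffer.asUint8List(byteData.offsetInBytes, byteData.lengthInBytes),
  );
  return file.path;
}

// Usage (asset names are illustrative):
// final modelPath = await copyAssetToFile('assets/yolov8n_int8.tflite');
// final metadataPath = await copyAssetToFile('assets/metadata.yaml');
```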
The `UltralyticsYoloCameraPreview` widget is used to display the camera preview and the results of the prediction.
```dart
final _controller = UltralyticsYoloCameraController();

UltralyticsYoloCameraPreview(
  predictor: predictor, // Your prediction model data
  controller: _controller, // Ultralytics camera controller
  // For showing any widget on screen at the time of model loading
  loadingPlaceholder: Center(
    child: Wrap(
      direction: Axis.vertical,
      crossAxisAlignment: WrapCrossAlignment.center,
      children: [
        const CircularProgressIndicator(
          color: Colors.white,
          strokeWidth: 2,
        ),
        const SizedBox(height: 20),
        Text(
          'Loading model...',
          style: theme.typography.base.copyWith(
            color: Colors.white,
            fontSize: 14,
          ),
        ),
      ],
    ),
  ),
);
```
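As a usage sketch (not taken from the plugin docs), the preview can be embedded in an ordinary screen; the `CameraScreen` widget, the import path, and the bare `loadingPlaceholder` below are assumptions for illustration:

```dart
import 'package:flutter/material.dart';
// Import path is an assumption; check the package's exported libraries.
import 'package:ultralytics_yolo/ultralytics_yolo.dart';

class CameraScreen extends StatefulWidget {
  const CameraScreen({super.key, required this.predictor});

  final ObjectDetector predictor;

  @override
  State<CameraScreen> createState() => _CameraScreenState();
}

class _CameraScreenState extends State<CameraScreen> {
  final _controller = UltralyticsYoloCameraController();

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      body: UltralyticsYoloCameraPreview(
        predictor: widget.predictor,
        controller: _controller,
        loadingPlaceholder: const Center(child: CircularProgressIndicator()),
      ),
    );
  }
}
```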
Use the `detect` or `classify` methods to get the results of the prediction on an image.
```dart
objectDetector.detect(imagePath: imagePath)
```

or

```dart
imageClassifier.classify(imagePath: imagePath)
```
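For instance, here is a minimal sketch wrapping both calls in an async function (the function name is illustrative, and the concrete result types are not spelled out in this section, so the results are simply printed):

```dart
Future<void> predictOnImage(
  ObjectDetector objectDetector,
  ImageClassifier imageClassifier,
  String imagePath,
) async {
  // Detection: bounding boxes and classes for the given image file.
  final detections = await objectDetector.detect(imagePath: imagePath);
  print(detections);

  // Classification: class scores for the same image.
  final classifications = await imageClassifier.classify(imagePath: imagePath);
  print(classifications);
}
```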
Ultralytics thrives on community collaboration; we immensely value your involvement! We urge you to peruse our Contributing Guide for detailed insights on how you can participate. Don't forget to share your feedback with us by contributing to our Survey. A heartfelt thank you 🙏 goes out to everyone who has already contributed!
Ultralytics presents two distinct licensing paths to accommodate a variety of scenarios:

- **AGPL-3.0 License**: This OSI-approved open-source license is ideal for students and enthusiasts, promoting open collaboration and knowledge sharing.
- **Enterprise License**: Designed for commercial use, this license permits seamless integration of Ultralytics software and AI models into commercial goods and services.
For bugs or feature suggestions pertaining to Ultralytics, please lodge an issue via GitHub Issues. You're also invited to participate in our Discord community to engage in discussions and seek advice!