Closed cubukcum closed 4 days ago
I also cloned your example project. I was looking for an example that detects objects and draws on a live camera feed. However, it seems that the project only displays the camera feed, FPS, and milliseconds on the camera tab. On the gallery tab, it displays the selected image on the screen but does not perform any other actions. The application runs without any errors though, and here are some of the outputs I get in the console:
E/ImageTextureRegistryEntry(32001): Dropping PlatformView Frame
D/YuvToJpegEncoder(32001): onFlyCompress
D/YuvToJpegEncoder(32001): onFlyCompress
E/ImageTextureRegistryEntry(32001): Dropping PlatformView Frame
D/YuvToJpegEncoder(32001): onFlyCompress
D/YuvToJpegEncoder(32001): onFlyCompress
D/YuvToJpegEncoder(32001): onFlyCompress
D/YuvToJpegEncoder(32001): onFlyCompress
D/YuvToJpegEncoder(32001): onFlyCompress
D/YuvToJpegEncoder(32001): onFlyCompress
E/ImageTextureRegistryEntry(32001): Dropping PlatformView Frame
D/YuvToJpegEncoder(32001): onFlyCompress
D/YuvToJpegEncoder(32001): onFlyCompress
D/YuvToJpegEncoder(32001): onFlyCompress
D/YuvToJpegEncoder(32001): onFlyCompress
D/SurfaceViewImpl(32001): Surface destroyed.
D/SurfaceViewImpl(32001): Surface invalidated androidx.camera.core.SurfaceRequest@2db69c7
D/DeferrableSurface(32001): surface closed, useCount=1 closed=true androidx.camera.core.SurfaceRequest$2@890dbf4
W/WindowOnBackDispatcher(32001): sendCancelIfRunning: isInProgress=falsecallback=android.view.ViewRootImpl$$ExternalSyntheticLambda17@3e42dcd
D/YuvToJpegEncoder(32001): onFlyCompress
E/BufferQueueProducer(32001): [SurfaceView[]#18(BLAST Consumer)18](id:7d0100000031,api:4,p:1117,c:32001) queueBuffer: BufferQueue has been abandoned
E/BufferQueueProducer(32001): [SurfaceView[]#18(BLAST Consumer)18](id:7d0100000031,api:4,p:1117,c:32001) dequeueBuffer: BufferQueue has been abandoned
E/ImageTextureRegistryEntry(32001): Dropping PlatformView Frame
D/YuvToJpegEncoder(32001): onFlyCompress
E/BufferQueueProducer(32001): [SurfaceView[]#18(BLAST Consumer)18](id:7d0100000031,api:4,p:1117,c:32001) queueBuffer: BufferQueue has been abandoned
E/BufferQueueProducer(32001): [SurfaceView[]#18(BLAST Consumer)18](id:7d0100000031,api:4,p:1117,c:32001) queueBuffer: BufferQueue has been abandoned
D/YuvToJpegEncoder(32001): onFlyCompress
E/BufferQueueProducer(32001): [SurfaceView[]#18(BLAST Consumer)18](id:7d0100000031,api:4,p:1117,c:32001) queueBuffer: BufferQueue has been abandoned
E/BufferQueueProducer(32001): [SurfaceView[]#18(BLAST Consumer)18](id:7d0100000031,api:4,p:1117,c:32001) queueBuffer: BufferQueue has been abandoned
D/YuvToJpegEncoder(32001): onFlyCompress
E/BufferQueueProducer(32001): [SurfaceView[]#18(BLAST Consumer)18](id:7d0100000031,api:4,p:1117,c:32001) queueBuffer: BufferQueue has been abandoned
D/YuvToJpegEncoder(32001): onFlyCompress
The predictor is working fine.
I've been analyzing the UltralyticsYoloCameraPreview widget and found that the following switch statement is the cause of the bounding boxes not being shown:
switch (widget.predictor.runtimeType) {
case ObjectDetector _:
return StreamBuilder(
stream: (widget.predictor! as ObjectDetector)
.detectionResultStream,
builder: (
BuildContext context,
AsyncSnapshot<List<DetectedObject?>?> snapshot,
) {
if (snapshot.data == null) return Container();
return CustomPaint(
painter: ObjectDetectorPainter(
snapshot.data! as List<DetectedObject>,
widget.boundingBoxesColorList,
widget.controller.value.strokeWidth,
),
);
},
);
case ImageClassifier _:
return widget.classificationOverlay ??
StreamBuilder(
stream: (widget.predictor! as ImageClassifier)
.classificationResultStream,
builder: (context, snapshot) {
final classificationResults = snapshot.data;
if (classificationResults == null ||
classificationResults.isEmpty) {
return Container();
}
return ClassificationResultOverlay(
classificationResults: classificationResults,
);
},
);
default:
return Container();
}
That pattern matching does not work as expected (see this issue): the switch matches `widget.predictor.runtimeType`, which is a `Type` object, against object patterns such as `ObjectDetector _`, so no case ever succeeds. I've changed that code in a local copy of the plugin to this simple if-else statement:
if (widget.predictor is ObjectDetector) {
return StreamBuilder(
stream: (widget.predictor! as ObjectDetector)
.detectionResultStream,
builder: (
BuildContext context,
AsyncSnapshot<List<DetectedObject?>?> snapshot,
) {
if (snapshot.data == null) return Container();
return CustomPaint(
painter: ObjectDetectorPainter(
snapshot.data! as List<DetectedObject>,
widget.boundingBoxesColorList,
widget.controller.value.strokeWidth,
),
);
},
);
} else if (widget.predictor is ImageClassifier) {
return widget.classificationOverlay ??
StreamBuilder(
stream: (widget.predictor! as ImageClassifier)
.classificationResultStream,
builder: (context, snapshot) {
final classificationResults = snapshot.data;
if (classificationResults == null ||
classificationResults.isEmpty) {
return Container();
}
return ClassificationResultOverlay(
classificationResults: classificationResults,
);
},
);
} else {
return Container();
}
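For reference, the behavior can be reproduced outside the plugin. Below is a minimal standalone sketch (the `Predictor` hierarchy here is hypothetical and stands in for the plugin's actual classes) showing why a switch on `runtimeType` with object patterns never matches, while a plain `is` check does:

```dart
// Hypothetical stand-ins for the plugin's predictor classes.
abstract class Predictor {}

class ObjectDetector extends Predictor {}

class ImageClassifier extends Predictor {}

String describeViaSwitch(Predictor predictor) {
  // `predictor.runtimeType` is a Type object. An object pattern like
  // `ObjectDetector _` asks "is the matched value an ObjectDetector?",
  // and a Type is never an ObjectDetector, so no case ever matches.
  switch (predictor.runtimeType) {
    case ObjectDetector _:
      return 'detector';
    case ImageClassifier _:
      return 'classifier';
    default:
      return 'unknown';
  }
}

String describeViaIs(Predictor predictor) {
  // Testing the value itself works as intended.
  if (predictor is ObjectDetector) return 'detector';
  if (predictor is ImageClassifier) return 'classifier';
  return 'unknown';
}

void main() {
  print(describeViaSwitch(ObjectDetector())); // prints "unknown"
  print(describeViaIs(ObjectDetector()));     // prints "detector"
}
```

An equivalent fix that keeps the switch would be to match on the value directly, e.g. `switch (widget.predictor) { case ObjectDetector(): ... }`, rather than on its `runtimeType`.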
And it started working OK. I hope this change can be made soon, so my app can go back to using the latest version of this package.
@fstizza hello 👋,
Thank you very much for this detailed analysis and for proposing a solution! It's great to see active community involvement helping to refine and improve the tool.
You're right, the issue you've encountered with Dart pattern matching and the switch statement not working as expected for type checks is acknowledged. Your proposed use of if-else statements is a good workaround and aligns well with the recommended solutions for this Dart language issue.
We'll review the changes you've suggested and consider integrating them into the next update of the plugin. This kind of feedback is invaluable and helps us ensure that the library remains robust and user-friendly. Please keep an eye on the upcoming versions for improvements and fixes!
If you have other suggestions or need further assistance, feel free to share. Happy coding! 😄
I'm really glad I could help!
@fstizza We're equally glad for your contribution, and it's fantastic to have such supportive community members! 😊 Your insights are greatly appreciated, and they play a crucial role in refining and improving our library. Keep the suggestions coming and happy coding! 🚀
@fstizza would you check my code and tell me what's wrong with it? I'm trying to run a simple model. I changed the switch statement as you suggested, but I'm still getting no boxes.
👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.
For additional resources and information, please see the links below:
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!
Thank you for your contributions to YOLO 🚀 and Vision AI ⭐
Below is my code. It is intended to run my object detection model on a live camera feed, but it is not functioning as expected, and it does not produce any errors. The camera opens with the necessary permissions and displays a live view, but no detections are drawn on the screen. I would appreciate it if you could review it and let me know what might be causing this issue.