google-ai-edge / mediapipe

Cross-platform, customizable ML solutions for live and streaming media.
https://mediapipe.dev
Apache License 2.0

How to feed box tracking calculator with bitmap frames and get results in Android Mediapipe #4678

Closed devveteran closed 1 year ago

devveteran commented 1 year ago

Have I written custom code (as opposed to using a stock example script provided in MediaPipe)

None

OS Platform and Distribution

Android 11

MediaPipe Tasks SDK version

No response

Task name (e.g. Image classification, Gesture recognition etc.)

Box Tracking

Programming Language and version (e.g. C++, Python, Java)

Kotlin

Describe the actual behavior

Hello!

I've built the AAR for MediaPipe box tracking and integrated it into my Android application.
The application already has functions that read every frame from RTSP streams, the phone camera, and videos in the gallery in order to detect specific objects in them.
Now I want to feed these frames to the tracking engine and get the results (boxes).
I've been trying to figure this out for two days; what I've done so far is the following.

val imagePacket: Packet = packetCreator.createRgbaImageFrame(rgbFrameBitmap)
try {
    if (!imagePacket.isEmpty) {
        // Note: MediaPipe packet timestamps are expected in microseconds
        // and must be strictly increasing across packets.
        processor.graph.addConsumablePacketToInputStream(
            INPUT_VIDEO_STREAM_NAME,
            imagePacket,
            System.currentTimeMillis()
        )
        // addConsumablePacketToInputStream takes ownership of the packet,
        // so it must not be released again after a successful call.
    } else {
        imagePacket.release()
    }
} catch (e: Exception) {
    e.printStackTrace()
}
In addition, this code produces the error "Graph is not started because of missing streams".
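One possible contributor (an assumption on my part, not confirmed in this thread) is the timestamp: MediaPipe packet timestamps are in microseconds and must be strictly increasing, while `System.currentTimeMillis()` returns millisecond-resolution wall-clock time that can repeat across consecutive frames. A minimal, MediaPipe-free sketch of a helper that produces safe packet timestamps (the class name `MonotonicTimestamper` is mine, not a MediaPipe API):

```kotlin
// Hypothetical helper, not part of MediaPipe: converts millisecond wall-clock
// readings into strictly increasing microsecond timestamps suitable for
// feeding packets into a graph.
class MonotonicTimestamper {
    private var lastUs: Long = Long.MIN_VALUE

    // Returns a microsecond timestamp that is guaranteed to be strictly
    // greater than any timestamp returned before, even when the same
    // millisecond reading is passed in twice.
    fun next(currentMillis: Long): Long {
        var candidate = currentMillis * 1000
        if (candidate <= lastUs) {
            candidate = lastUs + 1
        }
        lastUs = candidate
        return candidate
    }
}
```

Each frame would then be submitted with `timestamper.next(System.currentTimeMillis())` instead of the raw millisecond value.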

Also, I'm not sure how to set INPUT_VIDEO_STREAM_NAME properly.
According to the examples, the hand tracking solution uses stream names such as "hand_landmarks", etc.
Where can I find the available values for these input/output stream names?
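For reference, the stream names are not fixed per task: they are declared at the top of the graph's `.pbtxt` config that the AAR was built from (the files under `mediapipe/graphs/` in the repository). As a hedged sketch only (the exact names depend on which graph variant you compiled), a mobile tracking graph typically begins with declarations like:

```
# Sketch of the top of a MediaPipe graph config (.pbtxt).
# The actual names for your AAR come from the graph file you built.

# Images coming into the graph.
input_stream: "input_video"
# Annotated images leaving the graph.
output_stream: "output_video"
```

Whatever string appears in `input_stream` is what must be passed as the stream name when adding packets, and the `output_stream` names are what you attach packet callbacks to.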
Thank you in advance!

Describe the expected behaviour

Feed bitmap frames to the tracking engine and get results.

Standalone code/steps you may have used to try to get what you need

val imagePacket: Packet = packetCreator.createRgbaImageFrame(rgbFrameBitmap)
try {
    if (!imagePacket.isEmpty) {
        processor.graph.addConsumablePacketToInputStream(
            INPUT_VIDEO_STREAM_NAME,
            imagePacket,
            System.currentTimeMillis()
        )
        // Ownership of the packet transfers to the graph here, so it is
        // not released again after a successful call.
    } else {
        imagePacket.release()
    }
} catch (e: Exception) {
    e.printStackTrace()
}

Other info / Complete Logs

No response

kuaashish commented 1 year ago

@devveteran,

We have completely ended support for the Box Tracking solution, as noted here. However, the libraries, documentation, and source code for all the MediaPipe Legacy Solutions will continue to be available in our GitHub repository and through library distribution services such as Maven and NPM.

You can continue to use those legacy solutions in your applications if you choose, but we are no longer providing support for them.

Thank you!

github-actions[bot] commented 1 year ago

This issue has been marked stale because it has had no recent activity for 7 days. It will be closed if no further activity occurs. Thank you.

github-actions[bot] commented 1 year ago

This issue was closed due to lack of activity after being marked stale for the past 7 days.

google-ml-butler[bot] commented 1 year ago

Are you satisfied with the resolution of your issue?

vidueirof commented 1 month ago

@devveteran have you solved this problem? I'm running the face detector, but the RTSP stream is really slow compared to the device camera: roughly 1000 ms per frame versus roughly 100 ms.