googlesamples / arcore-ml-sample

Apache License 2.0

trying to use STREAM_MODE #7

Open A7medAbdien opened 2 years ago

A7medAbdien commented 2 years ago

Hello there, I would like to know if you have a version of this app that uses STREAM_MODE. If not, I would like to ask how I might do such a thing if it's possible, and why not if it isn't. Kind regards, a7med

A7medAbdien commented 2 years ago

I tried to implement Google's machine learning in ARCore, but using Sceneform rather than OpenGL, and in STREAM_MODE rather than SINGLE_IMAGE_MODE.
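For what it's worth, switching the detector itself to streaming is just a configuration change in ML Kit (a sketch using the standard object-detection options; whether the rest of the pipeline keeps up is a separate question):

```kotlin
import com.google.mlkit.vision.objects.ObjectDetection
import com.google.mlkit.vision.objects.defaults.ObjectDetectorOptions

// STREAM_MODE is meant for live camera feeds: it favors low latency
// and tracks objects across consecutive frames.
val options = ObjectDetectorOptions.Builder()
    .setDetectorMode(ObjectDetectorOptions.STREAM_MODE)
    .enableClassification() // optional coarse category labels
    .build()

val detector = ObjectDetection.getClient(options)
```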

I tried two approaches to get the camera scene (view/scene/cameraLiveData; I am not sure what to call it):

  1. Acquiring the camera image

    private fun onUpdateFrame(frameTime: FrameTime?) {
        // ! --------------------------- get the recent frame
        val frame = arFragment!!.arSceneView.arFrame ?: return

        val cameraImage = frame.tryAcquireCameraImage()
        if (cameraImage != null) {
            Log.d("my_test", cameraImage.toString())
            // -------------------------- pass to the ML Kit analyzer
            analyzer.analyze(cameraImage, 180)
            cameraImage.close()
        }
    }
  2. Capturing the arSceneView

    private fun onUpdateFrame(frameTime: FrameTime?) {
        val view: ArSceneView = arFragment!!.arSceneView
        if (view.width > 0 && view.height > 0) {
            val bitmap = Bitmap.createBitmap(
                view.width,
                view.height,
                Bitmap.Config.ARGB_8888
            )
            PixelCopy.request(view, bitmap, { copyResult ->
                if (copyResult == PixelCopy.SUCCESS) {
                    Log.d("my_frame", "Copying ArFragment view.")
                    // -------------------------- pass to the ML Kit analyzer
                    analyzer.analyze(bitmap, 180)
                } else {
                    Log.e("my_frame", "Failed to copy ArFragment view.")
                }
            }, callbackHandler)
        }
    }

Analyzers

The analyzer for the first approach (acquiring the image):

fun analyze(image: Image, imageRotation: Int) {
    // Use the rotation passed in by the caller instead of hardcoding it
    val inputImage = InputImage.fromMediaImage(image, imageRotation)
    detector.process(inputImage).addOnSuccessListener { obj ->
        Log.d("my_obj", obj.size.toString())
        for (result in obj) {
            Log.d(
                "my_re",
                (result.boundingBox.exactCenterX().toInt() to result.boundingBox.exactCenterY()
                    .toInt()).toString()
            )
        }
    }.addOnFailureListener { e ->
        Log.e("my_dete", e.toString())
    }
}

It fails with the following error:

com.google.mlkit.common.MlKitException: Internal error has occurred when executing ML Kit tasks
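That exception is often format-related: ARCore hands back a YUV_420_888 `Image`, and one workaround (an untested sketch, not something from this sample) is to repack the planes into an NV21 byte array and build the `InputImage` with `InputImage.fromByteArray` instead of `fromMediaImage`. The core interleaving, assuming pixel stride 1 and no row padding:

```kotlin
// Interleave planar Y/U/V data into NV21: the full Y plane first,
// then alternating V and U bytes (NV21 chroma order is V, U).
fun yuvPlanesToNv21(y: ByteArray, u: ByteArray, v: ByteArray): ByteArray {
    val nv21 = ByteArray(y.size + u.size + v.size)
    y.copyInto(nv21, 0)
    var pos = y.size
    for (i in v.indices) {
        nv21[pos++] = v[i]
        nv21[pos++] = u[i]
    }
    return nv21
}
```

The result can then be wrapped with `InputImage.fromByteArray(nv21, width, height, rotation, InputImage.IMAGE_FORMAT_NV21)`. Note that real `Image` planes can have a pixel stride of 2 and row padding, which a full converter would need to handle.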

The analyzer for the second approach (capturing the arSceneView):

fun analyze(image: Bitmap, imageRotation: Int) {
    val inputImage = InputImage.fromBitmap(image, 0)
    Log.d("my_dete", detector.toString())
    detector.process(inputImage).addOnSuccessListener { obj ->
        // ------------------------ always reports a size of 0
        Log.d("my_obj", obj.size.toString())
        for (result in obj) {
            Log.d(
                "my_re",
                (result.boundingBox.exactCenterX().toInt() to result.boundingBox.exactCenterY()
                    .toInt()).toString()
            )
        }
    }.addOnFailureListener { e ->
        Log.e("my_dete", e.toString())
    }
}

Nothing gets detected. I got this approach from Shibui Yusuk.

Summary

In the first approach, I get an error that appears to be caused by the image format.

In this application you use YuvToRgbConverter and then pass the image as a bitmap, but that requires OpenGL, and I do not know how to implement streaming mode with it.
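If the blocker is only that YuvToRgbConverter is tied to the sample's OpenGL path, one GL-free fallback (a rough sketch; too slow to run on every frame, but fine for checking the pipeline) is `android.graphics.YuvImage`, which can turn an NV21 buffer into a `Bitmap` via an in-memory JPEG:

```kotlin
import android.graphics.Bitmap
import android.graphics.BitmapFactory
import android.graphics.ImageFormat
import android.graphics.Rect
import android.graphics.YuvImage
import java.io.ByteArrayOutputStream

// Convert an NV21 byte array to a Bitmap without any OpenGL:
// compress to an in-memory JPEG, then decode it back.
fun nv21ToBitmap(nv21: ByteArray, width: Int, height: Int): Bitmap {
    val yuvImage = YuvImage(nv21, ImageFormat.NV21, width, height, null)
    val out = ByteArrayOutputStream()
    yuvImage.compressToJpeg(Rect(0, 0, width, height), 90, out)
    val jpegBytes = out.toByteArray()
    return BitmapFactory.decodeByteArray(jpegBytes, 0, jpegBytes.size)
}
```

The resulting `Bitmap` could then go through `InputImage.fromBitmap` just like in the second approach.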

In the second approach, capturing the arSceneView, I don't know why I am not getting any results.