google-ai-edge / mediapipe

Cross-platform, customizable ML solutions for live and streaming media.
https://mediapipe.dev
Apache License 2.0

Issue with single bitmap image on face effect #1352

Closed. Arul2962 closed this issue 3 years ago.

Arul2962 commented 3 years ago

I am trying to send a single bitmap image, rather than the camera/video stream, into the graph for the face effect.

I get the input image from the user, who picks it from the gallery.

I convert the resulting ImageFrame to a GpuBuffer using the ImageFrameToGpuBufferCalculator.
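
Roughly, this is how the bitmap gets pushed into the graph (a simplified sketch: the stream name matches the graph below, `StaticImageFeeder` is just an illustrative helper, and the timestamp handling is my assumption):

Java (sketch):

import android.graphics.Bitmap;
import com.google.mediapipe.framework.AndroidPacketCreator;
import com.google.mediapipe.framework.Graph;
import com.google.mediapipe.framework.Packet;

final class StaticImageFeeder {
  private final Graph graph;
  private final AndroidPacketCreator packetCreator;

  StaticImageFeeder(Graph graph) {
    this.graph = graph;
    this.packetCreator = new AndroidPacketCreator(graph);
  }

  // Pushes a single Bitmap (ARGB_8888) into the graph. The
  // ImageFrameToGpuBufferCalculator node in the graph below converts the
  // resulting ImageFrame packet into the GpuBuffer the GPU pipeline expects.
  void sendBitmap(Bitmap bitmap, long timestampUs) {
    Packet imagePacket = packetCreator.createRgbImageFrame(bitmap);
    // Ownership of the packet transfers to the graph on success; timestamps
    // must be monotonically increasing across calls.
    graph.addConsumablePacketToInputStream(
        "bitmap_image_stream", imagePacket, timestampUs);
  }
}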

The build compiles successfully, but the output image is not displayed; only the "tap to change face effect" text appears on screen.

Please guide me through the implementation procedure to render the face effect on a single image.

Checking the log, there is only one error, and it repeats endlessly. I faced the same error with hair segmentation, where the output still rendered properly; I also raised an issue for that: #1328.

Logcat error:

2020-11-24 18:34:11.657 500-1253/? E/FrameProcessor: Mediapipe error: 
    com.google.mediapipe.framework.MediaPipeException: internal: 
        at com.google.mediapipe.framework.Graph.nativeMovePacketToInputStream(Native Method)
        at com.google.mediapipe.framework.Graph.addConsumablePacketToInputStream(Graph.java:360)
        at com.google.mediapipe.components.FrameProcessor.onNewFrame(FrameProcessor.java:442)
        at com.google.mediapipe.components.ExternalTextureConverter$RenderThread.renderNext(ExternalTextureConverter.java:350)
        at com.google.mediapipe.components.ExternalTextureConverter$RenderThread.lambda$onFrameAvailable$0$ExternalTextureConverter$RenderThread(ExternalTextureConverter.java:295)
        at com.google.mediapipe.components.ExternalTextureConverter$RenderThread$$Lambda$0.run(Unknown Source:4)
        at android.os.Handler.handleCallback(Handler.java:873)
        at android.os.Handler.dispatchMessage(Handler.java:99)
        at android.os.Looper.loop(Looper.java:224)
        at com.google.mediapipe.glutil.GlThread.run(GlThread.java:141)
2020-11-24 18:34:11.675 1159-1296/? D/WindowManager: intercept win = Window{5b55d2 u0 com.google.mediapipe.apps.staticimagehairsegmentation/com.google.mediapipe.apps.staticimagehairsegmentation.MainActivity}
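
Reading the trace, the failing addConsumablePacketToInputStream call originates on ExternalTextureConverter's render thread via FrameProcessor.onNewFrame, so the camera/texture path still seems to be feeding the graph while the bitmap path writes to it too. A sketch of ruling that out (assuming the converter/processor wiring from the stock example app; `converter` is the ExternalTextureConverter instance):

Java (sketch):

// Closing the converter stops its render thread, leaving the manually
// pushed bitmap packet as the only producer for the graph's input stream.
converter.close();
// Then feed the bitmap (see the sketch above) with a timestamp greater than
// any frame timestamp the converter has already delivered.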

Attaching the graph for reference.

Graph:

# MediaPipe graph that applies a face effect to a single input image.

# CPU image. (ImageFrame)
input_stream: "bitmap_image_stream"

# Boolean flag, which indicates whether the Facepaint effect is selected. (bool)
#
# If `true`, the Facepaint effect will be rendered.
# If `false`, the Glasses effect will be rendered.
input_stream: "is_facepaint_effect_selected"

# Output image with rendered results. (GpuBuffer)
output_stream: "output_video"

# A list of geometry data for a single detected face.
#
# NOTE: there will not be an output packet in this stream for this particular
# timestamp if no faces are detected.
#
# (std::vector<face_geometry::FaceGeometry>)
output_stream: "multi_face_geometry"

# Converts the incoming CPU image (ImageFrame) into a GpuBuffer so that the
# rest of the GPU pipeline can consume it.
node {
  calculator: "ImageFrameToGpuBufferCalculator"
  input_stream: "bitmap_image_stream"
  output_stream: "input_video"
}

# Throttles the images flowing downstream for flow control. It passes through
# the very first incoming image unaltered, and waits for downstream nodes
# (calculators and subgraphs) in the graph to finish their tasks before it
# passes through another image. All images that come in while waiting are
# dropped, limiting the number of in-flight images in most part of the graph to
# 1. This prevents the downstream nodes from queuing up incoming images and data
# excessively, which leads to increased latency and memory usage, unwanted in
# real-time mobile applications. It also eliminates unnecessary computation,
# e.g., the output produced by a node may get dropped downstream if the
# subsequent nodes are still busy processing previous inputs.
node {
  calculator: "FlowLimiterCalculator"
  input_stream: "input_video"
  input_stream: "FINISHED:output_video"
  input_stream_info: {
    tag_index: "FINISHED"
    back_edge: true
  }
  output_stream: "throttled_input_video"
}

# Generates an environment that describes the current virtual scene.
node {
  calculator: "FaceGeometryEnvGeneratorCalculator"
  output_side_packet: "ENVIRONMENT:environment"
  node_options: {
    [type.googleapis.com/mediapipe.FaceGeometryEnvGeneratorCalculatorOptions] {
      environment: {
        origin_point_location: TOP_LEFT_CORNER
        perspective_camera: {
          vertical_fov_degrees: 63.0  # 63 degrees
          near: 1.0  # 1cm
          far: 10000.0  # 100m
        }
      }
    }
  }
}

# Subgraph that detects a single face and corresponding landmarks. The landmarks
# are also "smoothed" to achieve better visual results.
node {
  calculator: "SingleFaceSmoothLandmarkGpu"
  input_stream: "IMAGE:throttled_input_video"
  output_stream: "LANDMARKS:multi_face_landmarks"
}

# Extracts the throttled input video frame dimensions as a separate packet.
node {
  calculator: "ImagePropertiesCalculator"
  input_stream: "IMAGE_GPU:throttled_input_video"
  output_stream: "SIZE:input_video_size"
}

# Subgraph that computes face geometry from landmarks for a single face.
node {
  calculator: "FaceGeometry"
  input_stream: "MULTI_FACE_LANDMARKS:multi_face_landmarks"
  input_stream: "IMAGE_SIZE:input_video_size"
  input_side_packet: "ENVIRONMENT:environment"
  output_stream: "MULTI_FACE_GEOMETRY:multi_face_geometry"
}

# Decides whether to render the Facepaint effect based on the
# `is_facepaint_effect_selected` flag value.
node {
  calculator: "GateCalculator"
  input_stream: "throttled_input_video"
  input_stream: "multi_face_geometry"
  input_stream: "ALLOW:is_facepaint_effect_selected"
  output_stream: "facepaint_effect_throttled_input_video"
  output_stream: "facepaint_effect_multi_face_geometry"
}

# Renders the Facepaint effect.
node {
  calculator: "FaceGeometryEffectRendererCalculator"
  input_side_packet: "ENVIRONMENT:environment"
  input_stream: "IMAGE_GPU:facepaint_effect_throttled_input_video"
  input_stream: "MULTI_FACE_GEOMETRY:facepaint_effect_multi_face_geometry"
  output_stream: "IMAGE_GPU:facepaint_effect_output_video"
  node_options: {
    [type.googleapis.com/mediapipe.FaceGeometryEffectRendererCalculatorOptions] {
      effect_texture_path: "mediapipe/graphs/face_effect/data/facepaint.pngblob"
    }
  }
}

# Decides whether to render the Glasses effect based on the
# `is_facepaint_effect_selected` flag value.
node {
  calculator: "GateCalculator"
  input_stream: "throttled_input_video"
  input_stream: "multi_face_geometry"
  input_stream: "DISALLOW:is_facepaint_effect_selected"
  output_stream: "glasses_effect_throttled_input_video"
  output_stream: "glasses_effect_multi_face_geometry"
}

# Renders the Glasses effect.
node {
  calculator: "FaceGeometryEffectRendererCalculator"
  input_side_packet: "ENVIRONMENT:environment"
  input_stream: "IMAGE_GPU:glasses_effect_throttled_input_video"
  input_stream: "MULTI_FACE_GEOMETRY:glasses_effect_multi_face_geometry"
  output_stream: "IMAGE_GPU:glasses_effect_output_video"
  node_options: {
    [type.googleapis.com/mediapipe.FaceGeometryEffectRendererCalculatorOptions] {
      effect_texture_path: "mediapipe/graphs/face_effect/data/glasses.pngblob"
      effect_mesh_3d_path: "mediapipe/graphs/face_effect/data/glasses.binarypb"
    }
  }
}

# Decides which of the Facepaint or the Glasses rendered results should be sent
# as the output GPU frame.
node {
  calculator: "ImmediateMuxCalculator"
  input_stream: "facepaint_effect_output_video"
  input_stream: "glasses_effect_output_video"
  output_stream: "output_video"
}
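
For completeness: the two GateCalculator nodes only emit packets for timestamps where `is_facepaint_effect_selected` also has a packet, so the selection flag has to be sent with the same timestamp as the image (the stock app sends the bool alongside every camera frame, if I read it correctly). A sketch of pushing both packets together, reusing the same Graph/AndroidPacketCreator setup as above; `selectFacepaint` is illustrative:

Java (sketch):

// Send the image and the effect-selection flag at the same timestamp so the
// GateCalculator can pair them.
Packet imagePacket = packetCreator.createRgbImageFrame(bitmap);
Packet effectPacket = packetCreator.createBool(selectFacepaint);
graph.addConsumablePacketToInputStream("bitmap_image_stream", imagePacket, timestampUs);
graph.addConsumablePacketToInputStream("is_facepaint_effect_selected", effectPacket, timestampUs);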

Need Help

kiler222 commented 3 years ago

Have you solved that?

TomerJakobovich commented 3 years ago

> Have you solved that?

@kiler222 did you manage to solve it yourself?