google-ai-edge / mediapipe

Cross-platform, customizable ML solutions for live and streaming media.
https://ai.google.dev/edge/mediapipe
Apache License 2.0
27.11k stars 5.12k forks

Motion saliency computing is not working #1907

Closed zzmicer closed 3 years ago

zzmicer commented 3 years ago

Hello, I'm trying to visualize motion saliency points using MotionAnalysisCalculator. When I set compute_motion_saliency: true, the running process gets stuck while processing the first frame (log attached below). How can I fix this error or bug?

Graph config:

input_stream: "VIDEO:input_video"
output_stream: "VIDEO:output_video"

# Performs motion analysis on an incoming video stream.
node: {
  calculator: "MotionAnalysisCalculator"
  input_stream: "VIDEO:input_video"
  output_stream: "FLOW:region_flow"
  output_stream: "CAMERA:camera_motion"
  output_stream: "VIZ:motion_analysis_viz"

  node_options: {
    [type.googleapis.com/mediapipe.MotionAnalysisCalculatorOptions]: {
      analysis_options {
        analysis_policy: ANALYSIS_POLICY_CAMERA_MOBILE

        saliency_options {
          scale_weight_by_flow_magnitude: true
          use_only_foreground_regions: true
        }

        visualization_options {
          visualize_salient_points: true
          visualize_region_flow_features: true
          foreground_jet_coloring: true
          visualize_stats: false
        }

        flow_options {
          fast_estimation_min_block_size: 100
          top_inlier_sets: 1
          frac_inlier_error_threshold: 3e-3
          verification_distance: 5.0
          verify_long_feature_acceleration: true
          verify_long_feature_trigger_ratio: 0.1
          tracking_options {
            max_features: 500
            adaptive_extraction_levels: 2
            min_eig_val_settings {
              adaptive_lowest_quality_level: 2e-4
            }
            klt_tracker_implementation: KLT_OPENCV
          }
        }
        compute_motion_saliency: true
      }
    }
  }
}

# Reads optical flow fields defined in
# mediapipe/framework/formats/motion/optical_flow_field.h,
# returns a VideoFrame with 2 channels (v_x and v_y), each channel is quantized
# to 0-255.
node: {
  calculator: "FlowPackagerCalculator"
  input_stream: "FLOW:region_flow"
  input_stream: "CAMERA:camera_motion"
  output_stream: "TRACKING:tracking_data"

  node_options: {
    [type.googleapis.com/mediapipe.FlowPackagerCalculatorOptions]: {
      flow_packager_options: {
        binary_tracking_data_support: false
      }
    }
  }
}

# Tracks box positions over time.
node: {
  calculator: "BoxTrackerCalculator"
  input_stream: "TRACKING:tracking_data"
  input_stream: "VIDEO:motion_analysis_viz"
  output_stream: "VIZ:output_viz"
  output_stream: "BOXES:output_boxes"

  node_options: {
    [type.googleapis.com/mediapipe.BoxTrackerCalculatorOptions]: {
      tracker_options: {
        track_step_options {
          track_object_and_camera: true
          tracking_degrees: TRACKING_DEGREE_OBJECT_SCALE
          inlier_spring_force: 0.0
          static_motion_temporal_ratio: 3e-2
        }
      }
      visualize_tracking_data: false
      visualize_state: true
      visualize_internal_state: true
      streaming_track_data_cache_size: 100
    }
  }
}

node: {
  calculator: "TrackedDetectionManagerCalculator"
  input_stream: "TRACKING_BOXES:output_boxes"
  output_stream: "DETECTIONS:tracked_detections"
}

node {
  calculator: "RendererSubgraphCpu"
  input_stream: "IMAGE:output_viz"
  input_stream: "DETECTIONS:tracked_detections"
  output_stream: "IMAGE:output_video"
}

(Screenshot attached: 2021-04-19 14-56-35)

LOG:

I20210419 14:52:31.599285  2348 demo_run_graph_main.cc:54] Initialize the calculator graph.
I20210419 14:52:31.600528  2348 demo_run_graph_main.cc:58] Initialize the camera or load the video.
[ WARN:0] global /home/zmicer/opencv_build/opencv/modules/videoio/src/cap_gstreamer.cpp (1081) open OpenCV | GStreamer warning: Cannot query video position: status=0, value=-1, duration=-1
I20210419 14:52:34.229313  2348 demo_run_graph_main.cc:79] Start running the calculator graph.
I20210419 14:52:34.229727  2348 demo_run_graph_main.cc:84] Start grabbing and processing frames.
W20210419 14:52:34.229823  2352 motion_analysis_calculator.cc:336] No input video header found. Downstream calculators expecting video headers are likely to fail.
I20210419 14:52:34.256707  2349 motion_analysis_calculator.cc:552] Analyzed frame 1

And then it just gets stuck; nothing happens...
sgowroji commented 3 years ago

Hi @KastusKalinovski, is it the Instant Motion Tracking solution you are referring to above? Can you provide more details so we can understand this better? Thanks!

zzmicer commented 3 years ago

Hi @sgowroji, not exactly; it is more about the MotionAnalysisCalculator visualization. I want to visualize the motion saliency bounding box (the blue one around the bear in the gif). I am referring to https://developers.googleblog.com/2019/12/object-detection-and-tracking-using-mediapipe.html

zzmicer commented 3 years ago

Hi @mcclanahoochie, @hadon, did you get a chance to look into my issue?

mcclanahoochie commented 3 years ago

I believe you need to add a SALIENCY stream output to capture the results

https://github.com/google/mediapipe/blob/ecb5b5f44ab23ea620ef97a479407c699e424aa7/mediapipe/calculators/video/motion_analysis_calculator.cc#L342
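
For reference, adding that output to the MotionAnalysisCalculator node from the graph above might look like this (untested sketch; the SALIENCY tag comes from the calculator source linked above, while the stream name "motion_saliency" is an arbitrary choice):

node: {
  calculator: "MotionAnalysisCalculator"
  input_stream: "VIDEO:input_video"
  output_stream: "FLOW:region_flow"
  output_stream: "CAMERA:camera_motion"
  # Emits the computed SalientPointFrames so downstream
  # calculators (or custom visualization) can consume them.
  output_stream: "SALIENCY:motion_saliency"
  output_stream: "VIZ:motion_analysis_viz"
  ...
}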

zzmicer commented 3 years ago

@mcclanahoochie, actually that was the first thing I tried; the problem remains (nothing happens).

mcclanahoochie commented 3 years ago

related: https://github.com/google/mediapipe/issues/1857

on second thought, the "SALIENCY" stream is only for custom visualization.

i tried modifying the box_tracking_cpu graph to test the visualization, based on what you have above.

setting these other two flags for motion analysis will unfreeze things:

         compute_motion_saliency:true
         select_saliency_inliers:false
         filter_saliency:false

but i think these settings are only needed if you output the VIZ stream from the motion analysis calc (i didn't test this), and not for the box tracker (just the saliency output).
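
Put together, the saliency-related fields in the MotionAnalysisCalculatorOptions of the graph above would then read something like this (untested sketch based only on the flags listed above; other options omitted):

node_options: {
  [type.googleapis.com/mediapipe.MotionAnalysisCalculatorOptions]: {
    analysis_options {
      analysis_policy: ANALYSIS_POLICY_CAMERA_MOBILE
      # Enable saliency computation, but disable the inlier
      # selection and temporal filtering steps that appear to
      # cause the hang on the first frame.
      compute_motion_saliency: true
      select_saliency_inliers: false
      filter_saliency: false
    }
  }
}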

setting options

      visualize_tracking_data: true
      visualize_state: true
      visualize_internal_state: true

in the box tracker calc, and adding both the input VIDEO and output VIZ streams, should enable the visualization (based on reading the calculator code), but i also didn't see any extra/debug trails. :/

I would try and see if you are getting these Render* functions correctly triggered https://github.com/google/mediapipe/blob/ecb5b5f44ab23ea620ef97a479407c699e424aa7/mediapipe/calculators/video/box_tracker_calculator.cc#L827

google-ml-butler[bot] commented 3 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you.

google-ml-butler[bot] commented 3 years ago

Are you satisfied with the resolution of your issue?