google-ai-edge / mediapipe

Cross-platform, customizable ML solutions for live and streaming media.
https://ai.google.dev/edge/mediapipe
Apache License 2.0

Failed to run the graph while running mediapipe application for iris tracking #3359

Closed anvuong48 closed 1 year ago

anvuong48 commented 2 years ago

Hi everyone,

I am trying to run the MediaPipe iris tracking application after building it successfully. My command line is below:

```
bazel-bin\mediapipe\examples\desktop\iris_tracking\iris_tracking_cpu_video_input --calculator_graph_config_file=mediapipe/graphs/iris_tracking/iris_tracking_cpu_video_input.pbtxt --input_side_packets=input_video_path="D://anv1581s//anv//testvid1.mp4",output_video_path="D://anv1581s//anv//VideosMediapipe//testout4.mp4"
```

After running this command, it starts to run the calculator graph, but then I get the errors below:

[screenshot of the error output]

I created my own tflite model and applied it to this application, but it always returns this error. Does anyone have an idea how to solve this problem? Thank you very much.
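
A common first check when a swapped-in .tflite fails inside a MediaPipe graph is to print the custom model's input and output tensors and compare them against what the graph's inference calculators expect. A minimal sketch, assuming TensorFlow is installed and tflite_test1.tflite (the file produced by the convert.py step described later in this thread) is in the working directory:

```python
import tensorflow as tf

# Load the converted model and inspect its I/O signature.
interpreter = tf.lite.Interpreter(model_path="tflite_test1.tflite")
interpreter.allocate_tensors()

for detail in interpreter.get_input_details():
    print("input:", detail["name"], detail["shape"], detail["dtype"])
for detail in interpreter.get_output_details():
    print("output:", detail["name"], detail["shape"], detail["dtype"])
```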

Best regards

sureshdagooglecom commented 2 years ago

Hi @anvuong48, could you provide the steps to reproduce this issue?

anvuong48 commented 2 years ago

Hi @sureshdagooglecom,

  1. I used the link below to set up my environment: https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/install.html

  2. I used this link to train and export my model: https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/training.html#training-the-model

  3. My environment settings:

     • I used SSD MobileNet V2 320x320 as the detector (you can find it in the detection zoo: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/tf2_detection_zoo.md)
     • My pipeline.config file for training the model is below:

```
model {
  ssd {
    num_classes: 2
    image_resizer {
      fixed_shape_resizer {
        height: 320
        width: 320
      }
    }
    feature_extractor {
      type: "ssd_mobilenet_v2_keras"
      depth_multiplier: 1.0
      min_depth: 16
      conv_hyperparams {
        regularizer {
          l2_regularizer {
            weight: 3.9999998989515007e-05
          }
        }
        initializer {
          truncated_normal_initializer {
            mean: 0.0
            stddev: 0.029999999329447746
          }
        }
        activation: RELU_6
        batch_norm {
          decay: 0.9700000286102295
          center: true
          scale: true
          epsilon: 0.0010000000474974513
          train: true
        }
      }
      override_base_feature_extractor_hyperparams: true
    }
    box_coder {
      faster_rcnn_box_coder {
        y_scale: 10.0
        x_scale: 10.0
        height_scale: 5.0
        width_scale: 5.0
      }
    }
    matcher {
      argmax_matcher {
        matched_threshold: 0.5
        unmatched_threshold: 0.5
        ignore_thresholds: false
        negatives_lower_than_unmatched: true
        force_match_for_each_row: true
        use_matmul_gather: true
      }
    }
    similarity_calculator {
      iou_similarity {}
    }
    box_predictor {
      convolutional_box_predictor {
        conv_hyperparams {
          regularizer {
            l2_regularizer {
              weight: 3.9999998989515007e-05
            }
          }
          initializer {
            random_normal_initializer {
              mean: 0.0
              stddev: 0.009999999776482582
            }
          }
          activation: RELU_6
          batch_norm {
            decay: 0.9700000286102295
            center: true
            scale: true
            epsilon: 0.0010000000474974513
            train: true
          }
        }
        min_depth: 0
        max_depth: 0
        num_layers_before_predictor: 0
        use_dropout: false
        dropout_keep_probability: 0.800000011920929
        kernel_size: 1
        box_code_size: 4
        apply_sigmoid_to_scores: false
        class_prediction_bias_init: -4.599999904632568
      }
    }
    anchor_generator {
      ssd_anchor_generator {
        num_layers: 6
        min_scale: 0.20000000298023224
        max_scale: 0.949999988079071
        aspect_ratios: 1.0
        aspect_ratios: 2.0
        aspect_ratios: 0.5
        aspect_ratios: 3.0
        aspect_ratios: 0.33329999446868896
      }
    }
    post_processing {
      batch_non_max_suppression {
        score_threshold: 9.99999993922529e-09
        iou_threshold: 0.6000000238418579
        max_detections_per_class: 100
        max_total_detections: 100
        use_static_shapes: false
      }
      score_converter: SIGMOID
    }
    normalize_loss_by_num_matches: true
    loss {
      localization_loss {
        weighted_smooth_l1 {
          delta: 1.0
        }
      }
      classification_loss {
        weighted_sigmoid_focal {
          gamma: 2.0
          alpha: 0.75
        }
      }
      classification_weight: 1.0
      localization_weight: 1.0
    }
    encode_background_as_zeros: true
    normalize_loc_loss_by_codesize: true
    inplace_batchnorm_update: true
    freeze_batchnorm: false
  }
}
train_config {
  batch_size: 8
  data_augmentation_options {
    random_horizontal_flip {}
  }
  data_augmentation_options {
    ssd_random_crop {}
  }
  data_augmentation_options {
    random_rotation90 {}
  }
  sync_replicas: true
  optimizer {
    momentum_optimizer {
      learning_rate {
        cosine_decay_learning_rate {
          learning_rate_base: 0.0300000011920929
          total_steps: 50000
          warmup_learning_rate: 0.013333000242710114
          warmup_steps: 2000
        }
      }
      momentum_optimizer_value: 0.8999999761581421
    }
    use_moving_average: false
  }
  fine_tune_checkpoint: "pre-trained-models/ssd_mobilenet_v2_320x320_coco17_tpu-8/checkpoint/ckpt-0"
  num_steps: 50000
  startup_delay_steps: 0.0
  replicas_to_aggregate: 8
  max_number_of_boxes: 100
  unpad_groundtruth_tensors: false
  fine_tune_checkpoint_type: "detection"
  fine_tune_checkpoint_version: V2
}
train_input_reader {
  label_map_path: "annotations/label_map.pbtxt"
  tf_record_input_reader {
    input_path: "annotations/train.record"
  }
}
eval_config {
  metrics_set: "coco_detection_metrics"
  use_moving_averages: false
}
eval_input_reader {
  label_map_path: "annotations/label_map.pbtxt"
  shuffle: false
  num_epochs: 1
  tf_record_input_reader {
    input_path: "annotations/test.record"
  }
}
```
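
As a sanity check, the config above can be parsed programmatically to confirm it is well-formed. A minimal sketch, assuming the TF Object Detection API package is installed and an illustrative local path:

```python
from google.protobuf import text_format
from object_detection.protos import pipeline_pb2

# Parse pipeline.config; a text_format.ParseError here would point to a
# malformed field. The path is an assumption for this example.
config = pipeline_pb2.TrainEvalPipelineConfig()
with open("models/my_ssd_mobilenet_model/pipeline.config", "r") as f:
    text_format.Parse(f.read(), config)

print(config.model.ssd.num_classes)  # -> 2
```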

  4. After that, I exported my model following this link: https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/training.html#exporting-a-trained-model

  5. Then I created a new file named convert.py with the content below:

```python
import tensorflow as tf

# Convert the exported SavedModel to a TFLite flatbuffer.
converter = tf.lite.TFLiteConverter.from_saved_model(
    "exported-models/my_ssd_mobilenet_model/saved_model/")
tflite_model = converter.convert()

# Write the converted model to disk.
with open("tflite_test1.tflite", "wb") as f:
    f.write(tflite_model)
```

With this convert.py file, I can export my trained model as a TFLite model.
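
One caveat worth noting: for SSD detection models, the TF Object Detection API documentation recommends first producing a TFLite-friendly SavedModel with models/research/object_detection/export_tflite_graph_tf2.py and converting that, rather than converting the output of the standard exporter directly. A minimal sketch of that conversion step, assuming the export script has already been run and using an illustrative output path:

```python
import tensorflow as tf

# Assumes export_tflite_graph_tf2.py has already written a TFLite-friendly
# SavedModel to this directory; the directory name is an assumption.
converter = tf.lite.TFLiteConverter.from_saved_model(
    "exported-models/my_ssd_mobilenet_model/tflite_saved_model/saved_model")
tflite_model = converter.convert()

with open("tflite_test1_tflite_friendly.tflite", "wb") as f:
    f.write(tflite_model)
```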

  6. Next, I installed MediaPipe for Windows 10 following this link: https://google.github.io/mediapipe/getting_started/install.html#installing-on-windows

  7. After installing and running Hello World successfully, I ran MediaPipe for iris tracking as below (a consolidated Python wrapper follows this list):

     • I went to this path: D:\anv1581s\mediapipe_repo\mediapipe\mediapipe\models, then copied the tflite file I created above into it.
     • I opened cmd and cd'd to D:\anv1581s\mediapipe_repo\mediapipe\mediapipe.
     • I built MediaPipe with this command:

```
bazel build -c opt --define MEDIAPIPE_DISABLE_GPU=1 --action_env PYTHON_BIN_PATH="C:/Users/anv1581s/AppData/Local/Programs/Python/Python310/python.exe" mediapipe/examples/desktop/iris_tracking:iris_tracking_cpu_video_input
```

     • As the next step, I ran `set GLOG_logtostderr=1`.
     • Then I ran MediaPipe with the command below:

```
bazel-bin\mediapipe\examples\desktop\iris_tracking\iris_tracking_cpu_video_input --calculator_graph_config_file=mediapipe/graphs/iris_tracking/iris_tracking_cpu_video_input.pbtxt --input_side_packets=input_video_path="D://anv1581s//anv//testvid1.mp4",output_video_path="D://anv1581s//anv//VideosMediapipe//testout4.mp4"
```
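
The last two run steps can be combined into a small Python wrapper. A minimal sketch; the paths and file names are the ones from this thread, and the wrapper itself is illustrative:

```python
import os
import subprocess

# Equivalent of `set GLOG_logtostderr=1` followed by the run command above.
env = dict(os.environ, GLOG_logtostderr="1")
cmd = [
    r"bazel-bin\mediapipe\examples\desktop\iris_tracking\iris_tracking_cpu_video_input",
    "--calculator_graph_config_file="
    "mediapipe/graphs/iris_tracking/iris_tracking_cpu_video_input.pbtxt",
    "--input_side_packets="
    "input_video_path=D:/anv1581s/anv/testvid1.mp4,"
    "output_video_path=D:/anv1581s/anv/VideosMediapipe/testout4.mp4",
]
subprocess.run(cmd, env=env, check=True)
```
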
google-ml-butler[bot] commented 2 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you.

google-ml-butler[bot] commented 2 years ago

Closing as stale. Please reopen if you'd like to work on this further.

google-ml-butler[bot] commented 2 years ago

Are you satisfied with the resolution of your issue?

kuaashish commented 1 year ago

Hello @anvuong48, we are upgrading the MediaPipe Legacy Solutions to the new MediaPipe Solutions. However, the libraries, documentation, and source code for all the MediaPipe Legacy Solutions will continue to be available in our GitHub repository and through library distribution services, such as Maven and NPM.

You can continue to use those legacy solutions in your applications if you choose. However, we would request that you check out the new MediaPipe Solutions, which can help you more easily build and customize ML solutions for your applications. These new solutions provide a superset of the capabilities available in the legacy solutions. Thank you.
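
For iris landmarks specifically, the closest of the new solutions is the Face Landmarker task, whose 478-point mesh includes the iris points. A minimal sketch, assuming the mediapipe Python package is installed and a face_landmarker.task model file has been downloaded locally:

```python
import mediapipe as mp
from mediapipe.tasks import python
from mediapipe.tasks.python import vision

# face_landmarker.task must be downloaded separately from the MediaPipe
# model assets; the local path here is an assumption.
options = vision.FaceLandmarkerOptions(
    base_options=python.BaseOptions(model_asset_path="face_landmarker.task"),
    num_faces=1,
)
detector = vision.FaceLandmarker.create_from_options(options)

# Run on a single image; "frame.png" is a placeholder input file.
image = mp.Image.create_from_file("frame.png")
result = detector.detect(image)

if result.face_landmarks:
    # In the 478-point mesh, indices 468-477 are the iris landmarks.
    iris = result.face_landmarks[0][468:478]
    print([(lm.x, lm.y, lm.z) for lm in iris])
```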

github-actions[bot] commented 1 year ago

This issue has been marked stale because it has had no recent activity for 7 days. It will be closed if no further activity occurs. Thank you.

github-actions[bot] commented 1 year ago

This issue was closed due to lack of activity after being marked stale for the past 7 days.

google-ml-butler[bot] commented 1 year ago

Are you satisfied with the resolution of your issue?