tensorflow / models

Models and examples built with TensorFlow

Trouble evaluating model during training using model_main_tf2.py #10304

Open Nozoomhs opened 2 years ago

Nozoomhs commented 2 years ago

I am having trouble evaluating my training process while training a TensorFlow 2 custom object detector. After reading several issues related to this problem, I found that evaluation and training should be treated as two separate processes, so I should use a new Anaconda prompt to start the evaluation job. I am training on the ssd_mobilenet_v2 640x640 version. I would like to monitor evaluation on TensorBoard to see whether my model is overfitting or not. My pipeline configuration:

```
model {
  ssd {
    num_classes: 6
    image_resizer {
      fixed_shape_resizer {
        height: 640
        width: 640
      }
    }
    feature_extractor {
      type: "ssd_mobilenet_v2_fpn_keras"
      depth_multiplier: 1.0
      min_depth: 16
      conv_hyperparams {
        regularizer {
          l2_regularizer {
            weight: 3.9999998989515007e-05
          }
        }
        initializer {
          random_normal_initializer {
            mean: 0.0
            stddev: 0.009999999776482582
          }
        }
        activation: RELU_6
        batch_norm {
          decay: 0.996999979019165
          scale: true
          epsilon: 0.0010000000474974513
        }
      }
      use_depthwise: true
      override_base_feature_extractor_hyperparams: true
      fpn {
        min_level: 3
        max_level: 7
        additional_layer_depth: 128
      }
    }
    box_coder {
      faster_rcnn_box_coder {
        y_scale: 10.0
        x_scale: 10.0
        height_scale: 5.0
        width_scale: 5.0
      }
    }
    matcher {
      argmax_matcher {
        matched_threshold: 0.5
        unmatched_threshold: 0.5
        ignore_thresholds: false
        negatives_lower_than_unmatched: true
        force_match_for_each_row: true
        use_matmul_gather: true
      }
    }
    similarity_calculator {
      iou_similarity {
      }
    }
    box_predictor {
      weight_shared_convolutional_box_predictor {
        conv_hyperparams {
          regularizer {
            l2_regularizer {
              weight: 3.9999998989515007e-05
            }
          }
          initializer {
            random_normal_initializer {
              mean: 0.0
              stddev: 0.009999999776482582
            }
          }
          activation: RELU_6
          batch_norm {
            decay: 0.996999979019165
            scale: true
            epsilon: 0.0010000000474974513
          }
        }
        depth: 128
        num_layers_before_predictor: 4
        kernel_size: 3
        class_prediction_bias_init: -4.599999904632568
        share_prediction_tower: true
        use_depthwise: true
      }
    }
    anchor_generator {
      multiscale_anchor_generator {
        min_level: 3
        max_level: 7
        anchor_scale: 4.0
        aspect_ratios: 1.0
        aspect_ratios: 2.0
        aspect_ratios: 0.5
        scales_per_octave: 2
      }
    }
    post_processing {
      batch_non_max_suppression {
        score_threshold: 9.99999993922529e-09
        iou_threshold: 0.6000000238418579
        max_detections_per_class: 100
        max_total_detections: 100
        use_static_shapes: false
      }
      score_converter: SIGMOID
    }
    normalize_loss_by_num_matches: true
    loss {
      localization_loss {
        weighted_smooth_l1 {
        }
      }
      classification_loss {
        weighted_sigmoid_focal {
          gamma: 2.0
          alpha: 0.25
        }
      }
      classification_weight: 1.0
      localization_weight: 1.0
    }
    encode_background_as_zeros: true
    normalize_loc_loss_by_codesize: true
    inplace_batchnorm_update: true
    freeze_batchnorm: false
  }
}
train_config {
  batch_size: 4
  data_augmentation_options {
    random_horizontal_flip {
    }
  }
  #data_augmentation_options {
    #random_crop_image {
      #min_object_covered: 0.0
      #min_aspect_ratio: 0.75
      #max_aspect_ratio: 3.0
      #min_area: 0.75
      #max_area: 1.0
      #overlap_thresh: 0.0
    #}
  #}
  optimizer {
    momentum_optimizer {
      learning_rate {
        cosine_decay_learning_rate {
          learning_rate_base: 0.04999999821186066
          total_steps: 50000
          warmup_learning_rate: 0.0026666000485420227
          warmup_steps: 600
        }
      }
      momentum_optimizer_value: 0.8999999761581421
    }
    use_moving_average: false
  }
  fine_tune_checkpoint: "pre-trained-models\ssd_mobilenet_v2_fpnlite_640x640_coco17_tpu-8\checkpoint\ckpt-0"
  num_steps: 50000
  startup_delay_steps: 0.0
  replicas_to_aggregate: 8
  max_number_of_boxes: 100
  unpad_groundtruth_tensors: false
  fine_tune_checkpoint_type: "detection"
  fine_tune_checkpoint_version: V2
  from_detection_checkpoint: true
}
train_input_reader {
  label_map_path: "annotations/label_map.pbtxt"
  tf_record_input_reader {
    input_path: "data/train.record"
  }
}
eval_config {
  metrics_set: "coco_detection_metrics"
  use_moving_averages: false
}
eval_input_reader {
  label_map_path: "annotations/label_map.pbtxt"
  shuffle: false
  num_epochs: 1
  tf_record_input_reader {
    input_path: "data/test.record"
  }
}
```

I started the training with this command:

```
python model_main_tf2.py --model_dir=models/my_ssd2_3/ --pipeline_config_path=models/my_ssd2_3/pipeline.config --sample_1_of_n_eval_examples 1 --logtostderr
```

I was hoping that setting the number of evaluation examples would have the effect of starting an evaluation job alongside training. In any case, I tried running the evaluation in a different terminal window with:

```
python model_main_tf2.py --model_dir=models/my_ssd2_3 --pipeline_config_path=models/my_ssd2_3/pipeline.config --checkpoint_dir=models/my_ssd2_3/ --alsologtostderr
```

As soon as I start the evaluation, the training job crashes with this error:

[screenshot "monitor2": error output from the crashed training job]

I think the problem is that I do not have sufficient hardware:

8 GB RAM, NVIDIA GTX 960M (2 GB VRAM)

Could it be a problem that all the input images I use are 3000x3000, so the preprocessor has to load too much data? If so, is there any way to work around it? I would not want to resize all the images before generating the TFRecord file, because I would have to re-label all the images. I clearly lack insight into how memory is allocated at the start of the training process, so some details would be much appreciated.
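From what I have read, TensorFlow by default reserves almost all GPU memory at process start, so two processes sharing one 2 GB card will collide. A minimal sketch of enabling on-demand allocation instead (this would have to run at the top of model_main_tf2.py before any op touches the GPU; I have not verified it is enough for this card):

```python
import tensorflow as tf

# Ask TF to grow GPU memory on demand instead of reserving
# (nearly) all of it up front. Must be called before the GPU
# is initialized by any operation.
for gpu in tf.config.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)
```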


A second question: while monitoring the training on TensorBoard, the images are displayed with varying brightness. I tried changing line 627 of model_lib_v2.py to:

```
data = (features[fields.InputDataFields.image] - np.min(features[fields.InputDataFields.image])) / (np.max(features[fields.InputDataFields.image]) - np.min(features[fields.InputDataFields.image]))
```


This was suggested in this issue, but it did not help. Is there a solution to this problem? It would also be nice if I could monitor the bounding boxes the model proposes there. Thank you.
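In case it is relevant, the same min-max rescaling can also be written with TensorFlow ops instead of NumPy, which stays graph-safe when the tensor is not an eager/NumPy array (a sketch only, reusing the `features` and `fields` names already present in model_lib_v2.py; I have not verified it fixes the brightness):

```python
# Min-max rescale the logged image batch to [0, 1] with TF ops.
img = features[fields.InputDataFields.image]
img_min = tf.reduce_min(img)
img_max = tf.reduce_max(img)
data = (img - img_min) / (img_max - img_min)
```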

girishmk0602 commented 2 years ago

Same problem, I am not able to run train and eval in different terminals. Is there any way model_main_tf2 can train and eval in the same process?

jacoverster commented 2 years ago

Hi, for anyone struggling with this: you have to limit evaluation to CPU only by setting CUDA_VISIBLE_DEVICES=-1 at the start. For example:

```
CUDA_VISIBLE_DEVICES=-1 python .../model_main_tf2.py \
    --model_dir=... \
    --pipeline_config_path=.../pipeline.config \
    --checkpoint_dir=... \
    --sample_1_of_n_eval_examples=1
```
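Note that the `VAR=value command` prefix above is a POSIX-shell form and will not work in a Windows cmd prompt (which the original poster appears to be using, given the backslash paths). A hedged alternative sketch is to hide the GPU from inside Python before TensorFlow is imported, e.g. at the very top of the evaluation process:

```python
# Must run before TensorFlow initializes CUDA,
# i.e. before `import tensorflow` anywhere in the process.
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

import tensorflow as tf  # imported only after the GPU is hidden
```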