Hello,
I am currently using a Faster_RCNN model to perform object detection on my own dataset. In the Faster_RCNN config file I am using the coco_detection_metrics evaluation metrics, with iou_threshold: 0.6, max_detections_per_class: 100, and max_total_detections: 300.
When I evaluate the trained Faster_RCNN model on my dataset, I get 0.55 mAP for large objects. However, when I run predictions directly on images, the results look better than that mAP score suggests. I suspect the evaluation metrics or parameters may need to be changed to fit my problem. Has anyone dealt with a similar problem? If I need to change the evaluation metrics or parameters in the config file, what changes would give a more representative mAP?
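For context, here is a minimal sketch of how I have been breaking the COCO numbers down outside the training pipeline, using pycocotools directly. It assumes the eval-set ground truth and the model's detections are exported to COCO-format JSON; the two file names below are placeholders, not files from my actual setup.

from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Placeholder file names: COCO-format ground truth for my eval split and the
# detections exported from the trained model.
coco_gt = COCO("val_groundtruth.json")
coco_dt = coco_gt.loadRes("val_detections.json")

coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()

# coco_eval.stats follows the standard COCO ordering:
#   stats[0] = AP averaged over IoU 0.50:0.95 (the headline mAP)
#   stats[1] = AP at IoU 0.50
#   stats[5] = AP for large objects (the 0.55 I am seeing)
print("mAP@[0.50:0.95]:", coco_eval.stats[0])
print("AP@0.50:", coco_eval.stats[1])
print("AP large:", coco_eval.stats[5])

From this breakdown I can at least see whether the gap comes from the strict high-IoU thresholds (COCO mAP is averaged over IoU 0.50 to 0.95) rather than from outright misses.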
Thank you so much for your help!
My complete config file is included below, followed by a sketch of how I am experimenting with the evaluation-related fields.
model {
  faster_rcnn {
    num_classes: 6
    image_resizer {
      keep_aspect_ratio_resizer {
        min_dimension: 600
        max_dimension: 1024
      }
    }
    feature_extractor {
      type: 'faster_rcnn_inception_v2'
      first_stage_features_stride: 16
    }
    first_stage_anchor_generator {
      grid_anchor_generator {
        scales: [0.25, 0.5, 1.0, 2.0]
        aspect_ratios: [0.5, 1.0, 2.0]
        height_stride: 16
        width_stride: 16
      }
    }
    first_stage_box_predictor_conv_hyperparams {
      op: CONV
      regularizer {
        l2_regularizer {
          weight: 0.0
        }
      }
      initializer {
        truncated_normal_initializer {
          stddev: 0.01
        }
      }
    }
    first_stage_nms_score_threshold: 0.0
    first_stage_nms_iou_threshold: 0.7
    first_stage_max_proposals: 300
    first_stage_localization_loss_weight: 2.0
    first_stage_objectness_loss_weight: 1.0
    initial_crop_size: 14
    maxpool_kernel_size: 2
    maxpool_stride: 2
    second_stage_box_predictor {
      mask_rcnn_box_predictor {
        use_dropout: false
        dropout_keep_probability: 1.0
        fc_hyperparams {
          op: FC
          regularizer {
            l2_regularizer {
              weight: 0.0
            }
          }
          initializer {
            variance_scaling_initializer {
              factor: 1.0
              uniform: true
              mode: FAN_AVG
            }
          }
        }
      }
    }
    second_stage_post_processing {
      batch_non_max_suppression {
        score_threshold: 0.0
        iou_threshold: 0.6
        max_detections_per_class: 100
        max_total_detections: 300
      }
      score_converter: SOFTMAX
    }
    second_stage_localization_loss_weight: 2.0
    second_stage_classification_loss_weight: 1.0
  }
}
train_config: {
  batch_size: 1
  optimizer {
    momentum_optimizer: {
      learning_rate: {
        manual_step_learning_rate {
          initial_learning_rate: 0.0002
          schedule {
            step: 900000
            learning_rate: .00002
          }
          schedule {
            step: 1200000
            learning_rate: .000002
          }
        }
      }
      momentum_optimizer_value: 0.9
    }
    use_moving_average: false
  }
  gradient_clipping_by_norm: 10.0
  fine_tune_checkpoint: "/home/models/research/object_detection/faster_rcnn_inception_v2_coco_2018_01_28/model.ckpt"
  from_detection_checkpoint: true
  load_all_detection_checkpoint_vars: true
  num_steps: 200000
  data_augmentation_options {
    random_horizontal_flip {
    }
  }
}
train_input_reader: {
  tf_record_input_reader {
    input_path: "/data/datasets/mydataset/train_clear_day/train_clearday*.swedentfrecord"
  }
  label_map_path: "/home/models/research/object_detection/data/mydata_label_map.pbtxt"
}
eval_config: {
  metrics_set: "coco_detection_metrics"
  num_examples: 200
  max_evals: 10
}
eval_input_reader: {
  tf_record_input_reader {
    input_path: "/data/datasets/mydataset/val_clear_day/val_clearday*.swedentfrecord"
  }
  label_map_path: "/home/models/research/object_detection/data/mydata_label_map.pbtxt"
  shuffle: false
  num_readers: 1
}
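For experimenting with the evaluation-related fields, this is a rough sketch of how I edit the config programmatically instead of by hand, using the Object Detection API's config_util helpers. The pipeline config path and output directory are placeholders, the numeric values are only examples of changes I am considering (not settings I have validated), and include_metrics_per_category may depend on the API version.

from object_detection.utils import config_util

# Placeholder path to the pipeline config shown above.
configs = config_util.get_configs_from_pipeline_file("path/to/my_pipeline.config")

# Second-stage NMS parameters (these affect both inference and the boxes that
# reach the evaluator).
nms = configs["model"].faster_rcnn.second_stage_post_processing.batch_non_max_suppression
nms.score_threshold = 0.05  # example value, not validated
nms.iou_threshold = 0.6

# Evaluation settings.
eval_config = configs["eval_config"]
eval_config.metrics_set[:] = ["coco_detection_metrics"]
eval_config.include_metrics_per_category = True  # per-class AP, if my API version supports it

# Write the modified config back out as path/to/output_dir/pipeline.config.
pipeline_proto = config_util.create_pipeline_proto_from_configs(configs)
config_util.save_pipeline_config(pipeline_proto, "path/to/output_dir")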