Mazharul-Hossain opened this issue 5 years ago
Did you find any script for that?
Facing same issue :(
I asked the same question on Stack Overflow and got an answer there. I have tried it once; however, I cannot guarantee that it works perfectly.
@tispratik @Mazharul-Hossain Can either of you share your code for evaluating the TensorFlow object detection model on the validation set? Thank you :-)
@Krishna2709 did you find anything?
any update on this?
@Mazharul-Hossain I don't get why, in my case, model_main.py evaluates the test frames. I use the following commands:
set CONFIG_FILE=C:\Users\petros.katsoulakos\models2\models-master\research\object_detection\training\ssd_mobilenet_v2_quantized_300x300_coco_custom_aspect_ratios.config
set OUTPUT_DIR=C:\Users\petros.katsoulakos\models2\models-master\research\object_detection\tensorboard_outputs\after_training\eval_train_data
set CHECKPOINT_PATH=C:\Users\petros.katsoulakos\models2\models-master\research\object_detection\training\model.ckpt-200000
python model_main.py --pipeline_config_path=%CONFIG_FILE% --model_dir=%OUTPUT_DIR% --eval_training_data=True --checkpoint_dir=%CHECKPOINT_PATH% --run_once=True
Must I change the eval_input_reader in the config file to point to train.record instead of test.record? Furthermore, the command doesn't recognize that I already have model checkpoints and starts training from the beginning every time....
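For what it's worth, if I read the flag descriptions in model_main.py right, --checkpoint_dir is a path to a directory holding a checkpoint: when it is supplied, the binary runs in eval-only mode and writes the resulting metrics to --model_dir (and --eval_training_data=True only works in that eval-only mode). It also appears to swap the training input in by itself, so eval_input_reader should not need to be repointed at train.record. A minimal sketch of the eval-only call under those assumptions, reusing the paths from the comment above but aiming --checkpoint_dir at the directory containing model.ckpt-200000 rather than at the checkpoint prefix:

rem Sketch, not verified on this setup: --checkpoint_dir takes the folder
rem that holds the checkpoint files, and the eval metrics go to --model_dir.
set CONFIG_FILE=C:\Users\petros.katsoulakos\models2\models-master\research\object_detection\training\ssd_mobilenet_v2_quantized_300x300_coco_custom_aspect_ratios.config
set OUTPUT_DIR=C:\Users\petros.katsoulakos\models2\models-master\research\object_detection\tensorboard_outputs\after_training\eval_train_data
set CHECKPOINT_DIR=C:\Users\petros.katsoulakos\models2\models-master\research\object_detection\training

rem Single eval-only pass over the training data.
python model_main.py ^
    --pipeline_config_path=%CONFIG_FILE% ^
    --model_dir=%OUTPUT_DIR% ^
    --checkpoint_dir=%CHECKPOINT_DIR% ^
    --eval_training_data=True ^
    --run_once=True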
System information
Describe the problem
To identify overfitting, I need performance numbers on both the training data and the validation data. I am using my own data, and my mask_rcnn_resnet101_atrous_coco network is not performing well on the validation dataset. So I wanted to know the performance of the network on the training data and the validation data during a training session.
I found similar problems on Stack Overflow but no solution.
I used --eval_training_data=True as a parameter, but the run performed worse than the run without it. However, --eval_training_data=True is supposed to run the evaluation on the training dataset, and so should report a better precision (mAP) than the validation dataset does. I could not find any option to run the calculation on both datasets at the same time and report them separately during a training session.

Source code / logs
Evaluation result with --eval_training_data=True included:

Evaluation result without --eval_training_data:

How do I run the script to compute accuracy (mAP) on both the validation set and the training set (randomly sampling some percentage of it) and report them separately during a training session?
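One workaround, sketched here on the assumption that extra eval-only jobs can run alongside the trainer: keep the normal training job, and launch two continuous evaluators against the same checkpoint directory, one with --eval_training_data=True and one without, each writing to its own --model_dir so TensorBoard can plot the two mAP curves separately. model_main.py's --sample_1_of_n_eval_on_train_examples flag keeps the train-data pass cheap by scoring only every n-th example (a fixed 1-of-n subsample rather than a random percentage). All paths below are placeholders.

rem Illustrative only -- adjust CONFIG_FILE and TRAIN_DIR to your project.
set CONFIG_FILE=path\to\pipeline.config
set TRAIN_DIR=path\to\train_dir

rem 1) The usual training job; checkpoints land in %TRAIN_DIR%.
start python model_main.py --pipeline_config_path=%CONFIG_FILE% --model_dir=%TRAIN_DIR%

rem 2) Continuous eval on roughly a tenth of the training data
rem    (no --run_once, so each new checkpoint gets re-evaluated).
start python model_main.py --pipeline_config_path=%CONFIG_FILE% --model_dir=%TRAIN_DIR%\eval_on_train --checkpoint_dir=%TRAIN_DIR% --eval_training_data=True --sample_1_of_n_eval_on_train_examples=10

rem 3) Continuous eval on the validation data.
start python model_main.py --pipeline_config_path=%CONFIG_FILE% --model_dir=%TRAIN_DIR%\eval_on_val --checkpoint_dir=%TRAIN_DIR%

rem Point TensorBoard at the parent directory to overlay the curves:
rem tensorboard --logdir=%TRAIN_DIR%

If GPU memory is too tight for three processes at once, running set CUDA_VISIBLE_DEVICES=-1 before launching the two evaluators should push them onto the CPU.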