LevinJ / SSD_tensorflow_VOC

Apache License 2.0

Why is the mAP higher than in the paper? #8

Closed CangHaiQingYue closed 7 years ago

CangHaiQingYue commented 7 years ago

I used the "ssd_300_vgg.cpkt" fine-tuned in the voc2007_train dataset, at step 57771, got mAP: AP_VOC07/mAP[0.83292667058394565]

But in the paper, the mAP is 81.6(07+12+coco_train)。 what wrong with my data...
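
For context, AP_VOC07 here refers to the PASCAL VOC 2007 11-point interpolated average precision. Below is a minimal NumPy sketch of that interpolation; the recall/precision arrays are illustrative placeholders, not values from this run:

```python
# Minimal sketch of the VOC07 11-point interpolated AP.
# The recall/precision arrays below are illustrative placeholders only.
import numpy as np

def voc07_ap(recall, precision):
    """Average the maximum precision at 11 evenly spaced recall thresholds."""
    ap = 0.0
    for t in np.arange(0.0, 1.1, 0.1):
        above = precision[recall >= t]
        ap += (above.max() if above.size else 0.0) / 11.0
    return ap

recall = np.array([0.1, 0.4, 0.7, 0.9])
precision = np.array([1.0, 0.95, 0.9, 0.8])
print(voc07_ap(recall, precision))  # averaged over the 11 recall points
```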

LevinJ commented 7 years ago

Hmm, that's interesting. Can you post the training/evaluation accuracy chart here so that I can take a closer look?

Also, just to confirm,

  1. You used ssd_300_vgg.ckpt to fine-tune, instead of the VGG pretrained on ImageNet as I did (a quick way to check is sketched after this list).
  2. You didn't modify the source code in this project (except the minor changes pointed out in the readme).
  3. 81.6% is the validation accuracy, not the training accuracy.
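
Regarding point 1, one way to double-check which checkpoint training actually started from is to list the variables it stores. A minimal sketch, assuming a TensorFlow 1.x setup; the scope names in the comments are the usual ones for these checkpoints, not verified against this run:

```python
# Sketch: list the variables stored in a checkpoint to confirm whether it is a
# full SSD checkpoint or an ImageNet-pretrained VGG backbone.
# Assumes TensorFlow 1.x; older versions expose the same helper as
# tf.contrib.framework.list_variables.
import tensorflow as tf

def summarize_checkpoint(ckpt_path):
    """Print every variable name and shape stored in the checkpoint."""
    for name, shape in tf.train.list_variables(ckpt_path):
        print(name, shape)

# Expectation (not verified here): an SSD checkpoint such as ssd_300_vgg.ckpt
# carries variables under an ssd_300_vgg/... scope, while a plain ImageNet VGG
# checkpoint only carries vgg_16/... variables.
summarize_checkpoint('../checkpoints/ssd_300_vgg.ckpt')
```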
CangHaiQingYue commented 7 years ago

Ehh, what I referred to is from balancap's work, not yours. I'm running your code. Following is the shell script I used:

```shell
DATASET_DIR=../tf_train
TRAIN_DIR=.././log_files/log_0.5/
CHECKPOINT_PATH=../checkpoints/ssd_300_vgg.ckpt

python3 ../train_ssd_network.py \
    --train_dir=${TRAIN_DIR} \
    --dataset_dir=${DATASET_DIR} \
    --dataset_name=pascalvoc_2007 \
    --dataset_split_name=train \
    --model_name=ssd_300_vgg \
    --checkpoint_path=${CHECKPOINT_PATH} \
    --save_summaries_secs=60 \
    --save_interval_secs=600 \
    --weight_decay=0.0005 \
    --optimizer=adam \
    --learning_rate=0.001 \
    --batch_size=16
```
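
As for the training/evaluation chart requested above, one option besides TensorBoard screenshots is to read the scalar summaries back out of the event files written to TRAIN_DIR. A minimal sketch, assuming TensorFlow 1.x; the 'total_loss' tag is an assumption and should be replaced with whatever tags TensorBoard actually shows for this run:

```python
# Sketch: read scalar summaries out of the TensorFlow event files in TRAIN_DIR,
# e.g. to plot the loss curve over training steps.
# Assumes TensorFlow 1.x; 'total_loss' is an assumed tag name.
import glob
import tensorflow as tf

def read_scalars(logdir, tag):
    """Return (step, value) pairs for one scalar tag across all event files."""
    points = []
    for event_file in sorted(glob.glob(logdir + '/events.out.tfevents.*')):
        for event in tf.train.summary_iterator(event_file):
            for value in event.summary.value:
                if value.tag == tag:
                    points.append((event.step, value.simple_value))
    return points

print(read_scalars('.././log_files/log_0.5/', 'total_loss')[:10])
```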