Open
knmac opened this issue 6 years ago

Hello,
I followed the tutorial to train the network with the ResNet50 setting. The training ran for 125K iterations and the losses went down (the final loss is around 1.9). But when I test the model, I get negative APs on multiple classes.
Do you know what the problem could be? Thank you very much.
Hi, I figured out that it was because the threshold is set to 0.7 in faster_rcnn/test_net.py: with a threshold that high, some classes end up with no detections at all, which presumably is what produced the negative APs. Changing the threshold to 0 lets me see the actual APs. However, the APs are really low. Here is what I get, using the default training and testing code:
Results:
0.002
0.032
0.003
0.009
0.002
0.002
0.031
0.004
0.004
0.005
0.002
0.007
0.021
0.004
0.044
0.002
0.009
0.003
0.002
0.002
0.010
~~~~~~~~
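The scores threshold matters because detections scoring below it are discarded before evaluation, so with a 0.7 threshold an under-trained model can leave some classes with no detections at all. A minimal sketch of that filtering step (the helper and argument names are hypothetical, not the repo's actual code):

import numpy as np

def filter_detections(scores, boxes, thresh=0.0):
    # Keep only detections whose score exceeds `thresh`. With a high
    # threshold such as 0.7, a weak detector may return no boxes at all
    # for some classes, leaving the VOC evaluator nothing to score.
    keep = np.where(scores > thresh)[0]
    return scores[keep], boxes[keep]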
Here is my train command:
python ./faster_rcnn/train_net.py \
--gpu 0 \
--weights ./data/pretrain_model/Resnet50.npy \
--imdb voc_2007_trainval \
--iters 150000 \
--cfg ./experiments/cfgs/faster_rcnn_end2end_resnet.yml \
--network Resnet50_train \
--set EXP_DIR exp_dir_resnet50
and my test command:
python ./faster_rcnn/test_net.py \
--gpu 0 \
--weights ./output/exp_dir_resnet50/voc_2007_trainval \
--imdb voc_2007_test \
--cfg ./experiments/cfgs/faster_rcnn_end2end_resnet.yml \
--network Resnet50_test
I already ran the test code inside lib/deform_conv_layer and lib/deform_psroi_pooling_layer and compared the outputs with my installation of the MXNet version. The results and gradients are similar, apart from some small numerical issues, i.e. some entries in the gradients differ by about 1e-6. So it looks like the libraries are built correctly.
Do you have any suggestion of what the issue could be here? Thank you.
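For reference, a sketch of the kind of numerical comparison described above, assuming the gradients from both implementations were saved as NumPy arrays for identical inputs (the file names are hypothetical):

import numpy as np

grad_tf = np.load('grad_tf.npy')     # gradient from this repo's TF op
grad_mx = np.load('grad_mxnet.npy')  # gradient from the MXNet reference

# Entries that differ by roughly 1e-6 are ordinary floating-point noise,
# not a sign of a broken build.
print('max abs diff:', np.abs(grad_tf - grad_mx).max())
assert np.allclose(grad_tf, grad_mx, atol=1e-5)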
I got the same problem. Have you solved it?