LXWDL opened this issue 6 years ago
I'm not sure whether the files I uploaded before were correct. I have retested and re-uploaded the models; please try them again. If you run the test with the model file I provided, you should get the same result as in my experiment.
./build/tools/caffe train --solver=models/release/pelee_coco_voc/solver.prototxt --weights=models/release/pelee_coco_voc/pelee_304x304_acc7637.caffemodel
why use "train" to test model "pelee_304x304_acc7637.caffemodel", i use your cmd, the log show "Finetuning from....",like this :
i do not know why
I get a lower score when running ./build/tools/caffe train --solver=models/release/pelee_coco_voc/solver.prototxt --weights=models/release/pelee_coco_voc/pelee_304x304_acc7637.caffemodel
I0322 03:14:51.507519 267 caffe.cpp:155] Finetuning from models/pelee_coco_voc/pelee_304x304_acc7637.caffemodel
I0322 03:14:51.560392 267 upgrade_proto.cpp:77] Attempting to upgrade batch norm layers using deprecated params: models/pelee_coco_voc/pelee_304x304_acc7637.caffemodel
I0322 03:14:51.560518 267 upgrade_proto.cpp:80] Successfully upgraded batch norm layers using deprecated params.
I0322 03:14:51.616464 267 upgrade_proto.cpp:77] Attempting to upgrade batch norm layers using deprecated params: models/pelee_coco_voc/pelee_304x304_acc7637.caffemodel
I0322 03:14:51.616597 267 upgrade_proto.cpp:80] Successfully upgraded batch norm layers using deprecated params.
I0322 03:14:51.622704 267 net.cpp:761] Ignoring source layer mbox_loss
I0322 03:14:51.624756 267 caffe.cpp:251] Starting Optimization
I0322 03:14:51.624775 267 solver.cpp:294] Solving pelee_SSD_304x304_train
I0322 03:14:51.624799 267 solver.cpp:295] Learning Rate Policy: multistep
I0322 03:14:52.162921 267 solver.cpp:332] Iteration 0, loss = 3.49044
I0322 03:14:52.162971 267 solver.cpp:433] Iteration 0, Testing net (#0)
I0322 03:14:52.196039 267 net.cpp:693] Ignoring source layer mbox_loss
I0322 03:14:58.536139 267 blocking_queue.cpp:50] Data layer prefetch queue empty
I0322 03:15:54.998646 267 solver.cpp:546] Test net output #0: detection_eval = 0.619632
I0322 03:15:54.999035 267 solver.cpp:337] Optimization Done.
I0322 03:15:54.999047 267 caffe.cpp:254] Optimization Done
any thoughts?
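As an aside, the score from a run like the one above can be pulled out of the saved console output with a short script. A minimal sketch in Python, assuming the glog output was redirected to a file named pelee_eval.log (a placeholder name):

import re

# Placeholder path: point this at wherever the Caffe console output was saved.
with open('pelee_eval.log') as f:
    log = f.read()

# The SSD-style solver prints a "detection_eval = <value>" line per test pass.
scores = [float(v) for v in re.findall(r'detection_eval = ([0-9.]+)', log)]
print(scores)  # e.g. [0.619632] for the log pasted above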
Hi,
I was able to get the score @Robert-JunWang mentioned. There was something wrong with the LMDB I created.
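Since a broken LMDB was the culprit here, a quick sanity check of the test LMDB can save time. A rough sketch, assuming an SSD-style Caffe build whose caffe_pb2 defines AnnotatedDatum (as in the Caffe used by Pelee) and a placeholder LMDB path:

import lmdb
from caffe.proto import caffe_pb2  # requires the repo's python/ dir on PYTHONPATH

# Placeholder path: adjust to the VOC test LMDB you generated.
env = lmdb.open('data/VOC0712/VOC0712_test_lmdb', readonly=True, lock=False)
with env.begin() as txn:
    # VOC2007 test has 4952 images, so the entry count should match that.
    print('entries:', txn.stat()['entries'])
    key, value = next(iter(txn.cursor()))
    sample = caffe_pb2.AnnotatedDatum()
    sample.ParseFromString(value)
    # Encoded image bytes and number of annotation groups for the first record.
    print(key, len(sample.datum.data), len(sample.annotation_group))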
Hello, I'm testing on VOC using the trained models you provided. The pelee_SSD_304x304_iter_112000.caffemodel model gives an mAP of 0.701925, while the mAP reported in your paper is 70.9. Likewise, the pelee_304x304_voc_coco_iter2k_7637.caffemodel model gives an mAP of 0.710299, while your reported result is 76.4. I don't know whether there is a problem with my test or some other setting is needed. Does anyone have the same problem? Thank you.
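If it helps to compare the two checkpoints side by side, they can be evaluated back to back with the same solver-based command used above and the detection_eval line printed for each run. A rough sketch only; the checkpoint locations (and whether both models share the same solver.prototxt) are assumptions that may need adjusting to your checkout:

import subprocess

SOLVER = 'models/release/pelee_coco_voc/solver.prototxt'
# Checkpoint paths are assumed; adjust them (and the matching solver, if the
# two models use different configs) to your checkout.
WEIGHTS = [
    'models/release/pelee_coco_voc/pelee_SSD_304x304_iter_112000.caffemodel',
    'models/release/pelee_coco_voc/pelee_304x304_voc_coco_iter2k_7637.caffemodel',
]

for w in WEIGHTS:
    # Same evaluation command as above; glog writes its output to stderr.
    run = subprocess.run(
        ['./build/tools/caffe', 'train', '--solver=' + SOLVER, '--weights=' + w],
        capture_output=True, text=True)
    for line in run.stderr.splitlines():
        if 'detection_eval' in line:
            print(w, line.split('=')[-1].strip())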