kevinlin311tw / Caffe-DeepBinaryCode

Supervised Semantics-preserving Deep Hashing (TPAMI18)
https://arxiv.org/abs/1507.00101v2

Cannot converge when I switch from cifar10 to the caltech101 dataset #9

Closed wangzhenhua2015 closed 8 years ago

wangzhenhua2015 commented 8 years ago

Help me figure it out, thanks. Below are the steps I followed in my experiment.

A: I split caltech101 into two disjoint sets: train.txt with 7357 images and val.txt with 1788 images, then packed them into LevelDB by running create_imagenet.sh.

B: I trained the net following the same steps I used for cifar10. I did not modify the training or Caffe net parameters, and used the default of 48 bits.

C: While training the model, I noticed the log output below. This is not what I expected; by contrast, when I finished training on cifar10 I got "Test net output #0: accuracy = 0.903437".

FYI

I0903 21:10:34.141065 29794 solver.cpp:317] Iteration 50000, loss = 15.486
I0903 21:10:34.141099 29794 solver.cpp:337] Iteration 50000, Testing net (#0)
I0903 21:10:41.981957 29794 solver.cpp:404] Test net output #0: accuracy = 0.0915625
I0903 21:10:41.982012 29794 solver.cpp:404] Test net output #1: loss: 50%-fire-rate = 0.00132921 (* 1 = 0.00132921 loss)
I0903 21:10:41.982022 29794 solver.cpp:404] Test net output #2: loss: classfication-error = 12.1441 (* 1 = 12.1441 loss)
I0903 21:10:41.982028 29794 solver.cpp:404] Test net output #3: loss: forcing-binary = -0.00390625 (* 1 = -0.00390625 loss)
I0903 21:10:41.982033 29794 solver.cpp:322] Optimization Done.
I0903 21:10:41.982038 29794 caffe.cpp:254] Optimization Done.

kevinlin311tw commented 8 years ago

Since caltech101 dataset has 101 object categories, you need to modify the softmax layer to 101 nodes.
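In Caffe this means editing the final InnerProduct layer that feeds the softmax loss in the train/test prototxt. The layer and blob names below follow common CaffeNet conventions and may differ in this repo's net definition, so treat this as a sketch of the one parameter that must change:

```
layer {
  name: "fc8"
  type: "InnerProduct"
  bottom: "fc7"  # in SSDH the bottom may be the latent binary layer instead
  top: "fc8"
  inner_product_param {
    num_output: 101  # was 10 for cifar10; caltech101 has 101 object categories
  }
}
```

When fine-tuning from pretrained weights, the resized layer is usually also renamed (e.g. "fc8" to "fc8-caltech101") so Caffe reinitializes it instead of trying to load the old 10-way weights.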


wangzhenhua2015 commented 8 years ago

Solved. Thanks.

I got an accuracy of nearly 0.9 after I changed the softmax layer as you advised and ran a new experiment.