kevinlin311tw / Caffe-DeepBinaryCode

Supervised Semantics-preserving Deep Hashing (TPAMI18)
https://arxiv.org/abs/1507.00101v2

No batchnorm layer in your source? #19

Open SarthakYadav opened 7 years ago

SarthakYadav commented 7 years ago

I am trying to use ResNet-50, but your src has no files corresponding to batch_norm layers, so the compiled binary has no concept of "BatchNorm". How can one build it with batch_norm support? Dropping in the batch_norm code from the latest Caffe obviously throws a lot of errors.
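For context, newer Caffe releases implement batch normalization as a BatchNorm layer followed by a Scale layer; a minimal sketch of the pattern the standard ResNet-50 prototxts use, with illustrative layer/blob names:

```
# Batch normalization in newer Caffe: BatchNorm normalizes the input,
# and the following Scale layer applies the learned gamma/beta.
layer {
  name: "bn_conv1"
  type: "BatchNorm"
  bottom: "conv1"
  top: "conv1"
  batch_norm_param { use_global_stats: true }  # true at test time
}
layer {
  name: "scale_conv1"
  type: "Scale"
  bottom: "conv1"
  top: "conv1"
  scale_param { bias_term: true }
}
```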

kevinlin311tw commented 7 years ago

Since this Caffe is based on an old version, it doesn't support batch normalization.

One possible solution is to add our loss functions to a newer Caffe. Then you can modify the last few layers of the deep network and learn binary codes, as sketched below.
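Concretely, one way that surgery could look on top of a newer-Caffe ResNet-50, assuming a 48-bit code and 1000 classes; the layer/blob names are illustrative, and the repo's SSDH-specific loss layers are only indicated as placeholders since their type names are not standard Caffe:

```
# Replace fc1000 of ResNet-50 with a latent layer that produces the codes.
layer {
  name: "latent"
  type: "InnerProduct"
  bottom: "pool5"
  top: "latent"
  inner_product_param { num_output: 48 }  # 48-bit binary code
}
layer {
  name: "latent_sigmoid"
  type: "Sigmoid"
  bottom: "latent"
  top: "latent_sigmoid"
}
# Classification head on top of the latent layer.
layer {
  name: "fc_class"
  type: "InnerProduct"
  bottom: "latent_sigmoid"
  top: "fc_class"
  inner_product_param { num_output: 1000 }
}
layer {
  name: "loss_cls"
  type: "SoftmaxWithLoss"
  bottom: "fc_class"
  bottom: "label"
  top: "loss_cls"
}
# The ported SSDH objectives (the binarization / balanced-bit losses from
# this repo's src) would attach to "latent_sigmoid" here once they compile
# against the newer Caffe; their layer type names are omitted above.
```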

SarthakYadav commented 7 years ago

I actually added the source code for the loss function implementations, but they wouldn't compile against the newer Caffe versions.

kevinlin311tw commented 7 years ago

Sorry for the late reply; I have been busy with my current projects.

If you want to generate binary codes quickly and avoid additional implementation work, you can simply train the model with a latent layer + sigmoid activation function + softmax loss (this is exactly our workshop paper). It already learns high-quality and somewhat evenly distributed binary codes; at retrieval time you just binarize the latent sigmoid outputs, as sketched below.
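A minimal sketch of reading the codes out at deploy time, assuming the latent sigmoid blob is named "latent_sigmoid" as in the sketch above: thresholding its activations at 0.5 with Caffe's stock Threshold layer yields the binary codes.

```
# Deploy-time binarization: y = (x > 0.5) ? 1 : 0, elementwise.
layer {
  name: "binary_code"
  type: "Threshold"
  bottom: "latent_sigmoid"
  top: "binary_code"
  threshold_param { threshold: 0.5 }
}
```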