technicolor-research / subic

TensorFlow implementation of a supervised approach to learn highly compressed image representations

Low mAP on CIFAR10 #2

Open pedro-morgado opened 6 years ago

pedro-morgado commented 6 years ago

I was trying to replicate the results in the paper on CIFAR10, but I'm having a problem deploying the trained model. While training, I can get up to 70% training accuracy. However, during retrieval, the mAP is about 23%, which is much lower than the results reported in the paper. Do you have any idea what could be causing this problem, or how I can go about debugging it? Thanks

himalayajain commented 6 years ago

Hi,

More details of the code and/or info on the training loss, accuracy, and code entropy are needed to identify the problem. With the details you give, two things come to mind:

1) Network. I used 3 conv + 2 FC layers. The first FC layer outputs a 500-dimensional vector, followed by the encoder layer (FC + block-softmax), which outputs an MK-dimensional vector representing M*log_2(K) bits. E.g., for 12 bits it could be M=2 and K=64, so the encoder layer outputs a 128-D vector.
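The block-softmax layout described above can be sketched as follows. This is a minimal NumPy illustration, not the repository's TensorFlow code; the function name and shapes are assumptions for the 12-bit example (M=2, K=64):

```python
import numpy as np

def block_softmax(z, M, K):
    """Apply softmax independently over M blocks of K units each.

    z: (batch, M*K) pre-activations from the encoder FC layer.
    Returns (batch, M*K) where each K-sized block sums to 1.
    """
    z = z.reshape(-1, M, K)
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    p = e / e.sum(axis=-1, keepdims=True)
    return p.reshape(-1, M * K)

# Example matching the comment: 12-bit codes with M=2 blocks of K=64.
M, K = 2, 64
x = np.random.randn(8, M * K)  # batch of 8 encoder pre-activations
p = block_softmax(x, M, K)     # MK = 128-dimensional output per sample
# At test time, the binary code is the argmax index within each block,
# i.e. M * log2(K) = 2 * 6 = 12 bits per image.
codes = p.reshape(8, M, K).argmax(axis=-1)
```

Each block behaves like an independent K-way classifier, which is what makes the per-block entropy terms below well defined.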

2) Entropy losses. You must get low mean entropy (each sample's block activations near one-hot) and high batch entropy (uniform code usage across the batch) for the learned representation to work well at test time.
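A rough sketch of the two entropy measures on block-softmax outputs (NumPy, with names assumed for illustration): mean entropy averages the per-sample block entropies and should be driven low, while batch entropy is the entropy of the batch-averaged block distribution and should be driven high. If both are low, training has collapsed onto a few codes:

```python
import numpy as np

def block_entropies(p, M, K, eps=1e-12):
    """p: (batch, M*K) block-softmax outputs.
    Returns (mean_entropy, batch_entropy), each averaged over the M blocks."""
    b = p.reshape(-1, M, K)
    ent = lambda q: -(q * np.log(q + eps)).sum(axis=-1)  # entropy per block
    mean_entropy = ent(b).mean()                # low  => near one-hot per sample
    batch_entropy = ent(b.mean(axis=0)).mean()  # high => codes used uniformly
    return mean_entropy, batch_entropy

# Degenerate case: every sample picks the same code in each block.
M, K = 2, 64
collapsed = np.zeros((8, M, K))
collapsed[:, :, 0] = 1.0
me, be = block_entropies(collapsed.reshape(8, M * K), M, K)
# me is near 0 (confident), but be is also near 0: the low batch
# entropy signals code collapse, which would hurt retrieval mAP.
```

Monitoring both quantities during training is a quick way to check whether the 23% mAP comes from collapsed codes rather than from the retrieval pipeline.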

Hope this helps.

SikaStar commented 5 years ago

> I was trying to replicate the results in the paper on CIFAR10, but I'm having a problem deploying the trained model. While training, I can get up to 70% training accuracy. However, during retrieval, the mAP is about 23%, which is much lower than the results reported in the paper. Do you have any idea what could be causing this problem, or how I can go about debugging it? Thanks

Hi, pedro! I have come across the same problem. Have you solved it?