Open · 111Moderato opened this issue 7 years ago
It's the coarse-level search. In the MNIST and CIFAR-10 experiments, we evaluate performance using only the binary codes, in order to compare fairly with other hashing approaches.
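For reference, here is a minimal numpy sketch of what ranking with only the binary codes could look like; the 48-bit code length, the 0.5 threshold, and all function/variable names are illustrative assumptions, not the repository's actual evaluation code.

```python
import numpy as np

def binarize(latent_activations, threshold=0.5):
    """Threshold sigmoid activations of the latent layer into 0/1 codes."""
    return (latent_activations > threshold).astype(np.uint8)

def coarse_search(query_code, db_codes):
    """Rank database entries by Hamming distance to the query code."""
    # Count the differing bits for each database entry, nearest first.
    hamming = np.count_nonzero(db_codes != query_code, axis=1)
    return np.argsort(hamming)

# Illustrative shapes only: 48-bit codes, 10 database images.
db_codes = binarize(np.random.rand(10, 48))
query_code = binarize(np.random.rand(48))
ranking = coarse_search(query_code, db_codes)
```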
name: "KevinNet_CIFAR10"
layers {
layer {
name: "data"
type: "data"
source: "cifar10_train_leveldb"
meanfile: "../../data/ilsvrc12/imagenet_mean.binaryproto"
batchsize: 32
cropsize: 227
mirror: true
det_context_pad: 16
det_crop_mode: "warp"
det_fg_threshold: 0.5
det_bg_threshold: 0.5
det_fg_fraction: 0.25
}
top: "data"
top: "label"
}
In train_CIFAR10_48.prototxt, why is cropsize set to 227 when CIFAR-10 images are only 32x32?
And when I used Netscope to view train_CIFAR10_48.prototxt, this error was reported:
Error Encountered
Uncaught TypeError: Cannot read property 'replace' of undefined
Warning
Can't infer network data shapes. Can't infer output shape of the 'undefined' layer of type 'undefined'.
TypeError: Cannot read property 'split' of undefined
Thank you. @kevinlin311tw
We first resize images to 256x256, and then center-crop 227x227 as network input.
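A minimal sketch of that preprocessing, assuming PIL and a square center crop; the function name and layout are illustrative, not the actual data layer in this repository.

```python
import numpy as np
from PIL import Image

def preprocess(path, resize=256, crop=227):
    """Resize the image to resize x resize, then take the centered crop x crop patch."""
    img = Image.open(path).convert('RGB').resize((resize, resize))
    arr = np.asarray(img)
    off = (resize - crop) // 2
    return arr[off:off + crop, off:off + crop, :]  # shape (227, 227, 3), the network input
```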
Do you have an experiment on NUS-WIDE? @kevinlin311tw
Yes. You can take a look at the extension of this work here: https://arxiv.org/abs/1507.00101v2. Our workshop version can be seen as our proposed SSDH with the hyperparameters alpha = 1, beta = 0, gamma = 0.
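For context, a hedged sketch of how those hyperparameters fit together, as I recall the SSDH objective (the exact notation and signs in the paper may differ): a classification term plus two binarization-related terms, so setting beta = gamma = 0 leaves only the classification loss used in this workshop version.

```latex
% Rough structure of the SSDH objective as I recall it (notation approximate):
%   E_1: classification error on the semantic labels
%   E_2: term pushing latent activations toward 0 or 1
%   E_3: term encouraging roughly 50% of the bits to fire (balanced codes)
\min_{W} \; \alpha E_1(W) - \beta E_2(W) + \gamma E_3(W)
```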
Uh.. Is it correct for me to use SigmoidCrossEntropyLoss on a multi-label dataset? @kevinlin311tw
Oh.. I forgot that NUS-WIDE is a multi-label dataset..
Yes, you are right. Since this workshop version is trained with a softmax loss, we cannot directly apply it to a multi-label dataset. In that case, since the label is an n-hot vector, we should use the loss function you mentioned.
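A minimal numpy sketch of that loss with n-hot targets; this is just the math behind a per-class sigmoid cross-entropy, not the repository's training code, and all shapes and names are illustrative.

```python
import numpy as np

def sigmoid_cross_entropy(logits, n_hot_labels):
    """Per-class sigmoid cross-entropy: each label is an independent 0/1 target,
    so an image may belong to several concepts at once."""
    probs = 1.0 / (1.0 + np.exp(-logits))
    eps = 1e-12  # numerical safety for log(0)
    per_class = -(n_hot_labels * np.log(probs + eps)
                  + (1 - n_hot_labels) * np.log(1 - probs + eps))
    return per_class.sum(axis=1).mean()

# Illustrative example: 2 images, 5 concepts, n-hot targets.
logits = np.array([[2.0, -1.0, 0.5, -3.0, 1.0],
                   [-0.5, 1.5, -2.0, 0.0, 2.5]])
labels = np.array([[1, 0, 1, 0, 1],
                   [0, 1, 0, 0, 1]], dtype=float)
loss = sigmoid_cross_entropy(logits, labels)
```

Softmax cross-entropy, by contrast, normalizes across classes and assumes exactly one correct label per image, which is why it does not fit a multi-label dataset like NUS-WIDE.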
@111Moderato I also met this mistake. Did you fix it?
Error Encountered: Uncaught TypeError: Cannot read property 'replace' of undefined
Warning: Can't infer network data shapes. Can't infer output shape of the 'undefined' layer of type 'undefined'. TypeError: Cannot read property 'split' of undefined
I forget whether I ran into that mistake. I'm using another version of Caffe, so I can run it properly after changing the prototxt.
@111Moderato Thanks a lot! However, this error occurs when using Netscope to view train_CIFAR10_48.prototxt. You asked this question, but the author has not answered :)
The Baidu link for the model and image resources is invalid; can anyone share it?
After we execute the command >> run_cifar10, we get MAP = 0.897373.
Is this the result after reranking? I mean the fine-level search. @kevinlin311tw
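For what it's worth, a minimal sketch of the fine-level step as I understand it from the paper (Hamming distance on the binary codes picks a candidate pool, Euclidean distance on deeper real-valued features reranks it); the pool size, the feature layer, and all names are illustrative assumptions.

```python
import numpy as np

def fine_rerank(query_feat, db_feats, candidate_idx):
    """Rerank the coarse candidates by Euclidean distance on real-valued deep features."""
    candidate_idx = np.asarray(candidate_idx)
    dists = np.linalg.norm(db_feats[candidate_idx] - query_feat, axis=1)
    return candidate_idx[np.argsort(dists)]

# Coarse step: take the top-k candidates by Hamming distance on the binary codes
# (see the coarse_search sketch above), then rerank them with, e.g., fc7 features.
# k = 50 is an illustrative choice, not the paper's setting.
# candidates = coarse_search(query_code, db_codes)[:50]
# ranking = fine_rerank(query_fc7, db_fc7, candidates)
```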