ozansener / active_learning_coreset

Source code for ICLR 2018 Paper: Active Learning for Convolutional Neural Networks: A Core-Set Approach
MIT License

How to select upper bound and delta? #4

Closed · jongchyisu closed this issue 5 years ago

jongchyisu commented 5 years ago

Hi, I'm wondering what values you used for the upper bound and delta in your experiments. I only found the upper bound \Chi = 1e-4 * n in the paper. Is this the UB used in the code? And what about delta? Thank you very much!

ozansener commented 5 years ago

The upper bound is the output of the greedy search. You first run the greedy (k-Center-Greedy) solver and then use the radius it achieves as the upper bound. Delta is then computed as (ub + lb) / 2, i.e. the midpoint of the binary search over the radius, as explained in the paper.
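
For concreteness, here is a minimal sketch of that procedure (not the repository's exact code). It assumes a hypothetical `is_feasible(delta)` callback that solves the robust covering feasibility problem (the MIP in the paper) for a given radius:

```python
def binary_search_delta(greedy_radius, is_feasible, tol=1e-4):
    """Binary-search the covering radius delta between lb and ub.

    greedy_radius : radius achieved by the k-Center-Greedy solution,
                    used as the upper bound (greedy is a 2-approximation).
    is_feasible   : hypothetical callable(delta) -> bool that solves the
                    MIP feasibility problem for the given radius.
    """
    lb, ub = greedy_radius / 2.0, greedy_radius  # 2-OPT greedy implies OPT >= ub / 2
    while ub - lb > tol:
        delta = (ub + lb) / 2.0
        if is_feasible(delta):
            ub = delta   # a cover of radius delta exists; try a smaller radius
        else:
            lb = delta   # infeasible; a larger radius is needed
    return ub
```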

jongchyisu commented 5 years ago

Thanks for your reply! Another question: in the paper you mention that the "activations of the final fc layers" are used as the features. I just want to make sure, is this the logits (10-dimensional, as in SVHN) rather than the features from the penultimate layer?

ozansener commented 5 years ago

Sorry, we meant the penultimate layer. It is the layer before the softmax.

jongchyisu commented 5 years ago

So it is the logits (10-dimensional, as in SVHN). Thanks.

ozansener commented 5 years ago

No, it is the penultimate layer, i.e. the layer before the softmax. If you refer to http://torch.ch/blog/2015/07/30/cifar.html, it is the 512-dimensional output that feeds into nn.Linear(512, 10).
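
In other words, the features used for the core-set distances are the 512-dimensional activations entering the final linear layer, not the 10-dimensional logits it produces. A minimal sketch (a hypothetical PyTorch-style model, not the repository's code) of which tensor is meant:

```python
import torch
import torch.nn as nn

class SmallNet(nn.Module):
    """Toy network illustrating penultimate-layer features vs. logits."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.embed = nn.Linear(64, 512)                 # penultimate layer -> 512-dim
        self.classifier = nn.Linear(512, num_classes)   # final layer -> 10-dim logits

    def forward(self, x, return_embedding=False):
        h = self.features(x).flatten(1)
        emb = torch.relu(self.embed(h))   # these 512-dim activations are the core-set features
        if return_embedding:
            return emb
        return self.classifier(emb)       # logits, used for training/prediction only

# Usage: feats = model(images, return_embedding=True)
# The pairwise distances for core-set selection are computed on `feats`.
```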