Mephisto405 / Learning-Loss-for-Active-Learning

Reproducing experimental results of LL4AL [Yoo et al. 2019 CVPR]

What is the "ground truth loss" in your reproduced image? #1

Closed: ghost closed this issue 4 years ago

ghost commented 5 years ago

Thanks for your reproduction.

In your reproduced image, there are four labels: "Reference", "Learn loss", "Ground truth loss", and "Random". I guess "Reference" is from the paper, "Learn loss" is what you reproduced, and "Random" is random sampling. What is the "Ground truth loss"?

Thanks in advance.

Mephisto405 commented 5 years ago

Ah, I'm sorry. It seems I never explained the figure in the README. However, line 198 of main.py has a comment about the ground truth loss.

Since we have the actual ground truth labels, we can compute (rather than predict) the ground truth loss (i.e., the cross-entropy loss) without the loss prediction module. So I wanted to see what happens if I swap the loss prediction module for the true cross-entropy loss in the active learning cycles: I used the cross-entropy loss to measure the uncertainty of each unlabeled sample and then collected the highest-loss data points for the next cycle.
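For concreteness, the selection step looks roughly like this. This is a minimal sketch, not the repo's exact code: it assumes a plain classifier that returns logits, and that the "unlabeled" loader still yields the ground-truth labels (which is only possible in a benchmark setting where all labels are actually known).

```python
import torch
import torch.nn.functional as F

def rank_by_ground_truth_loss(model, unlabeled_loader, budget, device):
    """Score each unlabeled sample by its true cross-entropy loss
    (computable here because the benchmark's labels are available),
    then return indices of the `budget` highest-loss samples."""
    model.eval()
    losses = []
    with torch.no_grad():
        for inputs, labels in unlabeled_loader:
            inputs, labels = inputs.to(device), labels.to(device)
            logits = model(inputs)
            # reduction="none" keeps one loss value per sample
            losses.append(F.cross_entropy(logits, labels, reduction="none"))
    losses = torch.cat(losses)
    # Treat the highest-loss samples as the most "uncertain"
    return torch.argsort(losses, descending=True)[:budget]
```

In the actual active learning loop, these indices would be moved from the unlabeled pool into the labeled set before the next training cycle, in place of the indices chosen by the loss prediction module.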

Strangely, the result is worse than that of the loss prediction module. So the active learning process improves, but perhaps not because the loss prediction module predicts the loss well; there may be other reasons (assuming my experiment is not flawed).

Any other comments or further discussion are welcome.