ghost closed this issue 4 years ago
Ah, I'm sorry. It seems I never explained the figure in the README. However, there is a comment about the ground truth loss on line 198 of main.py.
Since we have the actual ground truth labels, we can compute (not predict) the ground truth loss (i.e., the cross-entropy loss) without using the loss prediction module. So I wanted to see what happens if I swap the loss prediction module for the cross-entropy loss in the active learning cycles. Therefore, I used the cross-entropy loss to measure the uncertainty of each unlabeled sample and then collected the data points for the next cycle.
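To make that swap concrete, here is a minimal Python sketch of what I mean, assuming we are allowed to peek at the true labels of the "unlabeled" pool (only possible in a what-if experiment like this, never in real active learning; the function names are mine, not from main.py):

```python
import math

def cross_entropy(logits, label):
    # Numerically stable softmax followed by the negative
    # log-likelihood of the true label.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return -math.log(exps[label] / total)

def select_by_ground_truth_loss(pool, k):
    # pool: list of (logits, true_label) for the "unlabeled" samples.
    # Rank by the actual cross-entropy loss instead of the predicted
    # loss, then take the k highest-loss samples as the most uncertain.
    losses = [(cross_entropy(lg, y), i) for i, (lg, y) in enumerate(pool)]
    losses.sort(reverse=True)
    return [i for _, i in losses[:k]]

pool = [([2.0, 0.1], 0),   # confidently correct -> low loss
        ([0.1, 2.0], 0),   # confidently wrong  -> high loss
        ([1.0, 1.0], 1)]   # undecided          -> medium loss
print(select_by_ground_truth_loss(pool, 2))  # → [1, 2]
```

The selected indices are then what gets labeled for the next cycle, exactly where the loss prediction module's scores would otherwise have been used.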
Strangely, the result is worse than that of the loss prediction module. The active learning process does improve, but perhaps not because the loss prediction module predicts the loss well; there might be other reasons (if my experiment is not wrong).
Any other comments or further discussions are welcome.
Thanks for your reproduction.
In your reproduced figure there are four labels: "Reference", "Learn loss", "Ground truth loss", and "Random". I guess "Reference" is from the paper, "Learn loss" is what you reproduced, and "Random" is random sampling. What is "Ground truth loss"?
Thanks in advance.