According to the original paper:
If we can predict the loss of a data point, it becomes possible to select data points that are expected to have high losses. The selected data points would be more informative to the current model.
This implies we should select the points with the *maximum* predicted uncertainties, i.e. `arg[-n:]`. The current implementation instead returns the minimum, `arg[:n]`, and so selects the *least* informative points.
https://github.com/cure-lab/deep-active-learning/blob/57aaaf3d3b166ac8919ddd774556aac3ec2676e3/query_strategies/learning_loss_for_al.py#LL281C1-L281C39
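A minimal sketch of the difference, assuming `uncertainties` holds the predicted losses for the unlabeled pool (the variable names here are illustrative, not the repo's actual identifiers):

```python
import numpy as np

# Hypothetical predicted losses for a pool of 5 unlabeled points.
uncertainties = np.array([0.2, 0.9, 0.1, 0.7, 0.5])
n = 2

# Indices sorted by ascending predicted loss.
order = np.argsort(uncertainties)

# Current behaviour: order[:n] picks the n *smallest* predicted
# losses, i.e. the least informative points.
least_informative = order[:n]

# Suggested fix: order[-n:] picks the n *largest* predicted
# losses, i.e. the points the paper says are most informative.
most_informative = order[-n:]

print(least_informative)  # indices of the two lowest predicted losses
print(most_informative)   # indices of the two highest predicted losses
```

With the example values above, `order[:n]` selects the points with losses 0.1 and 0.2, while `order[-n:]` selects those with losses 0.7 and 0.9, matching the paper's intent.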