arthur-thuy opened this issue 2 months ago
That's an interesting idea. Would you have some references we could look into?
It's similar to semi-supervised learning, which we have investigated but not really maintained. It became a bit unwieldy in the codebase, but I think we can give it a second try :)
This is a general survey of self-training: _Self-Training: A Survey_
The survey focuses on more traditional approaches to semi-supervised learning for the acquisition function, such as data clustering and density estimation. It does not discuss uncertainty estimation with approximate Bayesian techniques but is useful to see how self-training aligns with active learning.
Recent approaches use self-training and epistemic uncertainty for unsupervised domain adaptation (UDA). These UDA methods attempt to reduce the domain shift by adding examples from the shifted target domain to the training set, using pseudo-labels because ground truth labels are often not available:
- _Explore Epistemic Uncertainty in Domain Adaptive Semantic Segmentation_
- _Feature Alignment by Uncertainty and Self-Training for Source-Free Unsupervised Domain Adaptation_ (or arXiv here)
- _Generative Self-training for Cross-Domain Unsupervised Tagged-to-Cine MRI Synthesis_ (or arXiv here)
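To make the connection between epistemic uncertainty and pseudo-labeling concrete, here is a minimal sketch of how confident pool examples could be selected from MC-dropout predictions. This is illustrative only, not code from the papers above or from the library; the function name, threshold value, and array layout are all assumptions.

```python
# Hypothetical sketch: pseudo-label only the pool examples whose
# BALD-style epistemic uncertainty (mutual information) is low.
import numpy as np

def pseudo_label(mc_probs: np.ndarray, threshold: float = 0.1):
    """mc_probs: (n_samples, n_mc_passes, n_classes) softmax outputs
    from stochastic forward passes (e.g. MC dropout)."""
    mean_probs = mc_probs.mean(axis=1)  # predictive mean over MC passes
    # Entropy of the mean prediction (total uncertainty).
    entropy = -(mean_probs * np.log(mean_probs + 1e-12)).sum(axis=1)
    # Mean entropy of each pass (aleatoric part).
    expected_entropy = -(mc_probs * np.log(mc_probs + 1e-12)).sum(axis=2).mean(axis=1)
    # Their difference is the epistemic (model disagreement) term.
    mutual_info = entropy - expected_entropy
    confident = mutual_info < threshold
    return np.flatnonzero(confident), mean_probs.argmax(axis=1)[confident]
```

Examples where the MC passes agree get mutual information near zero and are pseudo-labeled with the predicted class; examples where the passes disagree are left for the oracle.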
I think the implementation and maintenance burden would be quite limited, as self-training is so similar to active learning (less work than the pi-model attempted earlier).
**Describe the solution you'd like**

It would be nice to have an option for self-training. Self-training is related to active learning but gets labels for queries based on its own predictions instead of asking an oracle for the ground truth label. As such, ground truth labels are only required for the initial labeled set.
Intuitively, it would make sense to add an argument to the `ActiveLearningLoop` object.
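As a rough illustration of what one step of such a combined loop could do, the helper below splits the unlabeled pool into oracle queries (the most uncertain examples, as in active learning) and pseudo-labeled examples (the most confident ones, as in self-training). The function name, the `pseudo_threshold` argument, and the plain max-probability confidence measure are all hypothetical, not the library's actual API.

```python
# Hypothetical sketch of one combined active-learning / self-training step.
import numpy as np

def self_training_step(pool_probs: np.ndarray, query_size: int = 2,
                       pseudo_threshold: float = 0.95):
    """pool_probs: (n_pool, n_classes) predicted probabilities on the pool.

    Returns indices to send to the oracle, indices to pseudo-label,
    and the pseudo-labels themselves."""
    confidence = pool_probs.max(axis=1)
    # Self-training part: trust the model on high-confidence examples.
    pseudo_idx = np.flatnonzero(confidence >= pseudo_threshold)
    pseudo_labels = pool_probs.argmax(axis=1)[pseudo_idx]
    # Active-learning part: query the oracle on the least confident rest.
    remaining = np.flatnonzero(confidence < pseudo_threshold)
    query_idx = remaining[np.argsort(confidence[remaining])[:query_size]]
    return query_idx, pseudo_idx, pseudo_labels
```

A loop argument could then simply enable this pseudo-labeling branch (e.g. a threshold of `None` meaning pure active learning), which is why the maintenance overhead should indeed stay small.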