UKPLab / sentence-transformers

State-of-the-Art Text Embeddings
https://www.sbert.net
Apache License 2.0

Difference between `SentenceLabelDataset` and `GroupByLabelBatchSampler`? #2920

Open vibhas-singh opened 3 months ago

vibhas-singh commented 3 months ago

Hi @tomaarsen, first of all - kudos to you for maintaining such an awesome and pragmatic library.

I am facing some difficulty using the GROUP_BY_LABEL batch sampler in v3.0 and want to highlight the issues to check if there is any way to mitigate them.

I went through the issues and found this: https://github.com/UKPLab/sentence-transformers/issues/2698#issuecomment-2144534374 You mention there that the idea is to replace SentenceLabelDataset with GroupByLabelBatchSampler, but I think there are drastic differences between the two, and GroupByLabelBatchSampler as a replacement does not retain the functionality of SentenceLabelDataset.

Here is a detailed example to explain the differences:

Let's take a simple example with a list of integers representing classes, and we'll use it to illustrate how the two approaches handle homogeneity in batch construction.

Example Data:
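For instance, suppose we have the following (the exact numbers here are hypothetical, just to make the walkthrough concrete):

```python
# 16 samples; the integer at position i is the class label of sample i
labels = [0, 0, 0, 0, 0, 0, 0, 0,  # class 0: 8 samples
          1, 1, 1, 1,              # class 1: 4 samples
          2, 2, 2,                 # class 2: 3 samples
          3]                       # class 3: 1 sample
batch_size = 4
```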

GroupByLabelBatchSampler Behavior:

Step 1: Initialization
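At initialization, the sampler groups sample indices by label, drops labels with fewer than 2 samples, and truncates each group to an even length. A simplified sketch of that logic (the real implementation lives in sentence_transformers/sampler.py and may differ in details):

```python
from collections import defaultdict

groups = defaultdict(list)
for sample_idx, label in enumerate(labels):
    groups[label].append(sample_idx)

# Drop labels with < 2 samples; truncate each group to an even length
groups = {
    label: indices[: len(indices) // 2 * 2]
    for label, indices in groups.items()
    if len(indices) > 1
}
# groups == {0: [0, 1, ..., 7], 1: [8, 9, 10, 11], 2: [12, 13]}
# class 2 loses one sample; class 3 (a single sample) is dropped entirely
```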

Step 2: Batch Construction
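Batching then walks the labels in shuffled order and pours each label's entire index list into a running buffer, yielding full batches as it goes. Simplified sketch:

```python
import random

def group_by_label_batches(groups, batch_size):
    partial_batch = []
    label_order = list(groups)
    random.shuffle(label_order)
    for label in label_order:
        # All of this label's samples enter the buffer back-to-back
        partial_batch.extend(groups[label])
        while len(partial_batch) >= batch_size:
            yield partial_batch[:batch_size]
            partial_batch = partial_batch[batch_size:]

# With label order [0, 1, 2] this yields:
#   [0, 1, 2, 3]    -> 100% class 0
#   [4, 5, 6, 7]    -> 100% class 0
#   [8, 9, 10, 11]  -> 100% class 1
# ([12, 13] is left over as an incomplete batch)
```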

SentenceLabelDataset Behavior:

Step 1: Initialization
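SentenceLabelDataset (v2) also groups examples by label at construction time, but it keeps every label that has at least samples_per_label examples and does not truncate groups to an even length. A simplified sketch, as I understand the v2 source:

```python
from collections import defaultdict

samples_per_label = 2  # the v2 default

label2indices = defaultdict(list)
for sample_idx, label in enumerate(labels):
    label2indices[label].append(sample_idx)

# Keep only labels with at least samples_per_label examples
label2indices = {
    label: indices
    for label, indices in label2indices.items()
    if len(indices) >= samples_per_label
}
# label2indices == {0: [0, ..., 7], 1: [8, ..., 11], 2: [12, 13, 14]}
# class 2 keeps all 3 samples; class 3 is still dropped
```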

Step 2: Batch Construction
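Iteration then cycles over the labels round-robin, drawing samples_per_label fresh examples from each label in turn, and the DataLoader chunks that stream into batches. Simplified sketch:

```python
import random

def round_robin_samples(label2indices, samples_per_label):
    remaining = {label: indices[:] for label, indices in label2indices.items()}
    label_order = list(remaining)
    random.shuffle(label_order)
    # Keep cycling while at least one label can still yield a full group
    while any(len(v) >= samples_per_label for v in remaining.values()):
        for label in label_order:
            if len(remaining[label]) >= samples_per_label:
                for _ in range(samples_per_label):
                    yield remaining[label].pop(random.randrange(len(remaining[label])))

# With batch_size = 4 and label order [2, 0, 1], the batches look like:
#   [c2, c2, c0, c0]  -> 2 labels per batch, at most 2 samples per label
#   [c1, c1, c0, c0]
#   [c1, c1, c0, c0]
#   ...
```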

Comparison of Homogeneity:
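On this data, GroupByLabelBatchSampler yields batches drawn from a single class at a time until that class's pool runs out, whereas SentenceLabelDataset caps every class at samples_per_label examples per batch, so each batch mixes roughly batch_size / samples_per_label different classes. The former maximizes within-batch positives; the latter preserves in-batch negatives.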

TL;DR:

I am trying to fine-tune sentence transformers models using the dataset with this label distribution:

Class 1: 5000 Samples
Class 2: 5000 Samples
Class 3: 3000 Samples
Classes 5 to 50: Fewer than 50 samples each

With the new GroupByLabelBatchSampler, the batching logic yields mostly homogeneous batches, and I don't observe much improvement after fine-tuning. IMO this type of data could easily have been used with SentenceLabelDataset, as it ensures there are at most N samples from each label in a batch. Intuitively, ST models should benefit from in-batch negatives and more heterogeneous batches.
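Rough arithmetic to make this concrete (assuming batch_size = 32):

```python
batch_size = 32
# Class 1 alone fills ~156 consecutive, fully homogeneous batches:
print(5000 // batch_size)                  # 156
# The three large classes account for ~406 homogeneous batches in total:
print((5000 + 5000 + 3000) // batch_size)  # 406
```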

Can you help me verify whether my understanding is correct, and if so, is there any way to opt for the older logic?

tomaarsen commented 3 months ago

Hello!

You're very right in your analysis: GroupByLabelBatchSampler was designed to replace SentenceLabelDataset, and the former produces homogeneous batches whereas the latter mostly does not. For reference, here is the docstring for the new GroupByLabelBatchSampler: https://github.com/UKPLab/sentence-transformers/blob/0a32ec8445ef46b2b5d4f81af4931e293d42623f/sentence_transformers/sampler.py#L40-L43

This sampler is meant for the Batch...TripletLoss classes, which require that each batch contains at least 2 examples per label. These losses compare all samples with the same label within the same batch, benefiting from 1) larger batches and 2) more samples with the same label in each batch. At least, that is my understanding. As a result, in theory a more homogeneous batch should result in a better training signal for these losses. However, I admit that I haven't tested this in practice, and I may be wrong.
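For instance, with BatchHardTripletLoss as one concrete member of that family:

```python
from sentence_transformers import SentenceTransformer, losses

model = SentenceTransformer("all-MiniLM-L6-v2")
# For each anchor in a batch, BatchHardTripletLoss picks the hardest
# positive (same label) and hardest negative (different label) from
# that same batch, so every label must occur at least twice per batch
loss = losses.BatchHardTripletLoss(model=model)
```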

> Is there any way to opt for the older logic?

Yes, and no. You can override the Trainer's get_batch_sampler: https://github.com/UKPLab/sentence-transformers/blob/0a32ec8445ef46b2b5d4f81af4931e293d42623f/sentence_transformers/trainer.py#L459-L466

And replace it with a function that immediately returns a custom Batch Sampler which has your desired behaviour. So yes: you can use the older logic, but no: you'd have to write it yourself.
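For example, something along these lines. This is an untested sketch: it assumes a Hugging Face Dataset with a "label" column, and that get_batch_sampler has the signature from the linked v3.0 code; the RoundRobinLabelBatchSampler and HeterogeneousTrainer names are just placeholders.

```python
import random
from collections import defaultdict

from torch.utils.data import BatchSampler
from sentence_transformers import SentenceTransformerTrainer


class RoundRobinLabelBatchSampler(BatchSampler):
    """Mimics SentenceLabelDataset: at most `samples_per_label`
    samples of any one label end up in the same batch."""

    def __init__(self, dataset, batch_size, drop_last, samples_per_label=2):
        self.batch_size = batch_size
        self.drop_last = drop_last
        self.samples_per_label = samples_per_label
        groups = defaultdict(list)
        for idx, label in enumerate(dataset["label"]):
            groups[label].append(idx)
        # Keep only labels that can contribute a full group
        self.groups = {
            label: idxs for label, idxs in groups.items() if len(idxs) >= samples_per_label
        }

    def __iter__(self):
        remaining = {label: idxs[:] for label, idxs in self.groups.items()}
        for idxs in remaining.values():
            random.shuffle(idxs)
        labels = list(remaining)
        batch = []
        while True:
            random.shuffle(labels)
            progressed = False
            for label in labels:
                if len(remaining[label]) >= self.samples_per_label:
                    # Draw samples_per_label examples from this label
                    batch.extend(remaining[label].pop() for _ in range(self.samples_per_label))
                    progressed = True
                    while len(batch) >= self.batch_size:
                        yield batch[: self.batch_size]
                        batch = batch[self.batch_size :]
            if not progressed:
                break
        if batch and not self.drop_last:
            yield batch

    def __len__(self):
        # Approximate: a few trailing samples per label may be skipped
        return sum(len(idxs) for idxs in self.groups.values()) // self.batch_size


class HeterogeneousTrainer(SentenceTransformerTrainer):
    def get_batch_sampler(self, dataset, batch_size, drop_last, valid_label_columns=None, generator=None):
        # Ignore the default sampler choice and always return ours
        return RoundRobinLabelBatchSampler(dataset, batch_size, drop_last)
```

You would then train with HeterogeneousTrainer exactly as you would with SentenceTransformerTrainer.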

Hope this helps a bit.