UKPLab / sentence-transformers

Multilingual Sentence & Image Embeddings with BERT
https://www.SBERT.net
Apache License 2.0

Adding Improved Contrastive Loss #2774

Open imrankh46 opened 1 week ago

imrankh46 commented 1 week ago

Hi @tomaarsen. I just created a new issue. I implemented the custom Improved Contrastive Loss from this paper: https://arxiv.org/abs/2308.03281

So my question is: is this loss already implemented in the Sentence Transformers library or not?

tomaarsen commented 1 week ago

Hello!

This loss very closely resembles MultipleNegativesSymmetricRankingLoss. For reference, MultipleNegativesRankingLoss is equivalent to InfoNCE, and MultipleNegativesSymmetricRankingLoss is like MultipleNegativesRankingLoss but bi-directional: it trains with in-batch queries as well as the normal in-batch documents.

My only hesitation is the formulation of the paper's contrastive loss function, which (in the notation used later in this thread) is roughly:

$$\mathcal{L} = \frac{1}{n}\sum_i -\log \frac{e^{s(q_i,d_i)/\tau}}{Z_i}, \qquad Z_i = \sum_j{e^{s(q_i,d_j)/\tau}} + \sum_j{e^{s(q_j,d_i)/\tau}} + \sum_{j \neq i}{e^{s(q_i,q_j)/\tau}} + \sum_{j \neq i}{e^{s(d_i,d_j)/\tau}}$$

The MultipleNegativesSymmetricRankingLoss implementation uses two separate Cross Entropy losses (one for "given query, can you find the positive document between all documents" and one for "given document, can you find the positive query between all queries") which it then averages. The function from the paper seems to only use one Cross Entropy call, so I'm not 100% sure if they're identical, but I think they likely are.
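
To make the comparison concrete, here is a rough sketch of that bi-directional idea (simplified, not the library's actual implementation; the function name and the scale of 20 are illustrative stand-ins for the defaults):

import torch
import torch.nn.functional as F

def symmetric_in_batch_loss(query_emb: torch.Tensor, doc_emb: torch.Tensor, scale: float = 20.0) -> torch.Tensor:
    # Scaled cosine similarity matrix: scores[i, j] = scale * cos(query_i, doc_j)
    scores = F.normalize(query_emb, dim=-1) @ F.normalize(doc_emb, dim=-1).T * scale
    labels = torch.arange(scores.size(0), device=scores.device)
    # "Given query_i, find document_i among all in-batch documents" ...
    query_to_doc = F.cross_entropy(scores, labels)
    # ... and "given document_i, find query_i among all in-batch queries".
    doc_to_query = F.cross_entropy(scores.T, labels)
    return (query_to_doc + doc_to_query) / 2

print(symmetric_in_batch_loss(torch.randn(8, 768), torch.randn(8, 768)))  # toy usage with random embeddings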

imrankh46 commented 1 week ago

@tomaarsen thanks for the clarification. What if I would like to add a custom loss? Can I do that, and if so, how?

tomaarsen commented 1 week ago

Yes, you can! There is a list with requirements for custom loss functions here: https://sbert.net/docs/sentence_transformer/loss_overview.html#custom-loss-functions

You can look at the losses from here and here as inspiration.
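
For illustration, a toy loss that follows that interface could look like this (the class name and the trivial objective are made up; it just shows the expected shape of forward() and get_config_dict()):

from typing import Any, Dict, Iterable

import torch
from torch import nn
from sentence_transformers import SentenceTransformer

class ToyDistanceLoss(nn.Module):
    def __init__(self, model: SentenceTransformer):
        super().__init__()
        self.model = model

    def forward(self, sentence_features: Iterable[Dict[str, torch.Tensor]], labels: torch.Tensor = None) -> torch.Tensor:
        # One dict of tokenized features per dataset column (e.g. query and answer);
        # the wrapped model turns each into a "sentence_embedding" tensor.
        embeddings = [self.model(features)["sentence_embedding"] for features in sentence_features]
        # Toy objective: pull paired embeddings together via mean squared distance.
        return (embeddings[0] - embeddings[1]).pow(2).sum(dim=-1).mean()

    def get_config_dict(self) -> Dict[str, Any]:
        # Optional: reports the loss configuration, e.g. for generated model cards.
        return {}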

imrankh46 commented 1 week ago

It helps a lot. Thank you, Tom :)

imrankh46 commented 1 week ago

@tomaarsen Hello Tom! I just implemented the custom loss, but after running trainer.train() it shows a training loss of 0.000. Can you review the code? Here it is:

from typing import Any, Dict, Iterable
import torch
from torch import nn
from sentence_transformers import SentenceTransformer, util

class ImprovedContrastiveLoss(nn.Module):
    def __init__(self, model: SentenceTransformer, temperature: float = 0.01):
        super(ImprovedContrastiveLoss, self).__init__()
        self.model = model
        self.temperature = temperature

    def forward(self, sentence_features: Iterable[Dict[str, torch.Tensor]], labels: torch.Tensor = None) -> torch.Tensor:
        # Get the embeddings for each sentence in the batch
        embeddings = [self.model(sentence_feature)['sentence_embedding'] for sentence_feature in sentence_features]
        query_embeddings = embeddings[0]
        doc_embeddings = embeddings[1]

        # Compute similarity scores
        similarity_q_d = util.cos_sim(query_embeddings, doc_embeddings)
        similarity_q_q = util.cos_sim(query_embeddings, query_embeddings)
        similarity_d_d = util.cos_sim(doc_embeddings, doc_embeddings)

        # Compute the partition function
        exp_sim_q_d = torch.exp(similarity_q_d / self.temperature)
        exp_sim_q_q = torch.exp(similarity_q_q / self.temperature)
        exp_sim_d_d = torch.exp(similarity_d_d / self.temperature)

        # Ensure the diagonal is not considered in negative samples
        mask = torch.eye(similarity_q_d.size(0), device=similarity_q_d.device).bool()
        exp_sim_q_q = exp_sim_q_q.masked_fill(mask, 0)
        exp_sim_d_d = exp_sim_d_d.masked_fill(mask, 0)

        partition_function = exp_sim_q_d.sum(dim=1) + exp_sim_q_q.sum(dim=1) + exp_sim_d_d.sum(dim=1)

        # Compute the loss
        loss = -torch.log(exp_sim_q_d.diag() / partition_function).mean()
        return loss

    def get_config_dict(self) -> Dict[str, Any]:
        return {"temperature": self.temperature}

inner_loss_function = ImprovedContrastiveLoss(model)

imrankh46 commented 1 week ago

@tomaarsen kindly review the custom loss code too. 🤗

tomaarsen commented 1 week ago

Your code looks pretty solid, but I think you're missing one thing:

partition_function = exp_sim_q_d.sum(dim=1) + exp_sim_q_q.sum(dim=1) + exp_sim_d_d.sum(dim=1)

I believe this is only 3 of the 4 terms of the partition function: exp_sim_q_d.sum(dim=1) covers $$\sum_j{e^{s(q_i,d_j)/\tau}}$$, exp_sim_q_q.sum(dim=1) covers $$\sum_{j \neq i}{e^{s(q_i,q_j)/\tau}}$$, and exp_sim_d_d.sum(dim=1) covers $$\sum_{j \neq i}{e^{s(d_i,d_j)/\tau}}$$.

We're still missing $$\sum_j{e^{s(q_j,d_i)/\tau}}$$, which I think is equivalent to exp_sim_q_d.sum(dim=0).

So:

partition_function = exp_sim_q_d.sum(dim=0) + exp_sim_q_d.sum(dim=1) + exp_sim_q_q.sum(dim=1) + exp_sim_d_d.sum(dim=0)
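
As a quick toy-sized sanity check of that dim=0 claim (random embeddings, purely illustrative):

import torch
from sentence_transformers import util

torch.manual_seed(0)
queries, docs = torch.randn(4, 8), torch.randn(4, 8)
sim_q_d = util.cos_sim(queries, docs)  # sim_q_d[i, j] = s(q_i, d_j)

i = 2
manual = sum(util.cos_sim(queries[j], docs[i]).item() for j in range(4))  # sum_j s(q_j, d_i)
print(torch.isclose(sim_q_d.sum(dim=0)[i], torch.tensor(manual)))  # tensor(True)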

That said, the real issue is torch.exp(similarity / 0.01): with similarity between -1 and 1, we end up with values from torch.exp(-100) to torch.exp(100):

>>> torch.tensor(100).exp()
tensor(inf)
>>> torch.tensor(-100).exp()
tensor(3.7835e-44)

The first one is the big issue: you're getting an overflow to inf. If you set the temperature to 1, you can see that the loss is no longer 0.0.

In short, you have to implement some clever tricks to avoid the overflow, e.g. see: https://gregorygundersen.com/blog/2020/02/09/log-sum-exp/
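
As a sketch of what that could look like here (mirroring the partition function above; this is illustrative, not a drop-in library loss), everything can be kept in log space with torch.logsumexp, which applies the max-subtraction trick internally:

import torch
from sentence_transformers import util

def improved_contrastive_loss_stable(query_embeddings, doc_embeddings, temperature: float = 0.01) -> torch.Tensor:
    sim_q_d = util.cos_sim(query_embeddings, doc_embeddings) / temperature
    sim_q_q = util.cos_sim(query_embeddings, query_embeddings) / temperature
    sim_d_d = util.cos_sim(doc_embeddings, doc_embeddings) / temperature

    # Exclude self-similarities from the query-query and doc-doc negatives by setting
    # them to -inf, so they contribute exp(-inf) = 0 inside the logsumexp.
    mask = torch.eye(sim_q_d.size(0), dtype=torch.bool, device=sim_q_d.device)
    sim_q_q = sim_q_q.masked_fill(mask, float("-inf"))
    sim_d_d = sim_d_d.masked_fill(mask, float("-inf"))

    # log Z_i over {s(q_i, d_j)}, {s(q_j, d_i)}, {s(q_i, q_j), j != i}, {s(d_i, d_j), j != i}
    logits = torch.cat([sim_q_d, sim_q_d.T, sim_q_q, sim_d_d], dim=1)
    log_partition = torch.logsumexp(logits, dim=1)

    # -log(exp(s(q_i, d_i)/tau) / Z_i) = log Z_i - s(q_i, d_i)/tau
    return (log_partition - sim_q_d.diag()).mean()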

tomaarsen commented 1 week ago

I went and did the math, and it turns out that because we divide exp_sim_q_d.diag() by the partition function, we can subtract some constant from s(q, d) / tau both above and below the division and get equivalent results. So, rather than e.g. $$\sum_j{e^{s(q_i,d_j)/\tau}}$$ we do $$\sum_j{e^{s(q_i,d_j)/\tau - c}}$$

We can set c = 1 / tau, so that s(q_i, d_j) / tau - c ranges between -2 / tau (i.e. -200 for tau = 0.01) and 0, rather than between -1 / tau (-100) and 1 / tau (100). This prevents overflow, because the highest value is then exp(0) = 1. The only remaining "issue" is that you'll get underflow instead: a cosine similarity of -1 now results in exp(-200) with the default temperature, which is about 1.4e-87 (and underflows to 0.0 in torch's default float32).
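
A tiny numerical illustration of that shift invariance (made-up numbers):

import torch

scaled_sims = torch.tensor([3.0, 1.0, -2.0, 0.5])  # pretend these are s(., .) / tau for one query
c = 3.0

# Subtracting a constant from every scaled similarity leaves the ratio unchanged,
# so the loss value is identical while the exponentials stay in a safe range.
ratio = torch.exp(scaled_sims[0]) / torch.exp(scaled_sims).sum()
shifted = torch.exp(scaled_sims[0] - c) / torch.exp(scaled_sims - c).sum()
print(torch.isclose(ratio, shifted))  # tensor(True)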

The final class is then:

class ImprovedContrastiveLoss(nn.Module):
    def __init__(self, model: SentenceTransformer, temperature: float = 0.01):
        super(ImprovedContrastiveLoss, self).__init__()
        self.model = model
        self.temperature = temperature

    def forward(self, sentence_features: Iterable[Dict[str, torch.Tensor]], labels: torch.Tensor = None) -> torch.Tensor:
        # Get the embeddings for each sentence in the batch
        embeddings = [self.model(sentence_feature)['sentence_embedding'] for sentence_feature in sentence_features]
        query_embeddings = embeddings[0]
        doc_embeddings = embeddings[1]

        # Compute similarity scores
        similarity_q_d = util.cos_sim(query_embeddings, doc_embeddings)
        similarity_q_q = util.cos_sim(query_embeddings, query_embeddings)
        similarity_d_d = util.cos_sim(doc_embeddings, doc_embeddings)

        # Move the similarity range from [-1, 1] to [-2, 0] to avoid overflow
        similarity_q_d = similarity_q_d - 1
        similarity_q_q = similarity_q_q - 1
        similarity_d_d = similarity_d_d - 1

        # Compute the partition function
        exp_sim_q_d = torch.exp(similarity_q_d / self.temperature)
        exp_sim_q_q = torch.exp(similarity_q_q / self.temperature)
        exp_sim_d_d = torch.exp(similarity_d_d / self.temperature)

        # Ensure the diagonal is not considered in negative samples
        mask = torch.eye(similarity_q_d.size(0), device=similarity_q_d.device).bool()
        exp_sim_q_q = exp_sim_q_q.masked_fill(mask, 0)
        exp_sim_d_d = exp_sim_d_d.masked_fill(mask, 0)

        partition_function = exp_sim_q_d.sum(dim=1) + exp_sim_q_d.sum(dim=0) + exp_sim_q_q.sum(dim=1) + exp_sim_d_d.sum(dim=0)

        # Compute the loss
        loss = -torch.log(exp_sim_q_d.diag() / partition_function).mean()
        return loss

    def get_config_dict(self) -> Dict[str, Any]:
        return {"temperature": self.temperature}

I'll run some tests to see how this performs.

imrankh46 commented 1 week ago

@tomaarsen So do I need to solve the underflow issue too? The blog post that you shared is really great. I need more blogs like that, covering both the math and the code.

tomaarsen commented 1 week ago

So it's looking like the ICL (ImprovedContrastiveLoss) isn't a notable improvement, at least for my example training script with natural-questions on mpnet-base:

import random
import logging
from datasets import load_dataset, Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    SentenceTransformerModelCardData,
)
from typing import Any, Dict, Iterable
import torch
from torch import nn
from sentence_transformers.losses import MultipleNegativesRankingLoss, MultipleNegativesSymmetricRankingLoss
from sentence_transformers import util
from sentence_transformers.training_args import BatchSamplers
from sentence_transformers.evaluation import InformationRetrievalEvaluator

logging.basicConfig(
    format="%(asctime)s - %(message)s", datefmt="%Y-%m-%d %H:%M:%S", level=logging.INFO
)

# 1. Load a model to finetune with 2. (Optional) model card data
model = SentenceTransformer(
    "microsoft/mpnet-base",
    model_card_data=SentenceTransformerModelCardData(
        language="en",
        license="apache-2.0",
        model_name="MPNet base trained on Natural Questions pairs",
    ),
)
model_name = "mpnet-base-natural-questions-icl"

# 3. Load a dataset to finetune on
dataset = load_dataset("sentence-transformers/natural-questions", split="train")
dataset = dataset.add_column("id", range(len(dataset)))
train_dataset: Dataset = dataset.select(range(90_000))
eval_dataset: Dataset = dataset.select(range(90_000, len(dataset)))

# 4. Define a loss function
class ImprovedContrastiveLoss(nn.Module):
    def __init__(self, model: SentenceTransformer, temperature: float = 0.01):
        super(ImprovedContrastiveLoss, self).__init__()
        self.model = model
        self.temperature = temperature

    def forward(self, sentence_features: Iterable[Dict[str, torch.Tensor]], labels: torch.Tensor = None) -> torch.Tensor:
        # Get the embeddings for each sentence in the batch
        embeddings = [self.model(sentence_feature)['sentence_embedding'] for sentence_feature in sentence_features]
        query_embeddings = embeddings[0]
        doc_embeddings = embeddings[1]

        # Compute similarity scores
        similarity_q_d = util.cos_sim(query_embeddings, doc_embeddings)
        similarity_q_q = util.cos_sim(query_embeddings, query_embeddings)
        similarity_d_d = util.cos_sim(doc_embeddings, doc_embeddings)

        # Move the similarity range from [-1, 1] to [-2, 0] to avoid overflow
        similarity_q_d = similarity_q_d - 1
        similarity_q_q = similarity_q_q - 1
        similarity_d_d = similarity_d_d - 1

        # Compute the partition function
        exp_sim_q_d = torch.exp(similarity_q_d / self.temperature)
        exp_sim_q_q = torch.exp(similarity_q_q / self.temperature)
        exp_sim_d_d = torch.exp(similarity_d_d / self.temperature)

        # Ensure the diagonal is not considered in negative samples
        mask = torch.eye(similarity_q_d.size(0), device=similarity_q_d.device).bool()
        exp_sim_q_q = exp_sim_q_q.masked_fill(mask, 0)
        exp_sim_d_d = exp_sim_d_d.masked_fill(mask, 0)

        partition_function = exp_sim_q_d.sum(dim=1) + exp_sim_q_d.sum(dim=0) + exp_sim_q_q.sum(dim=1) + exp_sim_d_d.sum(dim=0)

        # Compute the loss
        loss = -torch.log(exp_sim_q_d.diag() / partition_function).mean()
        return loss

    def get_config_dict(self) -> Dict[str, Any]:
        return {"temperature": self.temperature}

loss = ImprovedContrastiveLoss(model)
# loss = MultipleNegativesSymmetricRankingLoss(model)

# 5. (Optional) Specify training arguments
args = SentenceTransformerTrainingArguments(
    # Required parameter:
    output_dir=f"models/{model_name}",
    # Optional training parameters:
    num_train_epochs=1,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    learning_rate=2e-5,
    warmup_ratio=0.1,
    fp16=False,  # Set to False if you get an error that your GPU can't run on FP16
    bf16=True,  # Set to True if you have a GPU that supports BF16
    batch_sampler=BatchSamplers.NO_DUPLICATES,  # MultipleNegativesRankingLoss benefits from no duplicate samples in a batch
    # Optional tracking/debugging parameters:
    eval_strategy="steps",
    eval_steps=100,
    save_strategy="steps",
    save_steps=100,
    save_total_limit=2,
    logging_steps=100,
    logging_first_step=True,
    run_name=model_name,  # Will be used in W&B if `wandb` is installed
)

# 6. (Optional) Create an evaluator & evaluate the base model
# The full corpus, but only the evaluation queries
queries = dict(zip(eval_dataset["id"], eval_dataset["query"]))
corpus = {cid: dataset[cid]["answer"] for cid in range(20_000)} | {cid: dataset[cid]["answer"] for cid in eval_dataset["id"]}
relevant_docs = {qid: {qid} for qid in eval_dataset["id"]}
dev_evaluator = InformationRetrievalEvaluator(
    corpus=corpus,
    queries=queries,
    relevant_docs=relevant_docs,
    show_progress_bar=True,
    name="natural-questions-dev",
)
dev_evaluator(model)

# 7. Create a trainer & train
trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset.remove_columns("id"),
    eval_dataset=eval_dataset.remove_columns("id"),
    loss=loss,
    evaluator=dev_evaluator,
)
trainer.train()

# (Optional) Evaluate the trained model on the evaluator after training
dev_evaluator(model)

# 8. Save the trained model
model.save_pretrained(f"models/{model_name}/final")

# 9. (Optional) Push it to the Hugging Face Hub
model.push_to_hub(f"{model_name}")

imrankh46 commented 1 week ago

@tomaarsen great 👍. I will play with it.