maciejkula / spotlight

Deep recommender models using PyTorch.
MIT License

why don't we need to take logarithm in pointwise_loss? #184

Open liyunrui opened 3 years ago

liyunrui commented 3 years ago

My question is: is there a reason we can simplify the cross-entropy loss into the form below, instead of using the standard log-based cross-entropy described in [1]?

import torch


def pointwise_loss(positive_predictions, negative_predictions, mask=None):
    """
    Logistic loss function.
    Parameters
    ----------
    positive_predictions: tensor
        Tensor containing predictions for known positive items.
    negative_predictions: tensor
        Tensor containing predictions for sampled negative items.
    mask: tensor, optional
        A binary tensor used to zero the loss from some entries
        of the loss tensor.
    Returns
    -------
    loss, float
        The mean value of the loss function.
    """

    # Probability-style penalties without the logarithm: a positive item
    # should score a sigmoid near 1, a sampled negative near 0.
    positives_loss = 1.0 - torch.sigmoid(positive_predictions)
    negatives_loss = torch.sigmoid(negative_predictions)

    loss = positives_loss + negatives_loss

    if mask is not None:
        mask = mask.float()
        loss = loss * mask
        return loss.sum() / mask.sum()

    return loss.mean()

[1] https://ml-cheatsheet.readthedocs.io/en/latest/logistic_regression.html
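For comparison, here is a minimal sketch of the two formulations side by side. The function names `pointwise_loss_no_log` and `pointwise_loss_log` are mine, not Spotlight's. The log-based cross-entropy from [1] can be written with `softplus`, since `-log(sigmoid(x)) == softplus(-x)` (a numerically stable identity). Both variants push positive scores up and negative scores down, but the log version keeps large gradients when a prediction is confidently wrong, while the sigmoid-only version saturates.

```python
import torch
import torch.nn.functional as F


def pointwise_loss_no_log(positive_predictions, negative_predictions):
    # Sigmoid-only variant, as in the question: penalize
    # (1 - p) for positives and p for negatives directly.
    return ((1.0 - torch.sigmoid(positive_predictions))
            + torch.sigmoid(negative_predictions)).mean()


def pointwise_loss_log(positive_predictions, negative_predictions):
    # Standard binary cross-entropy (log-loss) from [1]:
    # -log(sigmoid(pos)) - log(1 - sigmoid(neg)),
    # rewritten via softplus for numerical stability.
    return (F.softplus(-positive_predictions)
            + F.softplus(negative_predictions)).mean()


pos = torch.tensor([2.0, 1.0])    # scores for known positives
neg = torch.tensor([-1.5, 0.5])   # scores for sampled negatives
print(pointwise_loss_no_log(pos, neg))
print(pointwise_loss_log(pos, neg))
```

Both losses share the same minimizer (positive scores to +inf, negative scores to -inf), so the question is about optimization dynamics rather than the optimum itself.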