Previously in ContrastiveLoss, we took the first n_contrastive samples (excluding the joint index) as the contrastive samples and relied on shuffling between batches to provide randomness. However, this still leads to correlated contrastive samples within a batch. This pull request fixes that by instead sampling the contrastive samples from the remainder of the batch. For consistency with the updated ContrastiveLoss, we also update MaximumLikelihoodLoss to accept a key.
This could be a breaking change for users passing a custom loss function to fit_to_data, which now expects the loss function to accept a key. The fix is trivial: allow a key to be passed (even if it is ignored).
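As a sketch of the fix (names and signatures here are illustrative, not fit_to_data's exact interface), a deterministic custom loss can simply accept the key and ignore it:

```python
# Hypothetical custom loss functions; the real signature expected by
# fit_to_data may carry additional arguments (e.g. params, static, condition).

def old_custom_loss(params, data):
    # Previously accepted by fit_to_data: no key argument.
    return sum((p - d) ** 2 for p, d in zip(params, data)) / len(data)

def new_custom_loss(key, params, data):
    # After this change, fit_to_data passes a PRNG key to the loss.
    # A deterministic loss can accept `key` and simply never use it.
    return sum((p - d) ** 2 for p, d in zip(params, data)) / len(data)
```

Losses that do need randomness (such as the updated ContrastiveLoss) can instead use the key for sampling within the batch.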