Closed Lorenzobattistela closed 1 year ago
Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).
View this failed invocation of the CLA check for more information.
For the most up to date status, view the checks section at the bottom of the pull request.
Thanks for the PR! Looking forward to adding this to the tfsim losses. I left some comments on the loss function.
@owenvallis implemented the changes requested, although I'm a little lost about the following snippet:

# Get negative distances
negative_dists, _ = negative_distances(
    negative_mining_strategy, pairwise_distances, negative_mask
)

since neither of the results is used anywhere else.
Good point! Looks like we're missing the column-wise concat in the logsumexp. You had it in the previous commit, so pulling that back in, I think it should be:
# Reorder pairwise distances and negative mask based on positive indices
reordered_pairwise_distances = tf.gather(pairwise_distances, positive_indices, axis=1)
reordered_negative_mask = tf.gather(negative_mask, positive_indices, axis=1)
# Concatenate pairwise distances and negative masks along axis=1
concatenated_distances = tf.concat([pairwise_distances, reordered_pairwise_distances], axis=1)
concatenated_negative_mask = tf.concat([negative_mask, reordered_negative_mask], axis=1)
# Compute (margin - neg_dist) logsum_exp values for each row (equation 4 in the paper)
neg_logsumexp = tfsim_losses.utils.logsumexp(margin - concatenated_distances, concatenated_negative_mask)
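For context on what the masked logsumexp above is doing, here's a rough NumPy sketch of the operation (this is my own sketch, not the `tfsim_losses.utils.logsumexp` implementation, and I'm assuming a signature that takes a value matrix and a boolean mask):

```python
import numpy as np

def masked_logsumexp(x, mask, axis=1):
    """Row-wise log(sum(exp(x))) over entries where mask is True.

    A NumPy sketch of a masked logsumexp; not the tfsim implementation.
    """
    # Exclude unmasked entries by sending them to -inf (exp(-inf) == 0).
    masked = np.where(mask, x, -np.inf)
    # Subtract the row max for numerical stability.
    m = np.max(masked, axis=axis, keepdims=True)
    m = np.where(np.isfinite(m), m, 0.0)  # guard rows with no valid entries
    return np.squeeze(m, axis=axis) + np.log(np.sum(np.exp(masked - m), axis=axis))

# Example: two equal entries give log(2); masking one out gives log(1) == 0.
x = np.array([[0.0, 0.0]])
print(masked_logsumexp(x, np.array([[True, True]])))   # ~[0.6931]
print(masked_logsumexp(x, np.array([[True, False]])))  # [0.]
```

So passing `concatenated_negative_mask` means only the negative-pair entries of `margin - concatenated_distances` contribute to the sum.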
@owenvallis implemented the changes discussed and also fixed the flake8 issues. I'll try writing unit tests for the loss too. Already signed up for the CLA/Google action as well.
Thanks! Looks like the CLA is working for the current commits, but you will need to update the author on the older commit.
Let me know once the tests are ready and I'll run the checks and merge the PR.
Also I keep getting this error when writing tests:
FAILED tests/losses/test_lifted_structure_loss.py::TestLiftedStructLoss::test_lifted_struct_loss_test_mode_eager - TypeError: lifted_struct_loss() got multiple values for argument 'distance'
FAILED tests/losses/test_lifted_structure_loss.py::TestLiftedStructLoss::test_lifted_struct_loss_test_mode_graph - TypeError: in user code:
for a test like this (just an example, not the correct one):
import tensorflow as tf
from absl.testing import parameterized
from tensorflow.python.framework import combinations
from tensorflow.keras.losses import Reduction

from tensorflow_similarity import losses

from . import utils


@combinations.generate(combinations.combine(mode=["graph", "eager"]))
class TestLiftedStructLoss(tf.test.TestCase, parameterized.TestCase):
    def test_config(self):
        lifted_obj = losses.LiftedStructLoss(
            reduction=Reduction.SUM,
            name="lifted_loss",
            distance="cosine",
        )
        self.assertEqual(lifted_obj.distance.name, "cosine")
        self.assertEqual(lifted_obj.name, "lifted_loss")
        self.assertEqual(lifted_obj.reduction, Reduction.SUM)

    def test_lifted_struct_loss(self):
        """Tests the LiftedStructLoss with different parameters."""
        labels = tf.constant([1, 2, 2, 1], dtype=tf.int32)
        embeddings = tf.random.normal([4, 10])
        expected_loss = 1
        lifted_obj = losses.LiftedStructLoss(reduction=Reduction.SUM)
        loss = lifted_obj(labels, embeddings)
        self.assertAlmostEqual(loss, expected_loss)
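Since `tf.random.normal` makes the expected loss nondeterministic, one way to check whether the loss values are reasonable is to compare against a tiny NumPy reference implementation of the loss from the paper. This is a sketch under my own naming (`lifted_struct_loss_ref`, `pairwise_euclidean` are hypothetical helpers), assuming Euclidean distance and the squared-hinge formulation from Song et al.:

```python
import numpy as np

def pairwise_euclidean(emb):
    # Distance matrix D[i, j] = ||emb[i] - emb[j]||.
    diff = emb[:, None, :] - emb[None, :, :]
    return np.sqrt(np.sum(diff ** 2, axis=-1))

def lifted_struct_loss_ref(labels, emb, margin=1.0):
    # Direct transcription of the lifted structured loss: for each positive
    # pair (i, j), J_ij = log(sum over negatives k of i of exp(margin - D[i, k])
    # + sum over negatives l of j of exp(margin - D[j, l])) + D[i, j], and the
    # loss averages max(0, J_ij)^2 over positive pairs, divided by 2.
    D = pairwise_euclidean(emb)
    n = len(labels)
    total, num_pos = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            if labels[i] != labels[j]:
                continue
            num_pos += 1
            neg_i = sum(np.exp(margin - D[i, k]) for k in range(n) if labels[k] != labels[i])
            neg_j = sum(np.exp(margin - D[j, l]) for l in range(n) if labels[l] != labels[j])
            J = np.log(neg_i + neg_j) + D[i, j]
            total += max(0.0, J) ** 2
    return total / (2 * num_pos) if num_pos else 0.0

# Well-separated classes should give ~0; collapsed embeddings should not.
labels = [1, 1, 2]
print(lifted_struct_loss_ref(labels, np.array([[0.0], [0.0], [10.0]])))  # 0.0
print(lifted_struct_loss_ref(labels, np.zeros((3, 1))))                  # ~1.43
```

Feeding the same fixed embeddings to both this and `LiftedStructLoss` (with matching distance and reduction settings) would give a deterministic `expected_loss` for the test instead of a guessed constant.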
Interesting, I'll have to look into that more. Maybe the kwargs also contain `distance` and it's getting passed twice in the init call or something? I'll try to take a look at it tomorrow if I get a chance.
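A minimal repro of that hypothesis (the signature here is hypothetical, just to show how this exact TypeError arises when `distance` arrives both positionally and via kwargs):

```python
def lifted_struct_loss(labels, embeddings, distance="cosine", margin=1.0):
    # Stand-in for the real loss function; the body doesn't matter here.
    return 0.0

# A wrapper that forwards distance positionally AND leaves it in kwargs
# triggers exactly the reported error.
kwargs = {"distance": "cosine", "margin": 1.0}
try:
    lifted_struct_loss([1, 2], None, "cosine", **kwargs)
except TypeError as e:
    print(e)  # ... got multiple values for argument 'distance'
```

If that's the cause, the fix would be popping `distance` out of kwargs before forwarding, or passing everything by keyword.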
Haven't thought about that, will try digging into it too.
Made some progress today. We have some problems in the implementation, and the distance handling was missing arguments that were coming from `call`. Working on it; expect to push some things tomorrow.
Oops, closed by accident, sorry! Didn't mean to.
@owenvallis just implemented some sample tests. They're running fine now and the errors are fixed, although the loss values look really weird, so I'm not sure they're correct. I also haven't finished implementing all the test cases yet.
new pr #342
This PR refers to Issue #102 and implements the lifted structured loss (LSL) described in the paper https://arxiv.org/pdf/1511.06452.pdf.