KevinMusgrave / pytorch-metric-learning

The easiest way to use deep metric learning in your application. Modular, flexible, and extensible. Written in PyTorch.
https://kevinmusgrave.github.io/pytorch-metric-learning/
MIT License
5.95k stars 657 forks

Example request : VICRegLoss #461

Open stereomatchingkiss opened 2 years ago

stereomatchingkiss commented 2 years ago

Any plan to include an example of this loss function? Thanks

KevinMusgrave commented 2 years ago

Do you mean a jupyter notebook showing how to use VICRegLoss?

stereomatchingkiss commented 2 years ago

> Do you mean a jupyter notebook showing how to use VICRegLoss?

Yes, is it possible? It's hard to know how to use it just by looking at the docs.

KevinMusgrave commented 2 years ago

It's possible, but might be lower priority. I'll leave this issue open anyway in case someone wants to make a pull request to add an example notebook.

I think you'll find the official paper implementation helpful: https://github.com/facebookresearch/vicreg
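For reference, the core idea from the paper can be sketched in plain PyTorch. This is an illustrative reimplementation, not PML's code: it computes the three VICReg terms (invariance, variance, covariance) on two embedding views, with the default weights from the paper. The function name `vicreg_loss` and the helper `off_diag` are made up for this sketch.

```python
import torch
import torch.nn.functional as F


def vicreg_loss(z_a, z_b, sim_w=25.0, var_w=25.0, cov_w=1.0, eps=1e-4):
    # z_a, z_b: (batch, dim) embeddings of two augmented views of the same batch.
    n, d = z_a.shape

    # Invariance term: mean-squared error between the two views.
    sim = F.mse_loss(z_a, z_b)

    # Variance term: hinge loss keeping each dimension's std above 1.
    std_a = torch.sqrt(z_a.var(dim=0) + eps)
    std_b = torch.sqrt(z_b.var(dim=0) + eps)
    var = (F.relu(1 - std_a).mean() + F.relu(1 - std_b).mean()) / 2

    # Covariance term: penalize off-diagonal entries of each view's covariance matrix.
    z_a_c = z_a - z_a.mean(dim=0)
    z_b_c = z_b - z_b.mean(dim=0)
    cov_a = (z_a_c.T @ z_a_c) / (n - 1)
    cov_b = (z_b_c.T @ z_b_c) / (n - 1)

    def off_diag(m):
        return m - torch.diag(torch.diag(m))

    cov = (off_diag(cov_a).pow(2).sum() + off_diag(cov_b).pow(2).sum()) / d

    return sim_w * sim + var_w * var + cov_w * cov
```

All three terms are non-negative, so the total loss is as well; the official repo linked above applies the same terms after a projector/expander network.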

TKassis commented 1 year ago

Thank you @KevinMusgrave for such a masterpiece of a library. I have a question relating to VICRegLoss, which I think is what is confusing @stereomatchingkiss as well. I read the paper, but I'm a bit confused about its implementation in PML. Why does the loss not follow the typical (embeddings, labels) arguments like most of the other losses? What is the reference embedding? Why can't that just be a label referring to another embedding?

KevinMusgrave commented 1 year ago

Thanks @TKassis 😄

When the implementation was discussed (#372) it seemed like this loss would only ever be used as shown in Figure 1 of the paper:

[Image: Figure 1 from the VICReg paper]

So embeddings and ref_emb are always for the same set of data, but with different data augmentations.

That said, I would prefer if VICRegLoss followed the same format as the other loss functions. I've created a separate issue for this task now (#560).