-
Thank you for releasing this wonderful code. I am confused by the implementation of the contrastive learning part. In the paper, it is mentioned whether the negatively paired patches come from an image…
-
@nuneslu Thanks for sharing the paper and code.
As we know, in recent years, self-supervised learning on 3D point clouds has attracted more and more investigation. PointContrast and DepthContrast have be…
-
I noticed that the weights of the teacher network are updated every epoch. Usually, the teacher model is updated every iteration. Why did the authors choose this strategy?
https://github.com/Vibashan/ir…
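For context, the per-iteration update the question refers to is usually a momentum (exponential moving average) update of the teacher from the student. A minimal sketch, assuming plain parameter arrays (function and variable names are illustrative, not taken from the linked repo):

```python
import numpy as np

def ema_update(teacher_params, student_params, momentum=0.999):
    """Momentum (EMA) teacher update, typically run once per training
    iteration: teacher <- m * teacher + (1 - m) * student."""
    return [momentum * t + (1.0 - momentum) * s
            for t, s in zip(teacher_params, student_params)]

# Toy usage: one EMA step with momentum 0.9
teacher = [np.zeros(3)]
student = [np.ones(3)]
teacher = ema_update(teacher, student, momentum=0.9)
# teacher[0] is now 0.9 * 0 + 0.1 * 1 = [0.1, 0.1, 0.1]
```

Updating only once per epoch makes the teacher lag much further behind the student, which is why the choice is worth asking about.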
-
1. My GPUs are two NVIDIA GeForce RTX 3080s with 10 GB of memory each. When training the Pegasus model, the dataset loads, but after that CUDA runs out of memory even when the batch size is set to 1.…
-
- CVPR2021
- [arXiv](https://arxiv.org/abs/2104.00287)
-
Hi, I just wonder what the difference is between the proposed DIB and conventional contrastive learning. It seems that they both make representations indistinguishable within the class and distinguis…
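For comparison, conventional contrastive learning is commonly instantiated as an InfoNCE-style loss that pulls an anchor toward its positive and pushes it away from negatives. A minimal NumPy sketch (names and the temperature value are illustrative assumptions, not from the paper under discussion):

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.1):
    """Minimal InfoNCE loss on unit-normalized vectors:
    cross-entropy over cosine similarities, positive at index 0."""
    norm = lambda v: v / np.linalg.norm(v)
    a, p = norm(anchor), norm(positive)
    sims = np.array([a @ p] + [a @ norm(n) for n in negatives]) / temperature
    # -log softmax of the positive's similarity
    return -sims[0] + np.log(np.exp(sims).sum())

a = np.array([1.0, 0.0])
loss_aligned = info_nce(a, np.array([1.0, 0.0]), [np.array([0.0, 1.0])])
loss_misaligned = info_nce(a, np.array([0.0, 1.0]), [np.array([0.0, 1.0])])
# The loss is smaller when the anchor and positive agree.
```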
-
PeCLR: Self-Supervised 3D Hand Pose Estimation from Monocular RGB via Contrastive Learning
-
In the original paper, self-predictive representations are used as a self-supervised source of supervision for the latent. This allows them to be used for pretraining the vision head rather than using th…
-
Hi @JohnGiorgi,
In your notebook [training.ipynb](https://github.com/JohnGiorgi/DeCLUTR/blob/master/notebooks/training.ipynb), you do not use a validation dataset. Why? This is mandatory when train…
piegu updated 2 years ago
-
Any insight on how to take the image/text embeddings (or the nominal model forward output) and compute a simple similarity score, as done in the Hugging Face implementation? [HF example here](https://hugging…
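A common recipe for turning paired image/text embeddings into a similarity score, as in CLIP-style models, is L2-normalization followed by a scaled dot product. A minimal sketch, assuming embeddings as NumPy arrays (the temperature value is an assumption, not taken from the linked HF example):

```python
import numpy as np

def similarity_logits(image_emb, text_emb, temperature=0.07):
    """Cosine similarity between every image and every text embedding,
    scaled by a temperature, returned as an (n_images, n_texts) matrix."""
    img = image_emb / np.linalg.norm(image_emb, axis=-1, keepdims=True)
    txt = text_emb / np.linalg.norm(text_emb, axis=-1, keepdims=True)
    return img @ txt.T / temperature

# Toy usage: matching pairs score higher than mismatched ones
imgs = np.array([[1.0, 0.0], [0.0, 1.0]])
txts = np.array([[1.0, 0.0], [0.0, 1.0]])
logits = similarity_logits(imgs, txts)
```

Applying a softmax over each row of `logits` then gives per-image probabilities over the candidate texts.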