-
Thanks for the great paper and code! I have a question about the MoCo v3 encoder: the paper mentions that the latent representations are regularized on a hypersphere. I am fairly new to MoCo v3; can yo…
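For anyone else new to this: "regularized on a hypersphere" typically means the projected features are L2-normalized so every embedding has unit norm, and the contrastive loss then operates on cosine similarities. A minimal NumPy sketch of that idea (function names and the temperature value are illustrative, not taken from the MoCo v3 code):

```python
import numpy as np

def l2_normalize(x, eps=1e-12):
    """Project each row of x onto the unit hypersphere."""
    return x / (np.linalg.norm(x, axis=1, keepdims=True) + eps)

def info_nce(q, k, temperature=0.2):
    """InfoNCE loss on normalized features; positives sit on the diagonal."""
    q, k = l2_normalize(q), l2_normalize(k)
    logits = q @ k.T / temperature                 # cosine similarity / tau
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))             # -log p(positive)

rng = np.random.default_rng(0)
q = rng.normal(size=(8, 128))   # queries from one augmented view
k = rng.normal(size=(8, 128))   # keys from the other view
loss = info_nce(q, k)
```

Because the features are normalized, the dot products are bounded in [-1, 1], which is the "regularization on the hypersphere" the paper refers to.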
-
Impressive work, thank you for sharing! I have now reproduced most of the experiments, and I plan to fine-tune on a small-scale dataset. Could you provide the moco-v3-large code? I would be …
xbyym updated
7 months ago
-
May I ask about lines 133-134 of ./moco-v3/moco/builder.py:
loss = self.contrastive_loss(q1[:N], k2[:N]) + self.contrastive_loss(q1[:N], k2[N:]) + self.contrastive_loss(q1[N:], k2[:N]) + self.c…
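One hypothetical reading of the slicing in those lines, assuming `q1` and `k2` each stack two groups of `N` rows (e.g. two views or two sample groups concatenated along the batch dimension): the calls pair each half of `q1` with each half of `k2`. A shape-only sketch of that split (names and sizes are illustrative):

```python
import numpy as np

# Assume a batch of 2N embeddings: rows [0, N) from one group and
# rows [N, 2N) from another, concatenated along axis 0.
N, D = 4, 8
q1 = np.random.randn(2 * N, D)
k2 = np.random.randn(2 * N, D)

# The slices in the quoted code cross-pair the two halves.
pairs = [
    (q1[:N], k2[:N]),   # first half of q1 vs. first half of k2
    (q1[:N], k2[N:]),   # first half of q1 vs. second half of k2
    (q1[N:], k2[:N]),   # second half of q1 vs. first half of k2
]

# Every pairing is a well-formed (N, D) vs. (N, D) contrastive batch.
ok = all(a.shape == (N, D) and b.shape == (N, D) for a, b in pairs)
```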
-
### Branch
main branch (mmpretrain version)
### Describe the bug
I'm trying to reproduce the results of MoCo v3 based on the configuration file `resnet50_8xb128-linear-coslr-90e_in1k.py` from the repository: https://github…
-
Hey,
I wanted to share my observations on the correct usage of MoCo v3 as a backbone, as documented in the official MoCo repository, as well as a potential inconsistency in this repository.
…
-
When I attempt to pre-train MoCo v3's vit_small model, I run into the following error:
`raise AttributeError("'{}' object has no attribute '{}'".format( AttributeError: 'VisionTransformerMoCo' object…
-
### Links
- Paper : https://arxiv.org/abs/2206.07692
- Github : https://github.com/OliverRensu/SDMP
### One-line summary
- A data mixing augmentat… similar to Mixup, Cutmix, and ResizeMix, in the field of self-supervised learning
-
Sik-Ho Tang. [Review — MoCo v3: An Empirical Study of Training Self-Supervised Vision Transformers](https://sh-tsang.medium.com/review-moco-v3-an-empirical-study-of-training-self-supervised-vision-tra…
-
Populate the name of the snippet based on the selected fields. Example names of real-life snippets:
- Moco_global_2018_Profile Age 3-5 Weeks_campaign_IRL Generic v2_Stream_EN_Rel
- Moco_global_2017_c…
-
I noticed that a new projector-head MLP is added after loading the pre-trained MoCo v3 model. However, the parameters of this newly added component are also set to `requires_grad=False`.
My question …
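For context, in a typical linear-probe setup only the pre-trained weights are frozen, while any newly added head stays trainable; if the freeze loop runs over all of `model.parameters()` after the new head is attached, the head gets frozen too, which matches the behaviour described above. A minimal PyTorch sketch (module shapes are arbitrary, not from the repository) of the per-module freezing that avoids this:

```python
import torch.nn as nn

# Stand-in for the pre-trained encoder.
backbone = nn.Sequential(nn.Linear(32, 32), nn.ReLU())

# Freeze ONLY the pre-trained parameters, before adding anything new.
for p in backbone.parameters():
    p.requires_grad = False

# New projector head added afterwards: its flags default to trainable.
projector = nn.Linear(32, 16)

frozen_backbone = all(not p.requires_grad for p in backbone.parameters())
trainable_head = all(p.requires_grad for p in projector.parameters())
```

If instead every parameter of the combined model is frozen in one pass, the new projector would also end up with `requires_grad=False` and never train.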