ml-jku / MIM-Refiner
A Contrastive Learning Boost from Intermediate Pre-Trained Representations
MIT License · 34 stars · 3 forks
Issues
#9 · train batch_size · haribaskarsony · closed 3 days ago · 4 comments
#8 · Stage 2 refining does not update encoder weights · haribaskarsony · closed 1 week ago · 3 comments
#7 · model initializer code · haribaskarsony · closed 1 month ago · 1 comment
#6 · using custom model · haribaskarsony · closed 1 month ago · 16 comments
#5 · use of no_aug_train set · haribaskarsony · closed 1 month ago · 1 comment
#4 · license file · at-ima · closed 6 months ago · 1 comment
#3 · how to train stage2 with cifar100? · abebe9849 · closed 6 months ago · 1 comment
#2 · What if we use both the pretraining objective (e.g. mask image prediction loss in MAE) and the NNA loss to pre-train a model? · GT9505 · closed 6 months ago · 4 comments
#1 · How to evaluate the performance of clustering based on features extracted by a pretrained model on my own dataset? · gongjizhang · closed 6 months ago · 2 comments