facebookresearch/mae
PyTorch implementation of MAE (https://arxiv.org/abs/2111.06377)
6.93k stars · 1.17k forks
Issues (sorted by: Newest)
#150 · Implementing VIT Small in MAE · bryanwong17 · opened 1 year ago · 1 comment
#149 · pretrain error for import timm · chos1npc · closed 1 year ago · 2 comments
#148 · How about fine-tuning with MAE auxiliary task? · hellojialee · opened 1 year ago · 0 comments
#147 · [Question] Ablation of encoder with mask token · DianCh · opened 1 year ago · 0 comments
#146 · [Question] Why non-masked patches look worse in pixel reconstruction example image · austinmw · opened 1 year ago · 1 comment
#145 · Shouldn't the patch embeddings be trained only on the patches that survived masking? (Rather than the original image) · Eduard6421 · closed 1 year ago · 5 comments
#144 · does small batch size influence the training? · NTUYWANG103 · opened 1 year ago · 0 comments
#143 · Something about the training set · Daniel12345abcde · opened 1 year ago · 0 comments
#142 · AttributeError: module 'numpy' has no attribute 'float' · apple2373 · opened 1 year ago · 0 comments
#141 · How to get the visual pretrained model? · NTUYWANG103 · opened 1 year ago · 0 comments
#140 · What differences between 'mae_pretrain_vit_base.pth' and 'mae_visualize_vit_base.pth' · irsLu · closed 1 year ago · 1 comment
#139 · Why not use RandomSampler instead of SequentialSampler for validation set? · ucalyptus2 · opened 1 year ago · 6 comments
#138 · should i do nb_classes=1 or 2 for binary classification finetuning task? · ucalyptus2 · opened 1 year ago · 0 comments
#137 · Fix the link to pretrained models in FINETUNE.md · Hrant-Khachatrian · opened 1 year ago · 2 comments
#136 · Doesn't get the result of visual demo · newuserforstudy · opened 1 year ago · 1 comment
#135 · A Question in Code: engine_pretrain.py#L39 · QiGuo1234567 · opened 1 year ago · 2 comments
#134 · Command for multinode + main_pretrain.py · kalyani7195 · opened 1 year ago · 0 comments
#133 · GPU-dependence · Daniel12345abcde · opened 1 year ago · 0 comments
#132 · Would it be able to train other forms of data · Daniel12345abcde · closed 1 year ago · 1 comment
#131 · Does learnable position embedding work? · ZK-Zhou · opened 1 year ago · 0 comments
#130 · Discrepancies in hyper-parameters between paper and README · netw0rkf10w · opened 1 year ago · 1 comment
#129 · Accuracy and visualization of the Fine-tuning model · glopes00 · opened 1 year ago · 0 comments
#128 · How to fine-tune the pre-trained MAE model to do classification tasks · ShellyShellyShellyShelly · opened 1 year ago · 0 comments
#127 · Add torch hub support · materight · closed 1 year ago · 0 comments
#126 · RuntimeError: Given normalized_shape=[768], expected input with shape [*, 768], but got input of size [12] · zhengzaidenglu · opened 1 year ago · 6 comments
#125 · Variable name mismatch · ariG23498 · opened 1 year ago · 0 comments
#124 · why this randmasking works in this implementation? · FerryHuang · closed 1 year ago · 0 comments
#123 · what is the meaning of the parameter named "JOB_DIR"? · HeLongHuang · opened 1 year ago · 0 comments
#122 · Pretraining the model while meeting "Signals.SIGKILL: 9" error · wuxchcandoit · opened 1 year ago · 3 comments
#121 · missing strict=False in util.misc.load_model · eminorhan · opened 1 year ago · 0 comments
#120 · Running multi-gpu on one node · kaushikb258 · opened 1 year ago · 1 comment
#119 · sbatch: unrecognized option '--gpus-per-node=8' · xiangtaowong · closed 1 year ago · 1 comment
#118 · bad prediction results for unmasked pixels · weiminson · opened 1 year ago · 1 comment
#117 · Hi, is mae_visualize_vit_base.pth pretrained on ImageNet-1K by self-supervised method? · wjm-wjm · opened 1 year ago · 1 comment
#116 · Fine-tuning code and recipe for other downstream tasks · Wallace-222 · opened 1 year ago · 0 comments
#115 · Details of pre-training with dVAE tokens · netw0rkf10w · opened 1 year ago · 0 comments
#114 · The usage of get_grad_norm_() function · ZiboZ · opened 1 year ago · 0 comments
#113 · distributed training has the same speed as single gpu training · ShihaoShao-GH · closed 1 year ago · 6 comments
#112 · Handlying different number of channels in 'patchify' and 'unpatchify' · BrianPulfer · closed 8 months ago · 2 comments
#111 · Fake multi gpus pretraining · Aurora-slz · opened 1 year ago · 2 comments
#110 · Finetune ViT-huge with 448 size · liuxingbin · opened 1 year ago · 0 comments
#109 · 1 · wuhenbai · closed 1 year ago · 0 comments
#108 · resize to non-squared images · shuozhou · opened 1 year ago · 4 comments
#107 · Finetuning script for iNaturalist and Places classification tasks · yookoon · opened 1 year ago · 1 comment
#106 · Implementation of submitit_pretrain.py · wany1d1001 · closed 1 year ago · 2 comments
#105 · The fine-tune accuracy is a little bit lower · Wallace-222 · closed 1 year ago · 0 comments
#104 · ImageNet-C numbers · shoaibahmed · closed 1 year ago · 1 comment
#103 · MAE is available in HuggingFace Transformers (both PyTorch and TF) · NielsRogge · opened 1 year ago · 4 comments
#102 · Pre-training settings on ImageNet-22k · ZhichengHuang · opened 1 year ago · 1 comment
#101 · Request for TPU based implementation · martinmamql · opened 1 year ago · 0 comments