Hi,
This is a simple but great idea for improving the performance of ViT, but I ran into an error while exploring the code.
I tried to reproduce the MAE evaluation results on ImageNet-1k with vit_base_patch16 using the official MAE code, applying the ToMe patch to models_vit.py as mentioned in the given example. That gave an error because model ended up being None, so I changed this,
model = tome.patch.timm(model, prop_attn=False)
to this,
tome.patch.timm(model, prop_attn=False)
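For context, this is roughly how I build the model and apply the patch (a minimal sketch; the construction arguments follow MAE's main_finetune.py, and r = 16 is just an example value):

import tome
import models_vit  # MAE's models_vit.py

# build the model the same way MAE's main_finetune.py does
model = models_vit.__dict__["vit_base_patch16"](
    num_classes=1000,
    drop_path_rate=0.1,
    global_pool=True,
)

# tome.patch.timm seems to patch the model in place and return None,
# so I call it without re-assigning the result
tome.patch.timm(model, prop_attn=False)
model.r = 16  # example value for the number of tokens merged per layer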
I made this change because tome.patch.timm appears to patch the model in place and return None, and because I had already used this code on the given benchmark. With that change, however, I got a new error:
File "/nfs/users/ext_vignagajan.vigneswaran/ToMe/experiments/models/mae/main_finetune.py", line 360, in
main(args)
File "/nfs/users/ext_vignagajan.vigneswaran/ToMe/experiments/models/mae/main_finetune.py", line 307, in main
test_stats = evaluate(data_loader_val, model, device)
File "/nfs/users/ext_vignagajan.vigneswaran/miniconda3/envs/tome/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, kwargs)
File "/nfs/users/ext_vignagajan.vigneswaran/ToMe/experiments/models/mae/engine_finetune.py", line 118, in evaluate
output = model(images)
File "/nfs/users/ext_vignagajan.vigneswaran/miniconda3/envs/tome/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, *kwargs)
File "/nfs/users/ext_vignagajan.vigneswaran/ToMe/tome/patch/timm.py", line 114, in forward
return super().forward(args, kwdargs)
File "/nfs/users/ext_vignagajan.vigneswaran/miniconda3/envs/tome/lib/python3.10/site-packages/timm/models/vision_transformer.py", line 347, in forward
x = self.forward_features(x)
File "/nfs/users/ext_vignagajan.vigneswaran/miniconda3/envs/tome/lib/python3.10/site-packages/timm/models/vision_transformer.py", line 340, in forward_features
x = self.norm(x)
File "/nfs/users/ext_vignagajan.vigneswaran/miniconda3/envs/tome/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1207, in getattr
raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'ToMeVisionTransformer' object has no attribute 'norm'
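While exploring the code, I think I see where the attribute goes missing: if I'm reading it right, MAE's models_vit.py subclasses the timm VisionTransformer and deletes the original norm layer when global_pool=True, roughly like this:

import timm.models.vision_transformer

class VisionTransformer(timm.models.vision_transformer.VisionTransformer):
    """Vision Transformer with support for global average pooling (MAE's models_vit.py)."""
    def __init__(self, global_pool=False, **kwargs):
        super().__init__(**kwargs)
        self.global_pool = global_pool
        if self.global_pool:
            norm_layer = kwargs['norm_layer']
            embed_dim = kwargs['embed_dim']
            self.fc_norm = norm_layer(embed_dim)
            del self.norm  # the original norm layer is removed here

Meanwhile, the traceback shows the patched forward ending up in timm's forward_features, which still references self.norm, hence the AttributeError.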
Still, I wasn't able to fix it on my own. Could you help me resolve this error?