Fit current timm `register_model` and `to_2tuple` imports, and fix the `pos_embed` shape when using 384. When running:
```python
import sys
sys.path.append('../pytorch-image-models/')
import torch
from models import inception_transformer

tt = inception_transformer.iformer_small_384(pretrained=True)
```
I met 3 errors:
```
ImportError: cannot import name 'register_model' from 'timm.models.registry'

ModuleNotFoundError: No module named 'timm.models.layers.helpers'

RuntimeError: Error(s) in loading state_dict for InceptionTransformer:
	size mismatch for pos_embed1: copying a param with shape torch.Size([1, 56, 56, 96]) from checkpoint, the shape in current model is torch.Size([1, 96, 96, 96]).
	size mismatch for pos_embed2: copying a param with shape torch.Size([1, 28, 28, 192]) from checkpoint, the shape in current model is torch.Size([1, 48, 48, 192]).
	size mismatch for pos_embed3: copying a param with shape torch.Size([1, 14, 14, 320]) from checkpoint, the shape in current model is torch.Size([1, 24, 24, 320]).
	size mismatch for pos_embed4: copying a param with shape torch.Size([1, 7, 7, 384]) from checkpoint, the shape in current model is torch.Size([1, 12, 12, 384]).
```
For the first 2, add a `try`/`except` block that falls back to the current timm import paths, as sketched below.
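A minimal sketch, assuming timm >= 0.9, where `register_model` is re-exported from `timm.models` and `to_2tuple` now lives in `timm.layers`:

```python
# Try the old timm import paths first, then fall back to the new (>= 0.9) locations.
try:
    from timm.models.registry import register_model
except ImportError:
    from timm.models import register_model

try:
    from timm.models.layers.helpers import to_2tuple
except ImportError:  # also covers ModuleNotFoundError
    from timm.layers import to_2tuple
```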
For the 3rd one, I think it should use 224 when calculating the first `num_patches`, for any input image size, so the parameter shapes match the 224-based checkpoint.
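A minimal sketch of the idea, assuming the position embeddings are stored channels-last as in the error above and are interpolated to the runtime grid in `forward`; `base_size` and `_get_pos_embed` are illustrative names, not necessarily what the repo uses:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Assumption: always build pos_embed1 at the 224 training resolution
# (224 / 4 = 56), so the pretrained state_dict loads cleanly.
base_size = 224 // 4
pos_embed1 = nn.Parameter(torch.zeros(1, base_size, base_size, 96))

def _get_pos_embed(pos_embed, H, W):
    # pos_embed is (1, h, w, C); resize to the runtime feature grid,
    # e.g. 96x96 for 384 inputs, only when the sizes differ.
    if (H, W) == tuple(pos_embed.shape[1:3]):
        return pos_embed
    return F.interpolate(
        pos_embed.permute(0, 3, 1, 2),   # -> (1, C, h, w)
        size=(H, W), mode='bicubic', align_corners=False,
    ).permute(0, 2, 3, 1)                # -> (1, H, W, C)
```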