huggingface / pytorch-image-models

The largest collection of PyTorch image encoders / backbones. Including train, eval, inference, export scripts, and pretrained weights -- ResNet, ResNeXT, EfficientNet, NFNet, Vision Transformer (ViT), MobileNetV4, MobileNet-V3 & V2, RegNet, DPN, CSPNet, Swin Transformer, MaxViT, CoAtNet, ConvNeXt, and more
https://huggingface.co/docs/timm
Apache License 2.0

[BUG] MobileNetV3 doesn't work from timm version 0.6.x #1406

Closed JeonghwaYoo-R closed 2 years ago

JeonghwaYoo-R commented 2 years ago

Describe the bug When I updated timm from 0.5.4 to the latest version, I encountered an error. I only used forward_features from timm, then appended a custom module.

(screenshot of the error omitted) The number of classes is 8.

When I rolled back the version to 0.5.4, no error occurred.

I used `mobilenetv3_large_100_miil_in21k`.

To Reproduce Steps to reproduce the behavior:

  1. Update timm to 0.6.xx
  2. Use a MobileNet model (`mobilenetv3_large_100_miil_in21k`)


rwightman commented 2 years ago

@JeonghwaYoo-R in 0.6 I altered forward_features and added forward_head to be more consistent across all model types; forward_features now breaks before the final head layers and global pooling. MobileNetV3 has an odd head in that there is another conv layer after the global pool.

https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/mobilenetv3.py#L192-L211

you can see forward_features returns features earlier than before, as that last conv is part of the 'head'. If you want the old behaviour, you can:

reset the classifier part only, with `model.reset_classifier(num_classes=0)` or `create_model('name', num_classes=0)`; then, instead of calling forward_features, pass the output of `model(x)` to your custom module

or, instead of calling forward_features only, call:

```python
x = model.forward_features(x)
x = model.forward_head(x, pre_logits=True)
```
JeonghwaYoo-R commented 2 years ago

@rwightman

Thank you! I solved this problem by creating the model with num_classes=0.