Closed JeonghwaYoo-R closed 2 years ago
@JeonghwaYoo-R In 0.6 I altered forward_features and added forward_head to be more consistent across all model types; forward_features now breaks before the final head layers and global pooling. MobileNetV3 has an odd head in that there is another conv layer after the global pool.
https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/mobilenetv3.py#L192-L211
You can see forward_features returns features earlier than before, as that last conv is part of the 'head'. If you want the old behaviour you can call
model.reset_classifier(num_classes=0)
or create_model('name', num_classes=0) to reset only the classifier part, and then pass the output of model(x) to your custom module.
Or, instead of calling forward_features alone, call
x = model.forward_features(x)
x = model.forward_head(x, pre_logits=True)
@rwightman
Thank you! I solved this problem by creating the model with num_classes=0.
Describe the bug
When I updated timm from 0.5.4 to the latest version, I encountered an error. I had only used forward_features from timm and then appended some custom modules; the number of classes is 8. When I roll back to 0.5.4, no error occurs.
I used mobilenetv3_large_100_miil_in21k.
To Reproduce
Steps to reproduce the behavior: create_model('mobilenetv3_large_100_miil_in21k'), then call forward_features and pass the output to a custom module.
Desktop (please complete the following information):
From conda list: pytorch 1.7.0 (py3.8_cuda11.0.221_cudnn8.0.3_0)