ryujaehun / pytorch-gpu-benchmark

Using the famous CNN models in PyTorch, we run benchmarks on various GPUs.
MIT License

TypeError: __call__() got an unexpected keyword argument 'pretrained' #21

Open · joshhu opened this issue 2 years ago

joshhu commented 2 years ago

Running env: Docker 20.10.16 in WSL2, NVIDIA PyTorch image: nvcr.io/nvidia/pytorch:22.05-py3

Here is the error output:

root@dd288f178ecc:/workspace/pytorch-gpu-benchmark# ./test.sh
start
benchmark start : 2022/06/28 08:39:29
Number of GPUs on current device : 1
CUDA Version : 11.7
Cudnn Version : 8400
Device Name : NVIDIA GeForce RTX 3080 Ti
uname_result(system='Linux', node='dd288f178ecc', release='5.10.16.3-microsoft-standard-WSL2', version='#1 SMP Fri Apr 2 22:23:49 UTC 2021', machine='x86_64', processor='x86_64')
                     scpufreq(current=2419.1979999999985, min=0.0, max=0.0)
                    cpu_count: 24
                    memory_available: 31270019072
Traceback (most recent call last):
  File "benchmark_models.py", line 183, in <module>
    train_result = train(precision)
  File "benchmark_models.py", line 81, in train
    model = getattr(model_type, model_name)(pretrained=False)
TypeError: __call__() got an unexpected keyword argument 'pretrained'
end
josemunozc commented 2 years ago

I had a similar issue. I tried this:

import torch
import torchvision.models as models

MODEL_LIST = {
    models.resnet: models.resnet.__all__[1:],
}

for model_type in MODEL_LIST.keys():
    for model_name in MODEL_LIST[model_type]:
        print(model_name)
        model = getattr(model_type, model_name)(pretrained=False)

and then

$ python3 mytest.py
ResNet18_Weights
Traceback (most recent call last):
  File "mytest.py", line 16, in <module>
    model = getattr(model_type, model_name)(pretrained=False)
TypeError: __call__() got an unexpected keyword argument 'pretrained'

I think the problem is the list of names that models.resnet.__all__[1:] returns:

['ResNet18_Weights', 'ResNet34_Weights', 'ResNet50_Weights', 'ResNet101_Weights', 'ResNet152_Weights', 'ResNeXt50_32X4D_Weights', 'ResNeXt101_32X8D_Weights', 'ResNeXt101_64X4D_Weights', 'Wide_ResNet50_2_Weights', 'Wide_ResNet101_2_Weights', 'resnet18', 'resnet34', 'resnet50', 'resnet101', 'resnet152', 'resnext50_32x4d', 'resnext101_32x8d', 'resnext101_64x4d', 'wide_resnet50_2', 'wide_resnet101_2']

which includes entries that I think are not model constructors (e.g. 'ResNet18_Weights'). I changed my test code to:

import torch
import torchvision.models as models

# keep only the lowercase callables, i.e. the model constructor functions
model_names = sorted(name for name in models.__dict__
                     if name.islower() and not name.startswith("__")
                     and callable(models.__dict__[name]))

MODEL_LIST = {
    models.resnet: [name for name in model_names if 'resnet' in name],
}

for model_type in MODEL_LIST.keys():
    for model_name in MODEL_LIST[model_type]:
        print(model_name)
        model = getattr(model_type, model_name)(pretrained=False)

and then I get:

$ python3 mytest.py
resnet101
/scratch/c.c1045890/dl.examples/pytorch-gpu-benchmark/venv/lib/python3.7/site-packages/torchvision/models/_utils.py:209: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and will be removed in 0.15, please use 'weights' instead.
  f"The parameter '{pretrained_param}' is deprecated since 0.13 and will be removed in 0.15, "
/scratch/c.c1045890/dl.examples/pytorch-gpu-benchmark/venv/lib/python3.7/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and will be removed in 0.15. The current behavior is equivalent to passing `weights=None`.
  warnings.warn(msg)
resnet152
resnet18
resnet34
resnet50
wide_resnet101_2
wide_resnet50_2

There are still a couple of deprecation warnings, but at least no crash.
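
If you also want to get rid of the deprecation warning, here is a minimal sketch of a workaround (my own, not repo code; it assumes torchvision >= 0.13, where the constructors accept a weights argument, and falls back to pretrained on older releases):

import torchvision.models as models

def build_model(model_type, model_name):
    ctor = getattr(model_type, model_name)
    try:
        # torchvision >= 0.13: weights=None means random init, no download, no warning
        return ctor(weights=None)
    except TypeError:
        # older torchvision does not know 'weights', use the old argument
        return ctor(pretrained=False)

model = build_model(models.resnet, "resnet50")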

kpotoh commented 2 years ago

Switch to torchvision 0.12.0 and the problem will disappear. torchvision 0.13 changed the interface for using pretrained models.
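
For reference, a minimal sketch of the interface change (assuming torchvision >= 0.13, where each model exposes a weights enum):

import torchvision.models as models

# old style, deprecated in 0.13 and scheduled for removal in 0.15:
#   model = models.resnet18(pretrained=True)
# new style: pass a weights enum, or None for random initialization
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # pretrained
model = models.resnet18(weights=None)                             # no weights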

Full list of dependency versions:

cufflinks          0.17.3   
matplotlib         3.5.3    
matplotlib-inline  0.1.6    
pandas             1.4.3    
plotly             5.10.0   
psutil             5.9.1    
torch              1.11.0   
torchvision        0.12.0 
PiotrDabkowski commented 1 year ago

Yeah, but we want to benchmark using the newest torch and torchvision versions. Please fix!

zackertypical commented 1 year ago

Following @josemunozc's reply, I changed the code in benchmark_models.py:

MODEL_LIST = {
    models.mnasnet: models.mnasnet.__all__[1:],
    models.resnet: models.resnet.__all__[1:],
    models.densenet: models.densenet.__all__[1:],
    models.squeezenet: models.squeezenet.__all__[1:],
    models.vgg: models.vgg.__all__[1:],
    # merge both mobilenet name lists under a single key; repeating the
    # models.mobilenet key would silently drop the first entry
    models.mobilenet: models.mobilenet.mv2_all[1:] + models.mobilenet.mv3_all[1:],
    models.shufflenetv2: models.shufflenetv2.__all__[1:],
}
# keep only the lowercase constructor names, dropping the *_Weights enums
# that torchvision >= 0.13 adds to __all__
for k, m_list in MODEL_LIST.items():
    MODEL_LIST[k] = [name for name in m_list if name.islower()]
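
On newer torchvision (>= 0.14) there is also a model registry that sidesteps __all__ entirely; a rough sketch, not part of this repo's code (the 'resnet' filter is just an example):

import torchvision.models as models

# torchvision >= 0.14: enumerate registered classification models by name
names = [n for n in models.list_models(module=models) if "resnet" in n]
for name in names:
    model = models.get_model(name, weights=None)  # random init, no download
    print(name)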