apache / mxnet

Lightweight, Portable, Flexible Distributed/Mobile Deep Learning with Dynamic, Mutation-aware Dataflow Dep Scheduler; for Python, R, Julia, Scala, Go, Javascript and more
https://mxnet.apache.org
Apache License 2.0

Unreasonable performance of resnext models provided in model_zoo, evaluated by score.py #11149

Closed: HuichuanLiu closed this issue 6 years ago

HuichuanLiu commented 6 years ago

Note: Providing complete information in the most concise form is the best way to get help. This issue template serves as the checklist for essential information to most of the technical issues and bug reports. For non-technical issues and feature requests, feel free to present the information in what you believe is the best form.

For Q & A and discussion, please start a discussion thread at https://discuss.mxnet.io

Description

I used incubator-mxnet/example/image-classification/score.py to evaluate resnext-50, resnext-101, and resnext-101-64x4d, but none of them reached a reasonable result.

python score.py --model imagenet1k-resnext-50 --gpus 2 --data-val /data3/liuhuichuan/Data/imagenet/imagenet1k-val.rec

INFO:root:Finished with 439.479926 images per second
INFO:root:('accuracy', 0.00011988491048593351)
INFO:root:('top_k_accuracy_5', 0.0020580242966751917)
python score.py --model imagenet1k-resnext-101 --gpus 2 --data-val /data3/liuhuichuan/Data/imagenet/imagenet1k-val.rec

INFO:root:Finished with 270.428020 images per second
INFO:root:('accuracy', 7.9923273657289009e-05)
INFO:root:('top_k_accuracy_5', 0.0022778132992327367)
python score.py --model imagenet1k-resnext-101-64x4d --gpus 2 --data-val /data3/liuhuichuan/Data/imagenet/imagenet1k-val.rec

INFO:root:Finished with 160.059377 images per second
INFO:root:('accuracy', 0.48345588235294118)
INFO:root:('top_k_accuracy_5', 0.6939338235294118)

However, the resnet-101 model works perfectly well:

python score.py --model imagenet1k-resnet-101 --gpus 2 --data-val /data3/liuhuichuan/Data/imagenet/imagenet1k-val.rec

INFO:root:Finished with 383.548030 images per second
INFO:root:('accuracy', 0.76856218030690537)
INFO:root:('top_k_accuracy_5', 0.93300431585677746)

It seems the ResNeXt models are not appropriately trained, or something (the preprocessing?) does not fit the model in score.py.
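(For reference, a stand-alone check along these lines should isolate score.py from the checkpoints themselves. This is only a rough sketch: it assumes the downloaded resnext-101-64x4d-symbol.json/-0000.params sit in the working directory, and img_batch is a placeholder for one preprocessed image.)

import mxnet as mx

# Load the downloaded checkpoint (prefix/epoch match the file names resnext-101-64x4d-*)
sym, arg_params, aux_params = mx.model.load_checkpoint('resnext-101-64x4d', 0)

# Bind a module for inference only; no label is needed
mod = mx.mod.Module(symbol=sym, context=mx.gpu(2), label_names=None)
mod.bind(for_training=False, data_shapes=[('data', (1, 3, 224, 224))])
mod.set_params(arg_params, aux_params, allow_missing=False)

# img_batch: a single preprocessed image as a (1, 3, 224, 224) NDArray
mod.forward(mx.io.DataBatch(data=[img_batch]), is_train=False)
prob = mod.get_outputs()[0].asnumpy()
print(prob.argmax())   # top-1 class index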

Environment info (Required)

----------Python Info----------
Version      : 3.6.4
Compiler     : GCC 7.2.0
Build        : ('default', 'Jan 16 2018 18:10:19')
Arch         : ('64bit', '')
------------Pip Info-----------
Version      : 9.0.1
Directory    : /home/liuhuichuan/anaconda3/envs/i3d_mxn/lib/python3.6/site-packages/pip
----------MXNet Info-----------
Version      : 1.1.0
Directory    : /home/liuhuichuan/anaconda3/envs/i3d_mxn/lib/python3.6/site-packages/mxnet
Commit Hash   : 07a83a0325a3d782513a04f47d711710972cb144
----------System Info----------
Platform     : Linux-4.4.0-62-generic-x86_64-with-debian-stretch-sid
system       : Linux
node         : jf-gpu003
release      : 4.4.0-62-generic
version      : #83-Ubuntu SMP Wed Jan 18 14:10:15 UTC 2017
----------Hardware Info----------
machine      : x86_64
processor    : x86_64
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                32
On-line CPU(s) list:   0-31
Thread(s) per core:    2
Core(s) per socket:    8
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 79
Model name:            Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz
Stepping:              1
CPU MHz:               1236.867
CPU max MHz:           3000.0000
CPU min MHz:           1200.0000
BogoMIPS:              4201.09
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              20480K
NUMA node0 CPU(s):     0-7,16-23
NUMA node1 CPU(s):     8-15,24-31
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb intel_pt tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdseed adx smap xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts
----------Network Test----------
Setting timeout: 10
Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0259 sec, LOAD: 1.6091 sec.
Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 0.0223 sec, LOAD: 1.3577 sec.
Timing for Gluon Tutorial(cn): https://zh.gluon.ai, DNS: 0.0249 sec, LOAD: 2.8521 sec.
Timing for FashionMNIST: https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz, DNS: 0.3443 sec, LOAD: 1.3495 sec.
Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0243 sec, LOAD: 5.4109 sec.
Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.0222 sec, LOAD: 0.9340 sec.

Package used (Python/R/Scala/Julia): (I'm using ...)

For Scala user, please provide:

  1. Java version: (java -version)
  2. Maven version: (mvn -version)
  3. Scala runtime if applicable: (scala -version)

For R user, please provide R sessionInfo():

Build info (Required if built from source)

Installed from pip for Python 3.6.

Compiler (gcc/clang/mingw/visual studio):

MXNet commit hash: a48480b706763203a294cb76eb8916517ff214c1

Build config: (Paste the content of config.mk, or the build command.)

Error Message:

(Paste the complete error message, including stack trace.)

Minimum reproducible example

(If you are using your own code, please provide a short script that reproduces the error. Otherwise, please provide link to the existing example.)

Steps to reproduce

(Paste the commands you ran that produced the error.)

  1. Pack the val files from the local directory instead of re-downloading them. This shouldn't be a problem, because resnet uses the same .rec file and works well (a quick sanity check of the resulting .rec is sketched after step 2).
    
    # if [ ! -e ILSVRC2012_img_val.tar ]; then
    #     wget $1
    # fi
    # mkdir -p val
    # tar -xf ILSVRC2012_img_val.tar -C val
    wget http://data.mxnet.io/models/imagenet/resnet/val.lst -O imagenet1k-val.lst

CUR_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
MX_DIR=${CUR_DIR}/../../../

python ${CUR_DIR}/../../../tools/im2rec.py --resize 256 --quality 90 --num-thread 16 imagenet1k-val my_path_to_store_imagenet_val_data/

rm -rf val

  2. Run score.py in incubator-mxnet/example/image-classification:

python score.py --model imagenet1k-resnext-101-64x4d --gpus 2 --data-val /data3/liuhuichuan/Data/imagenet/imagenet1k-val.rec
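(Referring back to step 1: a quick way to confirm the repacked .rec is fine is to read a batch back with an ImageRecordIter. The batch size and data shape below are illustrative, not the exact settings score.py uses.)

import mxnet as mx

# Read one batch back from the validation .rec to make sure it decodes
val = mx.io.ImageRecordIter(
    path_imgrec='/data3/liuhuichuan/Data/imagenet/imagenet1k-val.rec',
    data_shape=(3, 224, 224),   # illustrative; match the shape used for scoring
    batch_size=32,
    rand_crop=False,
    rand_mirror=False)
batch = next(iter(val))
print(batch.data[0].shape, batch.label[0].shape)   # expect (32, 3, 224, 224) and (32,)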


What have you tried to solve it?

  1. Switched the git branch from master to v1.2.0, but nothing changed.

lanking520 commented 6 years ago

Hi @HuichuanLiu, I think your problem is more like a question. Please submit it on the MXNet discussion forum (https://discuss.mxnet.io), where it will get a wider audience and allow others to learn as well. @nswamy can you add the 'Question' tag to this issue?

HuichuanLiu commented 6 years ago

@lanking520 Well, it is not a HOW-TO question, but more like a potential problem in the checkpoint or the scoring process. There is very little extra code involved; I basically just cloned the project and ran the given scripts.

lanking520 commented 6 years ago

@HuichuanLiu Sorry about that. I do see you are using Anaconda with MXNet 1.1. Have you tried building from source? Which Ubuntu version is your machine running? I will try to reproduce your problem on a Linux instance.

HuichuanLiu commented 6 years ago

@lanking520 Right, I've tried it with mxnet-cu80 1.2.0 from pip:

[12:41:56] src/io/iter_image_recordio_2.cc:170: ImageRecordIOParser2: /data3/liuhuichuan/Data/imagenet/imagenet1k-val.rec, use 4 threads for decoding..
[12:42:01] src/nnvm/legacy_json_util.cc:209: Loading symbol saved by previous version v0.9.4. Attempting to upgrade...
[12:42:01] src/nnvm/legacy_json_util.cc:217: Symbol successfully upgraded!
[12:42:06] src/operator/nn/./cudnn/./cudnn_algoreg-inl.h:107: Running performance tests to find the best convolution algorithm, this can take a while... (setting env variable MXNET_CUDNN_AUTOTUNE_DEFAULT to 0 to disable)
INFO:root:Finished with 160.141619 images per second
INFO:root:('accuracy', 0.48345588235294118)
INFO:root:('top_k_accuracy_5', 0.6939338235294118)
  1. Will build from source later, but it takes some time

  2. Ubuntu 16.04.2 LTS (GNU/Linux 4.4.0-62-generic x86_64)

Thanks for your help : )

lanking520 commented 6 years ago

Thanks for your test. I feel something tricky is going on here. Do you think the provided models have some issues? Just like you said, all the configuration looks good, but some models work super well and some behave super weird. It sounds like using a dog-cat classifier to classify rockets. I will reach out to somebody who is working on MXNet Model Server so we can test it over there, sounds good?

HuichuanLiu commented 6 years ago

Agreed, and it might involve not only the symbol definition, the pretrained params, and the training process, but probably also a mismatch between the model and the scoring procedure, like the preprocessing?

FYI, I attempted to implement a Gluon version of resnext-101-64x4d based on the provided symbol file and restored the param values from the provided checkpoint; that attempt also ended at around acc=0.47, with preprocessing like:

import mxnet as mx
# import cv2  # only needed if the BGR -> RGB conversion below is re-enabled

def mx_preprocess(img, ctx):
    """Resize-short to 256, center-crop to 224x224, HWC -> NCHW; note: no mean/std normalization."""
    # img_cvt = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    img_nd = mx.nd.array(img)
    img_resize = mx.image.resize_short(src=img_nd, size=256)
    img_crop, _ = mx.image.center_crop(src=img_resize, size=(224, 224))
    img_trans = mx.nd.transpose(img_crop, [2, 0, 1])        # HWC -> CHW
    img_reshape = img_trans.reshape((1, 3, 224, 224))       # add the batch dimension
    return mx.nd.array(img_reshape, ctx=ctx)                # copy to the target device
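(Roughly how the function above was used; net stands for my restored ResNeXt block and img for a decoded validation image, so treat both names as placeholders.)

import mxnet as mx

ctx = mx.gpu(2)
batch = mx_preprocess(img, ctx)              # img: HxWx3 image array (placeholder)
out = net(batch)                             # net: the restored Gluon ResNeXt (placeholder)
top1 = int(out.argmax(axis=1).asscalar())    # predicted ImageNet class index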

Since ResNeXt is quite new in the model_zoo, a re-evaluation could be very helpful.

lanking520 commented 6 years ago

Hi @HuichuanLiu, can you provide the link where you downloaded the model? There are too many model zoos in MXNet now.

HuichuanLiu commented 6 years ago

  1. I used incubator-mxnet/example/image-classification/score.py, which calls incubator-mxnet/example/image-classification/common/modelzoo.download_model(), with the URLs explicitly given in the code:
    
    _base_model_url = 'http://data.mxnet.io/models/'
    _default_model_info = {
    'imagenet1k-inception-bn': {'symbol':_base_model_url+'imagenet/inception-bn/Inception-BN-symbol.json',
                             'params':_base_model_url+'imagenet/inception-bn/Inception-BN-0126.params'},
    'imagenet1k-resnet-18': {'symbol':_base_model_url+'imagenet/resnet/18-layers/resnet-18-symbol.json',
                             'params':_base_model_url+'imagenet/resnet/18-layers/resnet-18-0000.params'},
    'imagenet1k-resnet-34': {'symbol':_base_model_url+'imagenet/resnet/34-layers/resnet-34-symbol.json',
                             'params':_base_model_url+'imagenet/resnet/34-layers/resnet-34-0000.params'},
    'imagenet1k-resnet-50': {'symbol':_base_model_url+'imagenet/resnet/50-layers/resnet-50-symbol.json',
                             'params':_base_model_url+'imagenet/resnet/50-layers/resnet-50-0000.params'},
    'imagenet1k-resnet-101': {'symbol':_base_model_url+'imagenet/resnet/101-layers/resnet-101-symbol.json',
                             'params':_base_model_url+'imagenet/resnet/101-layers/resnet-101-0000.params'},
    'imagenet1k-resnet-152': {'symbol':_base_model_url+'imagenet/resnet/152-layers/resnet-152-symbol.json',
                             'params':_base_model_url+'imagenet/resnet/152-layers/resnet-152-0000.params'},
    'imagenet1k-resnext-50': {'symbol':_base_model_url+'imagenet/resnext/50-layers/resnext-50-symbol.json',
                             'params':_base_model_url+'imagenet/resnext/50-layers/resnext-50-0000.params'},
    'imagenet1k-resnext-101': {'symbol':_base_model_url+'imagenet/resnext/101-layers/resnext-101-symbol.json',
                             'params':_base_model_url+'imagenet/resnext/101-layers/resnext-101-0000.params'},
    'imagenet1k-resnext-101-64x4d': {'symbol':_base_model_url+'imagenet/resnext/101-layers/resnext-101-64x4d-symbol.json',
                                     'params':_base_model_url+'imagenet/resnext/101-layers/resnext-101-64x4d-0000.params'},
    'imagenet11k-resnet-152': {'symbol':_base_model_url+'imagenet-11k/resnet-152/resnet-152-symbol.json',
                             'params':_base_model_url+'imagenet-11k/resnet-152/resnet-152-0000.params'},
    'imagenet11k-place365ch-resnet-152': {'symbol':_base_model_url+'imagenet-11k-place365-ch/resnet-152-symbol.json',
                                          'params':_base_model_url+'imagenet-11k-place365-ch/resnet-152-0000.params'},
    'imagenet11k-place365ch-resnet-50': {'symbol':_base_model_url+'imagenet-11k-place365-ch/resnet-50-symbol.json',
                                         'params':_base_model_url+'imagenet-11k-place365-ch/resnet-50-0000.params'},

In short, it's http://data.mxnet.io/models/imagenet/resnext/101-layers/resnext-101-64x4d-symbol.json and http://data.mxnet.io/models/imagenet/resnext/101-layers/resnext-101-64x4d-0000.params.

  2. I also tried to build a Gluon version from the params http://data.dmlc.ml/models/imagenet/resnext/101-layers/resnext-101-64x4d-0000.params and the symbol.json http://data.dmlc.ml/models/imagenet/resnext/101-layers/resnext-101-64x4d-symbol.json linked from the homepage model_zoo, which also ended at approximately ~47%, for both the converted Gluon model and the original module.

lanking520 commented 6 years ago

Hi @HuichuanLiu, thanks for your input and testing. I am not an expert on these models, but I will definitely find one for you. @szha, can you take over from here, since this lives in the MXNet model zoo?

HuichuanLiu commented 6 years ago

Thanks @lanking520. Here are some updates:

  1. My experiments show that resnet-152 restored from the Gluon model_zoo and from the module symbol files require different preprocessing. I didn't find any clear description of this in the MXNet docs; it would be nice if you could add one, since it's quite confusing for newcomers like me.

  2. I got a higher accuracy from the Gluon model than these statistics report. Is this another inconsistency between the module and the Gluon model? Or is it perhaps down to the ResNet version?

Details: I replaced resnext-101 with resnet-152 in score.py and got acc=~0.765, exactly what the doc shows.

Then I repeated the same procedure, i.e. the same data and the same mx.io.RecordIter settings, but loaded the resnet-152 model with the Gluon API instead of the default module symbol files:

from mxnet.gluon.model_zoo.vision.resnet import get_resnet

# ctx is the list of device contexts defined earlier in the script
net = get_resnet(version=2, num_layers=152, pretrained=True, root='./', ctx=ctx[1])

This led to broken predictions: it gave class 916 after argmax for every sample, because of the unnormalized input.

Next, I added the standard preprocessing described for the Gluon models:

All pre-trained models expect input images normalized in the same way, i.e. mini-batches of 3-channel RGB images of shape (N x 3 x H x W), where N is the batch size, and H and W are expected to be at least 224. The images have to be loaded in to a range of [0, 1] and then normalized using mean = [0.485, 0.456, 0.406] and std = [0.229, 0.224, 0.225]. The transformation should preferably happen at preprocessing.
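(In code, the normalization described above looks roughly like this; the function name is mine, not something from the docs.)

import mxnet as mx

def gluon_preprocess(img, ctx):
    """Resize-short to 256, center-crop to 224x224, scale to [0, 1], apply the ImageNet mean/std."""
    mean = mx.nd.array([0.485, 0.456, 0.406]).reshape((1, 1, 3))
    std = mx.nd.array([0.229, 0.224, 0.225]).reshape((1, 1, 3))
    img_nd = mx.nd.array(img)
    img_resize = mx.image.resize_short(src=img_nd, size=256)
    img_crop, _ = mx.image.center_crop(src=img_resize, size=(224, 224))
    img_norm = (img_crop.astype('float32') / 255.0 - mean) / std   # [0, 1] range, then normalize
    img_chw = mx.nd.transpose(img_norm, [2, 0, 1])                 # HWC -> CHW
    return img_chw.expand_dims(axis=0).as_in_context(ctx)          # add batch dim, move to device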

With this normalization the model reaches acc=0.773, about 0.012 higher than the doc claims.

lanking520 commented 6 years ago

@HuichuanLiu Many thanks for your hard work. I have checked with the MXNet Gluon team. The model zoo you are using has not been maintained for a long time (more than a year). Please try to use the Gluon models that are supported; unfortunately, I didn't see ResNeXt there. Will keep tracking this.

HuichuanLiu commented 6 years ago

@lanking520 OK, I've switched to the resnets and they work perfectly. Thanks for your help. Feel free to close this issue, or keep it open if your team has further plans for it.

HuichuanLiu commented 6 years ago

OK, my colleague solved this problem. Just apply the Inception preprocessing, i.e. img/255 - rgb_mean with --rgb-mean=[123.68,116.779,103.939]. Then the imagenet1k-resnext-101-64x4d model delivers acc~=0.79, as good as your docs describe.
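(Presumably this just means rerunning the earlier command with the mean added, along the lines of:)

python score.py --model imagenet1k-resnext-101-64x4d --gpus 2 --rgb-mean 123.68,116.779,103.939 --data-val /data3/liuhuichuan/Data/imagenet/imagenet1k-val.rec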

I would strongly recommend modifying or deleting the description on the mxnet image-classification example page; it says:

our Resnet does not need to specify the RGB mean due the data batch normalization layer. While the inception models needs --rgb-mean 123.68,116.779,103.939

Although ResNeXt is a hybrid variant of ResNet and Inception, it has a data batch normalization as its first layer, so in my understanding it shouldn't need the rgb-mean subtraction.

However, according to the experiment, it does indeed.