apache / mxnet

Lightweight, Portable, Flexible Distributed/Mobile Deep Learning with Dynamic, Mutation-aware Dataflow Dep Scheduler; for Python, R, Julia, Scala, Go, Javascript and more
https://mxnet.apache.org
Apache License 2.0

[BUG] lr_scheduler does not work as expected when training from a checkpoint #17357

Open kohillyang opened 4 years ago

kohillyang commented 4 years ago

Description

As far as I know, the optimizer derives num_update from its _index_update_count, which is stored separately for each device. This means that if the trainer states are saved on one GPU device and loaded onto another, the behavior of an lr_scheduler that relies on num_update will change.

This is confusing, because in our case the GPUs are shared by the whole lab, so when I want to restore the trainer states, the available GPUs may be different from before. At least when the number of GPUs is the same, the behavior should be the same; and if the GPUs differ, we should get an error or a warning.
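To make the impact concrete, here is a minimal sketch (separate from the reproduction below) showing that MultiFactorScheduler only sees the num_update value it is called with, so if num_update restarts after reloading the trainer states, the schedule restarts as well:

import mxnet as mx

# The scheduler output depends only on num_update, so a reset of num_update
# after restoring the trainer states effectively restarts the schedule.
sched = mx.lr_scheduler.MultiFactorScheduler(step=[2, 4], factor=.1, base_lr=1e-2)
print(sched(1))  # 0.01   -- before the first boundary
print(sched(3))  # 0.001  -- after num_update passes step=2
print(sched(5))  # 0.0001 -- after num_update passes step=4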

To Reproduce

If I save the states on GPU 2 and also load them back on GPU 2, the lr scheduler works as I expect, and the final num_update is 2:

import gluoncv
import mxnet as mx
import mxnet.autograd as ag

net = gluoncv.model_zoo.resnet50_v1b(pretrained=True)
lr_scheduler = mx.lr_scheduler.MultiFactorScheduler(step=[2, 4],
                                                    warmup_mode="constant", factor=.1,
                                                    base_lr=1e-2,)
net.collect_params().reset_ctx([mx.gpu(2)])
trainer = mx.gluon.Trainer(
    net.collect_params(),
    'sgd',
    {'learning_rate': 1e-2,
     'lr_scheduler': lr_scheduler
     })

with ag.record():
    y_hat = net(mx.nd.random.randn(1, 3, 224, 224, ctx=mx.gpu(2)))
ag.backward(y_hat)
trainer.step(1)
print(trainer.optimizer.num_update)
print(trainer.learning_rate)
trainer.save_states(fname="test.states")

net.collect_params().reset_ctx([mx.gpu(2)])  # same device the states were saved on
lr_scheduler2 = mx.lr_scheduler.MultiFactorScheduler(step=[2, 4],
                                                    warmup_mode="constant", factor=.1,
                                                    base_lr=1e-2,)
trainer2 = mx.gluon.Trainer(
    net.collect_params(),
    'sgd',
    {'learning_rate': 1e-2,
     'lr_scheduler': lr_scheduler2
     })
trainer2.load_states("test.states")
with ag.record():
    y_hat = net(mx.nd.random.randn(1, 3, 224, 224, ctx=mx.gpu(2)))
ag.backward(y_hat)
print(trainer2.learning_rate)
print(trainer2.optimizer.num_update)
trainer2.step(1)
print(trainer2.learning_rate)
print(trainer2.optimizer.num_update)
outputs:
1
0.01
0.01
1
0.01
2

If I save the states on GPU 2 and load them on GPU 3, the lr scheduler does not work as I expect: the final num_update is 1 instead of 2.

import gluoncv
import mxnet as mx
import mxnet.autograd as ag

net = gluoncv.model_zoo.resnet50_v1b(pretrained=True)
lr_scheduler = mx.lr_scheduler.MultiFactorScheduler(step=[2, 4],
                                                    warmup_mode="constant", factor=.1,
                                                    base_lr=1e-2,)
net.collect_params().reset_ctx([mx.gpu(2)])
trainer = mx.gluon.Trainer(
    net.collect_params(),
    'sgd',
    {'learning_rate': 1e-2,
     'lr_scheduler': lr_scheduler
     })

with ag.record():
    y_hat = net(mx.nd.random.randn(1, 3, 224, 224, ctx=mx.gpu(2)))
ag.backward(y_hat)
trainer.step(1)
print(trainer.optimizer.num_update)
print(trainer.learning_rate)
trainer.save_states(fname="test.states")

net.collect_params().reset_ctx([mx.gpu(3)])  # different device from the one the states were saved on
lr_scheduler2 = mx.lr_scheduler.MultiFactorScheduler(step=[2, 4],
                                                    warmup_mode="constant", factor=.1,
                                                    base_lr=1e-2,)
trainer2 = mx.gluon.Trainer(
    net.collect_params(),
    'sgd',
    {'learning_rate': 1e-2,
     'lr_scheduler': lr_scheduler2
     })
trainer2.load_states("test.states")
with ag.record():
    y_hat = net(mx.nd.random.randn(1, 3, 224, 224, ctx=mx.gpu(3)))
ag.backward(y_hat)
print(trainer2.learning_rate)
print(trainer2.optimizer.num_update)
trainer2.step(1)
print(trainer2.learning_rate)
print(trainer2.optimizer.num_update)
outputs:
1
0.01
0.01
1
0.01
1 # Wrong


Environment

We recommend using our script for collecting the diagnostic information. Run the following command and paste the outputs below:

curl --retry 10 -s https://raw.githubusercontent.com/dmlc/gluon-nlp/master/tools/diagnose.py | python

# paste outputs here
----------Python Info----------
Version      : 3.6.5
Compiler     : GCC 7.2.0
Build        : ('default', 'Apr 29 2018 16:14:56')
Arch         : ('64bit', '')
------------Pip Info-----------
Version      : 18.1
Directory    : /data2/zyx/yks/anaconda3/lib/python3.6/site-packages/pip
----------MXNet Info-----------
/data2/zyx/yks/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
Version      : 1.6.0
Directory    : /data2/zyx/yks/anaconda3/lib/python3.6/site-packages/mxnet
Num GPUs     : 9
Commit Hash   : 8a3519934f3ee5e9ac9406c2a4edb377af5e8cc7
----------System Info----------
Platform     : Linux-4.4.0-122-generic-x86_64-with-debian-stretch-sid
system       : Linux
node         : 7b5642bf21b5
release      : 4.4.0-122-generic
version      : #146-Ubuntu SMP Mon Apr 23 15:34:04 UTC 2018
----------Hardware Info----------
machine      : x86_64
processor    : x86_64
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                56
On-line CPU(s) list:   0-55
Thread(s) per core:    2
Core(s) per socket:    14
Socket(s):             2
NUMA node(s):          2
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 79
Model name:            Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz
Stepping:              1
CPU MHz:               1284.765
CPU max MHz:           3500.0000
CPU min MHz:           1200.0000
BogoMIPS:              5201.94
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              35840K
NUMA node0 CPU(s):     0-13,28-41
NUMA node1 CPU(s):     14-27,42-55
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb invpcid_single intel_pt retpoline kaiser tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdseed adx smap xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts
----------Network Test----------
Setting timeout: 10
Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0021 sec, LOAD: 1.8166 sec.
Timing for GluonNLP GitHub: https://github.com/dmlc/gluon-nlp, DNS: 0.0008 sec, LOAD: 2.6594 sec.
Timing for GluonNLP: http://gluon-nlp.mxnet.io, DNS: 0.6507 sec, LOAD: 2.2549 sec.
Timing for D2L: http://d2l.ai, DNS: 1.3830 sec, LOAD: 0.8381 sec.
Timing for D2L (zh-cn): http://zh.d2l.ai, DNS: 0.0614 sec, LOAD: 0.7182 sec.
Timing for FashionMNIST: https://repo.mxnet.io/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz, DNS: 0.3121 sec, LOAD: 1.5671 sec.
Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.1946 sec, LOAD: 4.2506 sec.
Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.1737 sec, LOAD: 0.8662 sec.
wkcn commented 4 years ago

It is a bug: MXNet saves index_update_counts per device_id, so when the device_id changes, index_update_counts is reset. https://github.com/apache/incubator-mxnet/blob/master/python/mxnet/optimizer/optimizer.py#L408
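A simplified sketch of that bookkeeping (abridged and renamed for illustration; see the linked optimizer.py for the real code):

# Simplified sketch: update counts live in a dict keyed by device_id, so an
# unseen device starts from empty counts and num_update restarts there.
class OptimizerSketch:
    def __init__(self):
        self._all_index_update_counts = {0: {}}  # {device_id: {param_index: count}}
        self._index_update_count = self._all_index_update_counts[0]
        self.num_update = 0

    def _set_current_context(self, device_id):
        if device_id not in self._all_index_update_counts:
            self._all_index_update_counts[device_id] = {}
        self._index_update_count = self._all_index_update_counts[device_id]

    def _update_count(self, index):
        self._index_update_count.setdefault(index, 0)
        self._index_update_count[index] += 1
        self.num_update = max(self._index_update_count[index], self.num_update)

Loading the trainer states restores the counts recorded for the old device_id, but the first update on the new device switches to an empty dict, which is why num_update in the reproduction above stays at 1.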

wkcn commented 4 years ago

A temporary workaround: use the environment variable CUDA_VISIBLE_DEVICES to select the GPU devices and keep the device ids fixed in the code, e.g. CUDA_VISIBLE_DEVICES=1,2 python train.py
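A sketch of the idea in Python (illustrative only; setting the variable on the command line as above is the usual and safer way):

import os

# Pin the physical GPU before MXNet touches CUDA, so the script can always
# refer to the same logical device id (mx.gpu(0)) regardless of which
# physical GPU happens to be free when resuming.
os.environ.setdefault("CUDA_VISIBLE_DEVICES", "2")  # physical GPU 2 -> mx.gpu(0)

import mxnet as mx
ctx = mx.gpu(0)  # saved and restored trainer states now refer to the same device id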

WilliamOnVoyage commented 4 years ago

Also seeing this issue with the distributed training framework, since the context info (device_id) differs among the distributed processes and is not saved as part of the checkpoint.