Closed: chenyanyin closed this issue 5 years ago
See the details in the README: "python -m torch.distributed.launch --nproc_per_node=gpu_num train.py". Setting gpu_num=1 means running with 1 GPU.
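For reference, the README command can be adapted as follows (a sketch; the script name train.py and your GPU count may differ):

```shell
# Single-GPU run: launch one process on the local node
python -m torch.distributed.launch --nproc_per_node=1 train.py

# Multi-GPU run on a machine with 4 GPUs: one process per GPU
python -m torch.distributed.launch --nproc_per_node=4 train.py
```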
I got it, thx. Sorry, the problem I met was actually not caused by that, but I solved it. Thx again.
@megvii-wzc Does the code support resuming training after an interruption? If so, how?
You just need to add the argument -c to specify the path of the model checkpoint you want to continue from.
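Assuming -c takes the checkpoint path as described above, resuming might look like this (the checkpoint path here is hypothetical):

```shell
# Resume training from a saved checkpoint (path is an example, not from the repo)
python -m torch.distributed.launch --nproc_per_node=1 train.py -c ./checkpoints/latest.pth
```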
ok, thx.
thx