Thank you @szha for the suggestion in the following link: https://github.com/apache/incubator-mxnet/pull/16487
I hope I have provided all the required information.
@MrRaghav thanks for creating the issue. What model of GPU are you using? What's the GPU memory size?
Also, have you tried using export MXNET_GPU_MEM_POOL_TYPE=Round? https://mxnet.apache.org/api/faq/env_var#memory-options
Round: A memory pool that always rounds the requested memory size and allocates memory of the rounded size. MXNET_GPU_MEM_POOL_ROUND_LINEAR_CUTOFF defines how to round up a memory size. Caching and allocating buffered memory works in the same way as the naive memory pool.
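For example (a minimal sketch; the cutoff value shown is the documented default, and the training command is abbreviated):

    export MXNET_GPU_MEM_POOL_TYPE=Round
    # Requests below 2^24 bytes round up to the next power of two; larger
    # requests round up to a multiple of 2^24 (24 is the documented default).
    export MXNET_GPU_MEM_POOL_ROUND_LINEAR_CUTOFF=24
    python3 -m sockeye.train ...   # same training command as before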
Hello, please find the information in the following points -
1) I am using an RTX 2080 Ti.
2) To run sockeye, I used 3 GPUs and specified the device ids. Memory of the GPUs is as follows (a monitoring one-liner is sketched after this list):
username@server:~/username/sockeye$ nvidia-smi --format=csv --query-gpu=memory.total
memory.total [MiB]
11019 MiB
11019 MiB
11019 MiB
3) Regarding the export command, I tried running sockeye.train like below:
export MXNET_GPU_MEM_POOL_TYPE=Round
python3 -m sockeye.train -s trained.BPE.de -t trained.BPE.en -vs dev.BPE.de -vt dev.BPE.en --shared-vocab \
--device-ids -3 --max-checkpoints 3 -o model
But I still got the same error.
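(For reference, per-GPU memory usage can be watched while the job runs; the fields below are standard nvidia-smi query options, and -l 5 refreshes every 5 seconds:)

    nvidia-smi --query-gpu=index,memory.total,memory.used,memory.free --format=csv -l 5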
@fhieber do you have a recommendation on how to run sockeye on the above GPU?
Lowering the batch size should definitely allow you to train a model. You could also try lowering the size of the model (--transformer-model-size and --num-embed), or reduce the number of layers (--num-layers).
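For instance, a smaller configuration along these lines should fit more comfortably in 11 GiB (all sizes here are illustrative, not tuned recommendations):

    python3 -m sockeye.train -s trained.BPE.de -t trained.BPE.en -vs dev.BPE.de -vt dev.BPE.en \
        --shared-vocab --num-layers 4:4 --num-embed 256 --transformer-model-size 256 \
        --batch-size 2048 --device-ids -3 --max-checkpoints 3 -o model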
I am also not sure whether your output of pip3 list | grep mxnet isn't concerning. To my knowledge it is not advisable to have 3 different versions of MXNet installed.
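A minimal cleanup sketch, assuming only the CUDA build is needed:

    pip3 list | grep mxnet            # e.g. mxnet, mxnet-mkl, mxnet-cu101mkl
    pip3 uninstall mxnet mxnet-mkl    # keep mxnet-cu101mkl for GPU training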
Hello, thank you for your suggestion. Actually, I started working on machine translation just a few days ago and wanted to try all the possible scenarios before replying to you. Before contacting the developers, I referred to https://github.com/deepinsight/insightface/issues/257 and had already tried reducing the default batch size from 4096 to 2048, 1024, 512 and many more (in multiples of the 2 or 3 GPUs I used to allot for the job). In all these cases, sockeye.train failed after 2-3 minutes of running.
But yesterday I found one combination which 'seems' to have fixed the out-of-memory issue. Because of this, I didn't uninstall the other versions of MXNet (as you suggested) for the time being.
1) I tried with 5 GPUs and reduced the batch size to 200.
2) The following parameters of sockeye.train worked okay, and it ran for ~33 minutes:
--shared-vocab --num-embed 512 --batch-type sentence --batch-size 200 --num-layers 6:6 --transformer-model-size 512 --device-ids -5 --max-checkpoints 3
3) It didn't prompt any memory issue, but it did prompt a new error:
[ERROR:root] Uncaught exception
Traceback (most recent call last):
  File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/username/.local/lib/python3.7/site-packages/sockeye/train.py", line 997, in ...
    [traceback truncated]
learning rate from lr_scheduler has been overwritten by learning_rate in optimizer.
4) I have checked it and it doesn't seem to be related to out-of-memory (see the sketch after this list). However, there is a similar issue mentioned under pytorch: https://github.com/pytorch/fairseq/issues/2049.
5) I have the following versions of sacrebleu, sockeye and mxnet:
sacrebleu 1.4.10
sockeye 2.1.7
mxnet 1.6.0
mxnet-cu101mkl 1.6.0
mxnet-mkl 1.6.0
6) I don't think opening random issues in every repository is a good idea, but I can't find any such issue/solution in the issues sections of sockeye, mxnet or sacrebleu.
I request you to spare a few minutes and suggest if I missed anything.
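Regarding the warning in point 3/4 above: it appears to be emitted by plain MXNet whenever an optimizer is constructed with both an explicit learning_rate and an lr_scheduler carrying its own base_lr. A minimal sketch (not Sockeye's actual call site; values are arbitrary):

    import mxnet as mx

    # Scheduler with its own base learning rate ...
    sched = mx.lr_scheduler.FactorScheduler(step=1000, factor=0.9, base_lr=0.001)
    # ... which MXNet overwrites with the optimizer's learning_rate, logging:
    # "learning rate from ``lr_scheduler`` has been overwritten by
    #  ``learning_rate`` in optimizer."
    opt = mx.optimizer.Adam(learning_rate=0.0002, lr_scheduler=sched)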
On 1.:
"I tried with 5 GPUs and reduced the batch size to 200"
Due to the hardware and programming-model design in CUDA, it's a good idea to always use a multiple of 32 as the batch size.
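For example, 192 or 224 instead of 200 (a trivial illustration, not part of Sockeye):

    # illustrative helper: round a batch size down to the nearest multiple of 32
    def round_batch_size(n, multiple=32):
        return (n // multiple) * multiple

    print(round_batch_size(200))  # -> 192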
Point 3: sacrebleu 1.4.10 requires a newer version of Sockeye. We recently published a newer version on pypi that is compatible with sacrebleu 1.4.10.
Hello, sorry for the late reply. I was working on your suggestions and used sacrebleu version 1.4.3 to get a successful model with sockeye 2.1.7.
The machine translation model was built successfully. I was able to run the sockeye.translate command, but the translated results are not up to the mark. I will work on it.
Thank you so much for your time. I am closing this issue.
Description
When I run the sockeye.train command with mxnet 1.6.0, it reports two messages in the logs:
1) mxnet.base.MXNetError: [09:58:26] src/storage/./pooled_storage_manager.h:161: cudaMalloc retry failed: out of memory
2) learning rate from lr_scheduler has been overwritten by learning_rate in optimizer.
Basically, I submit sockeye.train as a job on my server and its output comes back as: Run time 00:06:03, FAILED, ExitCode 1.
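The first message can be reproduced in isolation with a deliberately oversized allocation (a minimal sketch; the tensor size is arbitrary, chosen to exceed an 11 GiB card):

    import mxnet as mx

    try:
        # 2e9 float64 values ~= 16 GB, more than an 11 GiB card holds
        x = mx.nd.zeros((2 * 10**9,), ctx=mx.gpu(0), dtype='float64')
        mx.nd.waitall()  # force the asynchronous allocation to complete
    except mx.base.MXNetError as err:
        print("caught:", err)  # cudaMalloc ... out of memory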
Versions of software are as follows:
Error Message
To Reproduce
sockeye 2.1.7 calls mxnet 1.6.0 (installed for CUDA).
Steps to reproduce
python3 -m sockeye.train -d training_data -vs dev.BPE.de -vt dev.BPE.en --shared-vocab -o model
What have you tried to solve it?
Environment
We recommend using our script for collecting the diagnostic information. Run the following command and paste the outputs below:
username@server:~/username/sockeye$ curl --retry 10 -s https://raw.githubusercontent.com/dmlc/gluon-nlp/master/tools/diagnose.py | python
----------Python Info----------
('Version :', '2.7.16')
('Compiler :', 'GCC 8.3.0')
('Build :', ('default', 'Oct 10 2019 22:02:15'))
('Arch :', ('64bit', 'ELF'))
------------Pip Info-----------
('Version :', '20.1.1')
('Directory :', '/home/username/.local/lib/python2.7/site-packages/pip')
----------MXNet Info-----------
No MXNet installed.
----------System Info----------
('Platform :', 'Linux-4.19.0-9-amd64-x86_64-with-debian-10.4')
('system :', 'Linux')
('node :', 'server')
('release :', '4.19.0-9-amd64')
('version :', '#1 SMP Debian 4.19.118-2 (2020-04-29)')
----------Hardware Info----------
('machine :', 'x86_64')
('processor :', '')
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 46 bits physical, 48 bits virtual
CPU(s): 48
On-line CPU(s) list: 0-47
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz
Stepping: 1
CPU MHz: 1200.726
CPU max MHz: 2900.0000
CPU min MHz: 1200.0000
BogoMIPS: 4400.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 30720K
NUMA node0 CPU(s): 0-11,24-35
NUMA node1 CPU(s): 12-23,36-47
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single pti intel_ppin ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a rdseed adx smap intel_pt xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts flush_l1d
----------Network Test----------
Setting timeout: 10
Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0057 sec, LOAD: 0.4408 sec.
Timing for D2L: http://d2l.ai, DNS: 0.0010 sec, LOAD: 0.0191 sec.
Timing for FashionMNIST: https://repo.mxnet.io/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz, DNS: 0.0009 sec, LOAD: 0.6619 sec.
Error open Conda: https://repo.continuum.io/pkgs/free/, HTTP Error 403: Forbidden, DNS finished in 0.00109004974365 sec.
Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0012 sec, LOAD: 0.7398 sec.
Timing for GluonNLP: http://gluon-nlp.mxnet.io, DNS: 0.0012 sec, LOAD: 0.3613 sec.
Timing for D2L (zh-cn): http://zh.d2l.ai, DNS: 0.0011 sec, LOAD: 0.0085 sec.
Timing for GluonNLP GitHub: https://github.com/dmlc/gluon-nlp, DNS: 0.0000 sec, LOAD: 1.2439 sec.
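Note that the script above ran under Python 2.7 and therefore reports "No MXNet installed", while the mxnet packages listed earlier are installed for Python 3. Re-running it under python3 should pick them up:

    curl --retry 10 -s https://raw.githubusercontent.com/dmlc/gluon-nlp/master/tools/diagnose.py | python3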