z744364418p opened this issue 6 years ago
That's probably caused by a lack of memory. Reduce the number of CPU threads used for preprocessing.
See the README file, which says: "if you have bug about short of memory, set the 'n_worker_preprocessing' in config_submit.py to a int that is smaller than your core number."
Alternatively, you can add memory.
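As a sketch of the setting the README refers to, assuming `config_submit.py` holds a plain dict of options (the surrounding field names here are illustrative, not the project's actual layout):

```python
import multiprocessing

# Hypothetical excerpt from config_submit.py.
# Capping the worker count below the physical core count leaves each
# preprocessing worker a larger share of RAM, which is what the README's
# advice about 'n_worker_preprocessing' amounts to.
config = {
    'n_worker_preprocessing': max(1, multiprocessing.cpu_count() // 2),
}
```

Halving the core count is just one possible heuristic; on a machine with little RAM per core you may need to go as low as 1 or 2 workers.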
For reference, preprocessing the stage2 data completed successfully for me on an AWS r4.2xlarge instance (8 CPUs, 61 GB RAM), but hit the out-of-memory failure on a t2.2xlarge (8 CPUs, 32 GB RAM)... i.e., it looks like you need more than 4 GB per CPU.
```
Traceback (most recent call last):
  File "prepare.py", line 374, in <module>
    full_prep(step1=True,step2=True)
  File "prepare.py", line 181, in full_prep
    _=pool.map(partial_savenpy,range(N))
  File "/home/gpu/anaconda2/lib/python2.7/multiprocessing/pool.py", line 253, in map
    return self.map_async(func, iterable, chunksize).get()
  File "/home/gpu/anaconda2/lib/python2.7/multiprocessing/pool.py", line 572, in get
    raise self._value
MemoryError
```
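The traceback shows the `MemoryError` surfacing in the parent process even though the allocation failed inside a worker: `pool.map` calls `map_async(...).get()`, which re-raises the first exception any worker hit. A minimal sketch of that propagation, with a simulated failure standing in for a real out-of-memory allocation:

```python
from multiprocessing import Pool

def simulate_oom(i):
    # Stand-in for a worker whose preprocessing step exhausts RAM.
    raise MemoryError("worker %d ran out of memory" % i)

def run():
    # map() blocks on map_async(...).get(); get() re-raises the worker's
    # exception in the parent, producing a traceback like the one above.
    with Pool(processes=2) as pool:
        try:
            pool.map(simulate_oom, range(4))
        except MemoryError as exc:
            return str(exc)
    return None

if __name__ == "__main__":
    print(run())
```

This is why the traceback points at `pool.py` rather than at the preprocessing code that actually exhausted memory.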