daijifeng001 / MNC

Instance-aware Semantic Segmentation via Multi-task Network Cascades

out of memory when training on my own data #25

Closed qinhaifangpku closed 8 years ago

qinhaifangpku commented 8 years ago

Hi, all! I am using MNC to train a model on my own data. Each image in my dataset has between 1 and 200 masks. When I run training on a GPU with about 10 GB of memory, I hit the error below. (I print `mask_list` in `lib/pylayer/mnc_data_layer.py`, line 150.)

```
image_size = (406, 438) mask_list:2
image_size = (406, 438) mask_list:1
image_size = (406, 438) mask_list:42
image_size = (406, 439) mask_list:21
image_size = (406, 438) mask_list:2
image_size = (406, 438) mask_list:3
image_size = (406, 439) mask_list:71
Traceback (most recent call last):
  File "./tools/train_net.py", line 96, in <module>
    _solver.train_model(args.max_iters)
  File "/data2/qinhaifang/MNC/tools/../lib/caffeWrapper/SolverWrapper.py", line 128, in train_model
    self.solver.step(1)
MemoryError
```

Thanks in advance for your help!

HaozhiQi commented 8 years ago

I suspect this error is caused by your huge number of masks. The masks are padded to the shape of the largest mask, so with 71 masks on a ~400x400 image you could end up with something like a 400x400x70 array.
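To see why the mask count matters, here is a minimal NumPy sketch (not MNC's actual code; the shapes are taken from the log above) of padding variable-size masks to the largest mask's shape, which makes the blob's memory grow linearly with the number of masks:

```python
import numpy as np

# Hypothetical illustration: three masks of different sizes are padded
# to a common (N, max_h, max_w) array, as the data layer does.
masks = [np.ones((h, w), dtype=np.float32)
         for h, w in [(50, 60), (406, 438), (100, 120)]]
max_h = max(m.shape[0] for m in masks)
max_w = max(m.shape[1] for m in masks)

# Allocate the padded blob and copy each mask into its top-left corner.
padded = np.zeros((len(masks), max_h, max_w), dtype=np.float32)
for i, m in enumerate(masks):
    padded[i, :m.shape[0], :m.shape[1]] = m

print(padded.shape)   # (3, 406, 438)
print(padded.nbytes)  # 3 * 406 * 438 * 4 bytes
```

With 70 masks padded to ~406x438 float32, the blob alone is roughly 70 * 406 * 438 * 4 bytes ≈ 50 MB, before any of Caffe's intermediate buffers, which multiply that figure quickly on the GPU.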

qinhaifangpku commented 8 years ago

@Oh233 Thank you! I just found that too. I now randomly sample a subset of the masks for training, and it works. Thank you very much!
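For anyone hitting the same error, a minimal sketch of the workaround (this is my own hypothetical helper, not the questioner's exact change; `MAX_MASKS` is an assumed budget you would tune to your GPU memory):

```python
import numpy as np

MAX_MASKS = 16  # assumed per-image mask budget; tune to fit GPU memory


def subsample_masks(mask_list, max_masks=MAX_MASKS, rng=None):
    """Return at most max_masks masks, chosen uniformly without replacement."""
    rng = rng or np.random.default_rng(0)
    if len(mask_list) <= max_masks:
        return list(mask_list)
    idx = rng.choice(len(mask_list), size=max_masks, replace=False)
    return [mask_list[i] for i in idx]


# An image with 200 masks gets capped at the budget:
masks = [np.zeros((10, 10)) for _ in range(200)]
print(len(subsample_masks(masks)))  # 16
```

This caps the padded blob at `max_masks * max_h * max_w` regardless of how many masks an image has, at the cost of dropping some instances from each training iteration.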