aharley / segaware

Segmentation-Aware Convolutional Networks Using Local Attention Masks

Using Im2col and bottom_is_im2col needs more memory #9

Open lazatsoc opened 5 years ago

lazatsoc commented 5 years ago

I am trying to train VGG16 on my own data, with images cropped to 224x224. With the stock VGG16 from the Caffe model zoo (https://gist.github.com/ksimonyan/211839e770f7b538e2d8) I can train with batch size 32. After replacing every convolution layer with an explicit Im2col layer followed by a convolution layer with bottom_is_im2col set, the largest batch size that fits without an "Out of memory" error is 8.

First, is this expected behavior, given that ordinary convolution layers already use im2col internally? Second, is there a way to reduce the memory requirements?

Thanks in advance.
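A back-of-the-envelope sketch of why the explicit Im2col layer is expected to be much heavier (my own estimate, not a measurement from this repo): a standalone Im2col layer materializes its output as a full top blob of shape (N, C·k·k, H_out·W_out) that lives for the whole forward/backward pass, and training also allocates a diff blob of the same size; a fused convolution layer instead writes im2col columns into a temporary buffer it can reuse. Assuming 3x3 kernels with pad 1 and stride 1 (the VGG16 configuration) and float32 blobs:

```python
def im2col_blob_bytes(n, c, h, w, k=3, pad=1, stride=1, dtype_bytes=4):
    """Size of the materialized im2col top blob for one conv layer.

    n, c, h, w: input blob shape; k/pad/stride: conv geometry.
    The blob holds c*k*k values per output spatial position.
    """
    h_out = (h + 2 * pad - k) // stride + 1
    w_out = (w + 2 * pad - k) // stride + 1
    return n * (c * k * k) * (h_out * w_out) * dtype_bytes

# First VGG16 stage at 224x224 with batch size 32:
# conv1_1 input is (32, 3, 224, 224), conv1_2 input is (32, 64, 224, 224).
print(im2col_blob_bytes(32, 3, 224, 224) / 2**20, "MiB")    # 165.375 MiB
print(im2col_blob_bytes(32, 64, 224, 224) / 2**20, "MiB")   # 3528.0 MiB
```

So a single early layer's im2col blob is already ~3.5 GiB at batch 32 (roughly 7 GiB with its diff), and every replaced convolution adds its own such blob, which would explain the drop from batch size 32 to 8 on the same GPU.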