torch / nn

Remove dependency of finput in accGradParams of SpatialConvolutionMM #501

Open · fmassa opened this issue 8 years ago

fmassa commented 8 years ago

Currently, SpatialConvolutionMM is quite fast on CPU, but its memory requirements are too high.

Because it parallelizes the computation over batch examples, it requires a huge buffer (whose size depends on the batch size) for storing the unfolded image. I'm reasonably OK with that on its own, since we could share this buffer across multiple convolutions to reduce memory usage (even though it still requires a lot of memory). But since accGradParameters reuses the finput that was already computed in forward (see here and here), this sharing can't be used during training. What is worse, reusing finput in accGradParameters forces us to keep another huge buffer, fgradInput, of the same size as finput. So I think the memory requirements are too high to use the CPU version of SpatialConvolutionMM in any real-world scenario.
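To give a sense of scale, here is a rough back-of-the-envelope estimate. It assumes the usual im2col layout of (nInputPlane·kH·kW) × (outputH·outputW) floats per example for finput; the layer sizes are made up purely for illustration and are not taken from any particular network:

```c
#include <stdio.h>

int main(void) {
    /* Illustrative layer configuration (made up, not from the issue). */
    long batchSize   = 128;
    long nInputPlane = 64;
    long kW = 3, kH = 3;
    long outputW = 112, outputH = 112;

    /* One unfolded example: (nInputPlane * kH * kW) x (outputH * outputW) floats,
     * assuming the usual im2col layout. */
    long perExample = nInputPlane * kH * kW * outputH * outputW;
    double finputGB = (double)batchSize * perExample * sizeof(float) / 1e9;

    printf("finput alone: %.2f GB\n", finputGB);
    printf("finput + same-sized fgradInput: %.2f GB\n", 2.0 * finputGB);
    return 0;
}
```

With these (hypothetical) numbers, finput alone is about 3.7 GB, and keeping the same-sized fgradInput around for accGradParameters doubles that, for a single convolution layer.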

What do you think of the following:

There are other possibilities as well, but they hurt performance even more (while reducing the amount of memory required).

What do you think?

fmassa commented 8 years ago

On second thought, I don't think the finput buffer needs to depend on the batch size, only on the number of threads that will be used. This could reduce the memory requirements by a large factor (roughly 10x for 12 threads and a batch size of 128), without any loss in runtime.
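To make the idea a bit more concrete, here is a rough sketch in plain C with OpenMP (compile with -fopenmp) of what per-thread finput buffers could look like. unfold_example and gemm_forward are dummy placeholders standing in for the real unfolding and GEMM steps, and all sizes are illustrative rather than taken from the actual SpatialConvolutionMM code:

```c
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>

/* Dummy stand-in for the real im2col-style unfolding of one example. */
static void unfold_example(const float *in, float *finput, long n) {
    for (long i = 0; i < n; i++)
        finput[i] = in[i % 16];
}

/* Dummy stand-in for the weight x unfolded-input GEMM of one example. */
static void gemm_forward(const float *finput, float *out, long n) {
    float s = 0.f;
    for (long i = 0; i < n; i++)
        s += finput[i];
    *out = s;
}

int main(void) {
    const long batchSize    = 128;
    const long unfoldedSize = 1 << 20;  /* per-example unfolded size, illustrative */
    const int  nThreads     = omp_get_max_threads();

    float input[16]   = {0};
    float output[128] = {0};

    /* One unfolded-image buffer per thread instead of per batch example:
     * peak memory scales with nThreads, not with batchSize. */
    float *buffers = malloc((size_t)nThreads * unfoldedSize * sizeof *buffers);
    if (!buffers) return 1;

    #pragma omp parallel for schedule(static)
    for (long b = 0; b < batchSize; b++) {
        /* Each thread reuses its own slice of the shared allocation. */
        float *finput = buffers + (size_t)omp_get_thread_num() * unfoldedSize;
        unfold_example(input, finput, unfoldedSize);
        gemm_forward(finput, &output[b], unfoldedSize);
    }

    printf("processed %ld examples with %d thread-local buffers\n",
           batchSize, nThreads);
    free(buffers);
    return 0;
}
```

The key point is only the buffer indexing: each thread unfolds into its own slice and reuses it for every example it processes, so the allocation grows with the thread count rather than the batch size.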

What do you think? Am I missing something here?