Open fmassa opened 8 years ago
I gave it a second thought, and I think we don't need to have a buffer finput dependent on the batch size, but only on the number of threads that are going to be used. This could reduce the memory requirements by a big margin (say 10x for 12 threads on a batch size of 128), without loss in runtime.
What do you think? Am I missing something here?
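To make the per-thread idea concrete, here is a minimal sketch of the forward pass in plain C with OpenMP (unfold_one_sample and gemm_conv are hypothetical stand-ins for the unfold/GEMM steps, not the actual nn C implementation): each thread writes into its own slice of a buffer sized by the thread count rather than the batch size.

```c
#include <omp.h>
#include <stdlib.h>

/* Hypothetical stand-ins for the real unfold (im2col) and GEMM steps;
   names and signatures are illustrative, not the library's actual functions. */
void unfold_one_sample(const float *input, float *columns,
                       int nInputPlane, int inH, int inW,
                       int kH, int kW, int outH, int outW);
void gemm_conv(const float *weight, const float *columns, float *output,
               int nOutputPlane, int kernelSize, int outSize);

void conv_forward_batch(const float *input, const float *weight, float *output,
                        int batchSize, int nInputPlane, int nOutputPlane,
                        int inH, int inW, int kH, int kW, int outH, int outW)
{
    const size_t colStride = (size_t)nInputPlane * kH * kW * outH * outW;
    const int nThreads = omp_get_max_threads();

    /* One unfold buffer per thread instead of one per batch element:
       the allocation scales with nThreads, not with batchSize. */
    float *columns = malloc(sizeof(float) * colStride * nThreads);

    #pragma omp parallel for
    for (int b = 0; b < batchSize; b++) {
        float *myColumns  = columns + colStride * omp_get_thread_num();
        const float *in   = input   + (size_t)b * nInputPlane  * inH  * inW;
        float *out        = output  + (size_t)b * nOutputPlane * outH * outW;

        unfold_one_sample(in, myColumns, nInputPlane, inH, inW, kH, kW, outH, outW);
        gemm_conv(weight, myColumns, out, nOutputPlane,
                  nInputPlane * kH * kW, outH * outW);
    }
    free(columns);
}
```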
Currently, SpatialConvolutionMM is quite fast on CPU, but its memory requirements are too high. As it parallelizes the computations over batch examples, it requires a huge buffer (dependent on the batch size) for storing the unfolded image. I'm reasonably ok with that, as we could share this buffer over multiple convolutions to reduce memory usage (even though it still requires a lot of memory). But, as accGradParameters reuses the finput which was already computed in forward (see here and here), this sharing can't be used for training. What is worse, reusing finput in accGradParameters forces us to have another huge buffer fgradInput, of the same size as finput. Thus, I think the memory requirements are too high to use the CPU version of SpatialConvolutionMM in any real-case scenario.
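For a rough sense of scale (hypothetical but typical layer sizes, not numbers from this issue), here is a back-of-the-envelope computation for finput alone; dividing by the batch size (128) and multiplying by the thread count (12) is also where the ~10x figure above comes from:

```c
#include <stdio.h>

int main(void)
{
    /* Hypothetical but typical layer: 3x3 kernel, 64 input planes,
       112x112 output, batch of 128, float storage. */
    size_t batch = 128, nInputPlane = 64, kH = 3, kW = 3, outH = 112, outW = 112;

    size_t finputElems = batch * nInputPlane * kH * kW * outH * outW;
    double gib = finputElems * sizeof(float) / (1024.0 * 1024.0 * 1024.0);

    /* Prints roughly 3.4 GiB for finput alone; fgradInput doubles it. */
    printf("finput: %.1f GiB (fgradInput adds the same again)\n", gib);
    return 0;
}
```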
What do you think of the following: recompute finput in accGradParameters. This reduces the amount of buffer memory by a factor of 2 (no need for fgradInput anymore), and buffers can also be shared between modules. Forward timings stay the same; there is a penalty for backward, though.
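A rough sketch of what the recompute variant could look like, reusing the same hypothetical helpers as in the earlier C sketch (again not the actual implementation): accGradParameters unfolds each sample into a scratch buffer on the fly, so nothing computed in forward has to be kept around.

```c
#include <stddef.h>

/* Same hypothetical unfold helper as in the earlier sketch. */
void unfold_one_sample(const float *input, float *columns,
                       int nInputPlane, int inH, int inW,
                       int kH, int kW, int outH, int outW);
/* Hypothetical GEMM accumulating gradWeight += gradOutput[b] * columns^T. */
void gemm_acc_grad_weight(const float *gradOutput, const float *columns,
                          float *gradWeight, int nOutputPlane,
                          int kernelSize, int outSize);

void conv_accGradParameters(const float *input, const float *gradOutput,
                            float *gradWeight, float *columns /* shared scratch */,
                            int batchSize, int nInputPlane, int nOutputPlane,
                            int inH, int inW, int kH, int kW, int outH, int outW)
{
    for (int b = 0; b < batchSize; b++) {
        const float *in = input      + (size_t)b * nInputPlane  * inH  * inW;
        const float *go = gradOutput + (size_t)b * nOutputPlane * outH * outW;

        /* Re-unfolding here is the backward-time penalty; in exchange,
           `columns` does not need to survive from forward, can be the same
           buffer used by updateGradInput, and can be shared across modules. */
        unfold_one_sample(in, columns, nInputPlane, inH, inW, kH, kW, outH, outW);
        gemm_acc_grad_weight(go, columns, gradWeight, nOutputPlane,
                             nInputPlane * kH * kW, outH * outW);
    }
}
```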
There are other possibilities as well, but they hurt performance even more (while reducing the amount of memory required even further).
What do you think?