BVLC / caffe

Caffe: a fast open framework for deep learning.
http://caffe.berkeleyvision.org/

dropout in place incompatible with max pooling #117

Closed. mavenlin closed this issue 10 years ago.

mavenlin commented 10 years ago

It took me several hours to finally track this problem down. In my own implementation of dropout in cuda-convnet, I randomly drop half of the nodes at training time and multiply by one half at test time. In Caffe, the surviving nodes are instead multiplied by two during training, and nothing is done at test time. The two approaches seem equivalent, but they are not when dropout is applied in place on top of a max pooling layer: the backward pass of max pooling needs its own output, and in-place dropout corrupts that output by scaling it by a factor of two. For dropout, this can be avoided by multiplying by one half at test time instead.
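
To make the incompatibility concrete, here is a minimal sketch of the two conventions in plain C++ (the function names are illustrative, not Caffe's). Run in place on a pooling output with ratio 0.5, the first version leaves the kept activations at twice the stored maxima, so a backward pass that re-compares values can no longer find them:

```cpp
#include <random>
#include <vector>

// Two dropout conventions with drop ratio p (equal in expectation).
// Caffe-style "inverted" dropout: scale kept units by 1/(1-p) at training
// time, do nothing at test time. cuda-convnet-style: only zero units at
// training time, scale by (1-p) once at test time.
void dropout_inverted_train(std::vector<float>& x, float p, std::mt19937& rng) {
  std::bernoulli_distribution drop(p);
  const float scale = 1.0f / (1.0f - p);
  for (float& v : x)
    v = drop(rng) ? 0.0f : v * scale;  // kept values no longer equal the pooled maxima
}

void dropout_classic_train(std::vector<float>& x, float p, std::mt19937& rng) {
  std::bernoulli_distribution drop(p);
  for (float& v : x)
    v = drop(rng) ? 0.0f : v;          // kept values are untouched
}

void dropout_classic_test(std::vector<float>& x, float p) {
  for (float& v : x) v *= (1.0f - p);  // compensate once at test time
}
```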

Any ideas on how to prevent in-place operation when the data is needed in the backward pass? In cuda-convnet there is a useAct flag that indicates the activation data will be needed by the layer later and should not be overwritten.

kloudkl commented 10 years ago

@dnouri implemented dropout in cuda-convnet with a mask matrix that drops out units while keeping the underlying data intact.

kloudkl commented 10 years ago

Sorry, the above implementation is the same as yours. In practice, though, dropout is usually applied to the fully connected layers. Is there any special reason to apply it to the max pooling layer?

mavenlin commented 10 years ago

Yes, in my paper Network in Network, dropout is applied to the max pooling layer. Dropout is also applied to the max pooling layer in convolutional maxout networks; one example is here: https://github.com/lisa-lab/pylearn2/blob/master/pylearn2/scripts/papers/maxout/cifar10.yaml

mavenlin commented 10 years ago

We can keep the data intact by allocating a top blob different from the bottom, though. Two disadvantages here:

  1. It costs extra memory.
  2. I didn't realize it could cause a problem, so I just did it in place.

Yangqing commented 10 years ago

@mavenlin if we need to keep both versions, having a separate copy would probably be necessary, so I wouldn't worry too much about the extra memory. It would indeed be helpful to have a mechanism to check whether a blob can be used in in-place operations, probably in net.cpp when we construct the layers.
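
One way such a check might look, sketched as hypothetical host code (the struct, field, and function names are invented for illustration; this is not Caffe's actual API):

```cpp
#include <algorithm>
#include <stdexcept>
#include <string>
#include <vector>

// A layer declares whether its Backward reads its own top data, and the net
// rejects any later layer that wants to overwrite that top in place.
struct LayerInfo {
  std::string name;
  std::vector<std::string> bottom_names, top_names;
  bool backward_needs_top_data;  // true for a pooling layer that re-compares values
};

// `producer` wrote the shared blob; `consumer` wants to compute in place on it.
void CheckInPlaceAllowed(const LayerInfo& producer, const LayerInfo& consumer) {
  for (const std::string& blob : consumer.top_names) {
    const bool in_place =
        std::find(consumer.bottom_names.begin(), consumer.bottom_names.end(),
                  blob) != consumer.bottom_names.end();
    const bool produced_here =
        std::find(producer.top_names.begin(), producer.top_names.end(),
                  blob) != producer.top_names.end();
    if (in_place && produced_here && producer.backward_needs_top_data) {
      throw std::runtime_error(consumer.name + " cannot run in place on '" +
                               blob + "': " + producer.name +
                               " needs that data during Backward");
    }
  }
}
```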

sguada commented 10 years ago

I think this could be fixed by changing the max pooling layer, which shouldn't rely on comparing values against the max for backprop, since that can introduce errors. For example, if two inputs in a window share the same max value, both would receive the gradient during backprop. If max pooling instead relied on a mask, similar to the dropout layer, there would be no problem.

Sergio
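
A rough 1-D sketch of that mask idea (illustrative code, not the implementation that was later merged): the forward pass records the argmax index of each window, and the backward pass routes the gradient purely by those indices, so ties have a single winner and a later in-place change to the output cannot affect backprop:

```cpp
#include <vector>

// Max pooling that records, for each output, the index of the winning input.
void MaxPoolForward1D(const std::vector<float>& bottom, int kernel, int stride,
                      std::vector<float>* top, std::vector<int>* argmax) {
  const int n = static_cast<int>(bottom.size());
  top->clear();
  argmax->clear();
  for (int start = 0; start + kernel <= n; start += stride) {
    int best = start;
    for (int i = start + 1; i < start + kernel; ++i)
      if (bottom[i] > bottom[best]) best = i;     // single winner, even on ties
    top->push_back(bottom[best]);
    argmax->push_back(best);
  }
}

// Backward routes gradients by the stored indices, never by re-comparing values.
void MaxPoolBackward1D(const std::vector<float>& top_diff,
                       const std::vector<int>& argmax, int bottom_size,
                       std::vector<float>* bottom_diff) {
  bottom_diff->assign(bottom_size, 0.0f);
  for (size_t j = 0; j < top_diff.size(); ++j)
    (*bottom_diff)[argmax[j]] += top_diff[j];     // scatter by index, accumulate overlaps
}
```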


Yangqing commented 10 years ago

That is true. Explicitly storing the indices ("sufficient statistics" for the mask) during the forward pass would help (and would also increase speed).

Yangqing


sguada commented 10 years ago

I will work on that. Right now, MaxPoolBackward accounts for 3.84% of the time, while MaxPoolForward accounts for only 0.79% of the time.

Sergio


sguada commented 10 years ago

@mavenlin take a look at #162 and let me know if it fixes the problem. Comments are welcome.

@Yangqing I stored the indices, but in the GPU backward pass I still needed to do extra comparisons to avoid write races.
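
For context, a serial C++ sketch of the gather-style backward that calls for those comparisons (hypothetical code, not the kernel in #162): with one GPU thread per bottom element, each thread scans the windows covering its element and tests their stored indices, instead of letting top elements scatter-add into shared bottom positions:

```cpp
#include <algorithm>
#include <vector>

// Each bottom element gathers gradient from every pooling window covering it
// whose stored argmax is that element. Organized this way, a thread per
// bottom element writes only its own entry of bottom_diff, so there are no
// concurrent-write races; the price is the extra index comparisons.
void MaxPoolBackwardGather1D(const std::vector<float>& top_diff,
                             const std::vector<int>& argmax,
                             int kernel, int stride, int bottom_size,
                             std::vector<float>* bottom_diff) {
  bottom_diff->assign(bottom_size, 0.0f);
  const int num_top = static_cast<int>(top_diff.size());
  for (int b = 0; b < bottom_size; ++b) {  // one "thread" per bottom element
    // Range of window indices j whose span [j*stride, j*stride + kernel)
    // contains position b.
    const int first = (b < kernel) ? 0 : (b - kernel) / stride + 1;
    const int last = std::min(num_top - 1, b / stride);
    float sum = 0.0f;
    for (int j = first; j <= last; ++j)
      if (argmax[j] == b) sum += top_diff[j];  // compare stored index; no scatter
    (*bottom_diff)[b] = sum;
  }
}
```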

shelhamer commented 10 years ago

Addressed by #162.