This is the dropout function in utils.py:

import torch
from torch.autograd import Variable

def drop_path(x, drop_prob):
    if drop_prob > 0.:
        keep_prob = 1. - drop_prob
        # One Bernoulli draw per sample: the (N, 1, 1, 1) mask broadcasts
        # over the channel and spatial dimensions.
        mask = Variable(torch.cuda.FloatTensor(x.size(0), 1, 1, 1).bernoulli_(keep_prob))
        # Inverted-dropout scaling: dividing by keep_prob keeps the
        # expected activation unchanged, then the mask zeroes dropped samples.
        x.div_(keep_prob)
        x.mul_(mask)
    return x
Question:
Why do we drop along the batch dimension (the 1st dimension)?
Shouldn't we randomly keep and drop some of the filters (along the 2nd dimension)?
Thank you :-)
My understanding is that dropping along the second dimension would be an implementation of "drop channel", whereas dropping along the first dimension is "drop path": for a given sample, the entire output of this branch is zeroed, so the whole path is disabled for that example rather than individual filters.
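To make the distinction concrete, here is a minimal sketch of the two mask shapes (the helper names drop_path_mask and drop_channel_mask are hypothetical, written only for illustration, not part of the repo):

import torch

def drop_path_mask(x, keep_prob):
    # Drop path: one Bernoulli draw per sample, shape (N, 1, 1, 1).
    # It broadcasts over channels and spatial dims, so a dropped sample
    # has the entire branch output zeroed.
    return x.new_empty(x.size(0), 1, 1, 1).bernoulli_(keep_prob)

def drop_channel_mask(x, keep_prob):
    # Drop channel (what the question suggests): one draw per
    # (sample, channel), shape (N, C, 1, 1), zeroing individual filters.
    return x.new_empty(x.size(0), x.size(1), 1, 1).bernoulli_(keep_prob)

x = torch.randn(4, 3, 8, 8)
print(drop_path_mask(x, 0.9).view(-1))       # 4 draws: one per sample
print(drop_channel_mask(x, 0.9).view(4, 3))  # 12 draws: one per channel

With the (N, 1, 1, 1) mask, a "dropped" sample passes nothing through this path at all, which is exactly the per-path regularization drop path is meant to provide.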