[Closed] @501177639 closed this issue 8 years ago
Are you interested in verifying this and submitting a PR? Looping in the original author and reviewer @winstywang @wistone
Checked with @wistone. Indeed, this implementation is not correct. @501177639 Could you make a PR for this issue?
I have already tried to modify it, but I'm sorry to say the mxnet code is quite difficult for me to understand, in particular the function unpack_patch2col.
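(For anyone else stuck on the same function: unpack_patch2col is mshadow's im2col-style patch extraction, which lays out each receptive field as a column so that the convolution becomes a matrix multiply. A minimal numpy sketch of the idea with dilation folded in; toy code only, not mshadow's actual API, and it ignores batching, channels, stride, and padding:)

```python
import numpy as np

def im2col_dilated(img, k, dilate):
    """Toy 2-D im2col: one row per kernel tap, one column per output position.
    Illustrative sketch only; mshadow's unpack_patch2col is templated C++."""
    h, w = img.shape
    span = dilate * (k - 1) + 1            # effective (dilated) kernel extent
    out_h, out_w = h - span + 1, w - span + 1
    cols = np.empty((k * k, out_h * out_w))
    for ky in range(k):
        for kx in range(k):
            # taps within a patch are spaced `dilate` pixels apart
            patch = img[ky * dilate: ky * dilate + out_h,
                        kx * dilate: kx * dilate + out_w]
            cols[ky * k + kx] = patch.reshape(-1)
    return cols  # convolution then reduces to: weights_row @ cols
```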
I have added a fix for this issue in the respective "feature/convolution-dilate" branches of my forks of mshadow and mxnet at
https://github.com/kadeng/mshadow/tree/feature/convolution-dilate
https://github.com/kadeng/mxnet/tree/feature/convolution-dilate
I verified, by inspecting the impulse response of the Convolution op, that it is actually doing the right thing. The change cuts across both projects, so I cannot put it in a single pull request. The changes come with a unit test.
Please review, and incorporate the changes into the main codebase as you see fit.
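(A minimal numpy sketch of this kind of impulse-response check, for readers following along; illustrative only, not the unit test shipped with the fix. Convolving a unit impulse with a dilated kernel should produce nonzero taps spaced dilate apart, spanning dilate * (k - 1) + 1 samples:)

```python
import numpy as np

def dilate_kernel(kernel, dilate):
    """Insert dilate - 1 zeros between kernel taps (1-D, illustrative only)."""
    k = len(kernel)
    out = np.zeros(dilate * (k - 1) + 1)
    out[::dilate] = kernel
    return out

k, dilate = 3, 3
kernel = np.arange(1, k + 1, dtype=float)       # [1, 2, 3]
impulse = np.zeros(16)
impulse[8] = 1.0                                # unit impulse

response = np.convolve(impulse, dilate_kernel(kernel, dilate))
taps = np.flatnonzero(response)
assert np.all(np.diff(taps) == dilate)                   # taps land dilate apart
assert taps[-1] - taps[0] + 1 == dilate * (k - 1) + 1    # effective extent
print(response[taps])                           # recovers the kernel taps
```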
@kadeng Thanks for your contribution. Please first make a PR to mshadow, and once that's merged, make a PR to mxnet. This is our standard protocol when working across multiple repos.
I created the first PR in mshadow. It now passes all automated checks.
The first pull request has been merged into mshadow. I have opened the second one for MXNet: https://github.com/dmlc/mxnet/pull/2069
This has been merged now, so the issue can be closed.
At https://github.com/dmlc/mxnet/blob/master/src/operator/convolution-inl.h#L341, when dilate != 1, should the effective kernel size really be ksize_y * param_.dilate[0] - 1? I think it should be param_.dilate[0] * (ksize_y - 1) + 1.
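(For what it's worth, the two expressions coincide only at dilate == 2, since ksize_y * d - 1 equals d * (ksize_y - 1) + 1 exactly when d == 2, which may be why the error was easy to miss. A quick check, with a hypothetical ksize_y of 3:)

```python
k = 3  # hypothetical kernel size, for illustration only
for d in (1, 2, 3, 4):
    print(d, k * d - 1, d * (k - 1) + 1)
# d=1: 2 vs 3, d=2: 5 vs 5 (agree), d=3: 8 vs 7, d=4: 11 vs 9
```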
Caffe implements it the same way here: https://github.com/BVLC/caffe/blob/master/src/caffe/layers/conv_layer.cpp#L17
DeepLab also has a paper discussing it: http://arxiv.org/pdf/1412.7062v3.pdf (see Figure 1). In the paper, DeepLab refers to dilation as a "hole", and the code is at https://bitbucket.org/deeplab/deeplab-public/downloads