pavanky closed this pull request 7 years ago.
@soumith Thanks for the links. This PR is not super important from our end :) I sent it in because it was the last commit on our fork that had diverged from upstream Torch.
@soumith, @pavanky, I just want to point out that the MKL kernels depend on special data layouts, so we need to allocate a large buffer and convert the input before the convolution, and then transform the output format back afterwards. I am not saying this necessarily hurts overall performance, but it makes MKL inconvenient to fold into torch/nn: MKL only provides a subset of the nn functions, and those operate on the special data layout without doing the transformations themselves. Besides, what about processors that do not support MKL? I think Torch should maintain its own convolution implementations.
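For context, here is a minimal sketch of the layout round-trip being described, written against the modern oneDNN (formerly MKL-DNN) C++ API rather than the MKL primitives this thread predates; the tensor dimensions and the `nChw8c` blocked format are illustrative assumptions, not what Torch actually used:

```cpp
// Sketch: reordering a plain NCHW tensor into a blocked layout before
// convolution. A mirror-image reorder would convert the output back.
#include <dnnl.hpp>
#include <vector>

int main() {
    using namespace dnnl;
    engine eng(engine::kind::cpu, 0);
    stream s(eng);

    // A plain NCHW tensor, as a framework like Torch stores it.
    memory::dims dims = {1, 8, 32, 32};
    auto nchw_md = memory::desc(dims, memory::data_type::f32,
                                memory::format_tag::nchw);
    // A blocked layout the vectorized convolution kernels prefer.
    auto blocked_md = memory::desc(dims, memory::data_type::f32,
                                   memory::format_tag::nChw8c);

    std::vector<float> data(1 * 8 * 32 * 32, 1.0f);
    memory nchw_mem(nchw_md, eng, data.data());
    memory blocked_mem(blocked_md, eng); // the extra buffer the comment mentions

    // The layout transformation that must run before the convolution.
    reorder(nchw_mem, blocked_mem).execute(s, nchw_mem, blocked_mem);
    s.wait();
    return 0;
}
```

The extra buffer and the two reorders (in and out) are exactly the overhead the comment above is pointing at when the surrounding framework, like torch/nn, keeps its tensors in plain NCHW.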
Wouldn't it be better to just integrate nnpack then?
Closing this, as it no longer seems relevant.
Just FYI, at this point the most efficient option on CPU is probably to use either NNPACK or MKL-DNN: https://github.com/Maratyszcza/NNPACK https://github.com/szagoruyko/nnpack.torch