FluxML / NNlib.jl

Neural Network primitives with multiple backends

Use oneDNN #74

Open · vchuravy opened this issue 6 years ago

vchuravy commented 6 years ago

For improved CPU performance, it would be grand if we could optionally use the open-source MKL-DNN library: https://github.com/intel/mkl-dnn

jekbradbury commented 6 years ago

Unfortunately, while MKL-DNN is OSS, it depends on the closed-source MKL (rather than using a generic BLAS interface), so it would be harder to integrate with than NNPACK, which (I think) provides similar speedups in many cases.

vchuravy commented 6 years ago

There is a build flag to turn off MKL usage: MKLDNN_USE_MKL. But yes, NNPACK would also be a good alternative.

MikeInnes commented 6 years ago

We are actively working on NNPACK in #67. My main issue with MKL-DNN is that it seems to work best if you build its computational graph, rather than exposing a simple CUDNN-style conv kernel. This isn't an expert opinion though, so if someone can hack up the right set of commands I'm on board with it.
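
For what it's worth, oneDNN does expose standalone primitives alongside its graph API, so a single convolution can be run without constructing a graph. Below is a rough sketch against the pre-3.0 C++ API (`dnnl.hpp`); the exact names and construction sequence vary between versions and the shapes here are arbitrary, so treat it as an illustration rather than a recipe.

```cpp
// Rough sketch: one standalone convolution through oneDNN's primitive API,
// no graph construction involved. Shapes are arbitrary; error handling omitted.
#include <unordered_map>
#include "dnnl.hpp"
using namespace dnnl;

int main() {
    engine eng(engine::kind::cpu, 0);
    stream strm(eng);

    // NCHW input, OIHW weights, stride 1, no padding
    memory::dims src_dims{1, 3, 224, 224}, wei_dims{16, 3, 3, 3},
                 dst_dims{1, 16, 222, 222}, strides{1, 1}, padding{0, 0};

    auto src_md = memory::desc(src_dims, memory::data_type::f32, memory::format_tag::nchw);
    auto wei_md = memory::desc(wei_dims, memory::data_type::f32, memory::format_tag::oihw);
    auto dst_md = memory::desc(dst_dims, memory::data_type::f32, memory::format_tag::nchw);

    // Describe the op, then create the primitive -- analogous to setting up
    // cuDNN descriptors and calling cudnnConvolutionForward.
    auto conv_d  = convolution_forward::desc(prop_kind::forward_inference,
        algorithm::convolution_direct, src_md, wei_md, dst_md,
        strides, padding, padding);
    auto conv_pd = convolution_forward::primitive_desc(conv_d, eng);

    // oneDNN allocates these buffers; real code would fill src/weights first.
    memory src_mem(src_md, eng), wei_mem(wei_md, eng), dst_mem(dst_md, eng);

    convolution_forward(conv_pd).execute(strm,
        {{DNNL_ARG_SRC, src_mem}, {DNNL_ARG_WEIGHTS, wei_mem}, {DNNL_ARG_DST, dst_mem}});
    strm.wait();
    return 0;
}
```

For real use you would typically pass memory::format_tag::any and let oneDNN choose blocked layouts (with reorders), but the point is that a plain conv kernel is reachable without the graph machinery.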

See also https://github.com/FluxML/Flux.jl/issues/157

RoyiAvital commented 5 years ago

> Unfortunately, while MKL-DNN is OSS, it depends on the closed-source MKL (rather than using a generic BLAS interface), so it would be harder to integrate with than NNPACK, which (I think) provides similar speedups in many cases.

I think it doesn't have the MKL dependency anymore.

ToucheSir commented 2 years ago

I'm bumping this since there is now an in-progress PR for adding oneDNN to BinaryBuilder: https://github.com/JuliaPackaging/Yggdrasil/pull/4550. The confluence of NNPACK being unmaintained, NNlib having dropped NNPACK, and our not having much capacity to maintain kernels means that this is once again an attractive proposition.