uncomplicate / deep-diamond

A fast Clojure Tensor & Deep Learning library
https://aiprobook.com
Eclipse Public License 1.0
432 stars · 17 forks

Getting deep-diamond to work on Windows #20

Closed BorisVSchmid closed 1 year ago

BorisVSchmid commented 1 year ago

When using MKL on Windows with Neanderthal, we are currently stuck on Intel MKL 2020.3-1.5.4. That works fine for Neanderthal, and you can include the MKL library simply by adding org.bytedeco/mkl-platform-redist {:mvn/version "2020.3-1.5.4"} to your deps.edn.
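In a deps.edn that single coordinate looks like the following (a minimal sketch; a real project will have more entries, and the version is the one quoted above):

```clojure
;; Minimal deps.edn sketch: pulls in the redistributable MKL binaries
;; that Neanderthal loads at runtime (version taken from this issue).
{:deps {org.bytedeco/mkl-platform-redist {:mvn/version "2020.3-1.5.4"}}}
```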

But that approach doesn't work for deep-diamond. Luckily, Dragan knew a workaround. Here are the steps:

  1. Make a "mkl/" directory somewhere in Windows, and add it to your PATH.

  2. Go to your .m2 directory and browse to org\bytedeco\mkl\2020.3-1.5.4\mkl-2020.3-1.5.4-windows-x86_64-redist.jar

  3. Rename the .jar file to .zip, open it, and copy the .dll files inside into your "mkl/" directory.

  4. Do the same for org\jcuda\jcudnn-natives\11.7.0\jcudnn-natives-11.7.0-windows-x86_64.jar, also under your .m2 directory.

  5. Go to https://developer.nvidia.com/cudnn (you'll need to create an NVIDIA developer account), and click Download cuDNN.

  6. Click Agree, and navigate to the Archived cuDNN Releases

  7. Pick the Windows .zip file of cuDNN v8.5.0 (August 8th, 2022), for CUDA 11.x

  8. Download https://developer.nvidia.com/compute/cudnn/secure/8.5.0/local_installers/11.7/cudnn-windows-x86_64-8.5.0.96_cuda11-archive.zip

  9. Extract the .dll files from it and put them in your "mkl/" directory.

  10. The prerequisite zlib DLL link on the NVIDIA cuDNN site is broken. You can get a copy here: http://s3.amazonaws.com/ossci-windows/zlib123dllx64.zip . Again, put the DLL in the "mkl/" directory.

You are done! All the dependencies you need in your project deps.edn are:

 :deps {org.clojure/clojure       {:mvn/version "1.11.1"}
        uncomplicate/deep-diamond {:mvn/version "0.25.0"}}
blueberry commented 1 year ago

The latest version, 0.26.0, fixes this. Its Neanderthal dependency (0.46.0) now relies on MKL 2022.2 (on Windows and Linux), so you can either use the compatible bytedeco mkl (2022.2-1.5.8) or provide MKL 2022.2 binaries on your path.
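A corresponding deps.edn for the fixed release might look like this (a sketch only: the coordinates are assumed to follow the snippet earlier in the thread, and the versions come from this comment; verify them against the 0.26.0 release notes):

```clojure
;; Sketch of the updated coordinates described above.
{:deps {org.clojure/clojure              {:mvn/version "1.11.1"}
        uncomplicate/deep-diamond        {:mvn/version "0.26.0"}
        org.bytedeco/mkl-platform-redist {:mvn/version "2022.2-1.5.8"}}}
```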

The same release supports the latest CUDA 11.8.