If you are using Gradle or Maven, you only need to include the pytorch-native-cu113 package for your target OS as a dependency, and then include this jar in your distribution. For example:
<dependency>
    <groupId>ai.djl.pytorch</groupId>
    <artifactId>pytorch-native-cu113</artifactId>
    <classifier>linux-x86_64</classifier>
    <version>1.10.0</version>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>ai.djl.pytorch</groupId>
    <artifactId>pytorch-jni</artifactId>
    <version>1.10.0-0.15.0</version>
    <scope>runtime</scope>
</dependency>
see: https://docs.djl.ai/master/engines/pytorch/pytorch-engine/index.html#linux-gpu
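For an sbt project, the equivalent declarations would look roughly like the sketch below. This is only an illustration: the coordinates mirror the Maven example above, and the 0.15.0 versions for the api and pytorch-engine artifacts are an assumption matching the 1.10.0-0.15.0 JNI version.

// build.sbt (sketch; adjust versions and the platform classifier for your setup)
libraryDependencies ++= Seq(
  "ai.djl"         % "api"                  % "0.15.0",
  "ai.djl.pytorch" % "pytorch-engine"       % "0.15.0",
  "ai.djl.pytorch" % "pytorch-jni"          % "1.10.0-0.15.0" % "runtime",
  // the classifier selects the platform-specific native jar so it is bundled instead of downloaded
  "ai.djl.pytorch" % "pytorch-native-cu113" % "1.10.0"        % "runtime" classifier "linux-x86_64"
)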
Hi @frankfliu,
Is this also the case for unmanaged dependencies? I tried to put the jars for pytorch-jni and pytorch-native:linux-x86_64 in a lib/ folder of a Java / sbt project, but it is still downloading the libraries.
@DevinTDHa
Are you trying to create a fat jar? Or do you see the download happen during build time?
@frankfliu
Yes I plan to, but I am seeing it while I am running the tests in my project. The lib folder has the api, jni, pytorch engine and the native library jars for my platform.
@DevinTDHa
I created a demo project that shows you how to create a fatjar that doesn't download dependencies at runtime: https://github.com/deepjavalibrary/djl-demo/tree/master/developement/fatjar
You will see that it only extracts the PyTorch native libraries from the jar file:
Loading: 100% |████████████████████████████████████████|
[sbt-bg-threads-1] INFO ai.djl.pytorch.jni.LibUtils - Extracting /native/lib/libc10.dylib to cache ...
[sbt-bg-threads-1] INFO ai.djl.pytorch.jni.LibUtils - Extracting /native/lib/libiomp5.dylib to cache ...
[sbt-bg-threads-1] INFO ai.djl.pytorch.jni.LibUtils - Extracting /native/lib/libtorch.dylib to cache ...
[sbt-bg-threads-1] INFO ai.djl.pytorch.jni.LibUtils - Extracting /native/lib/libtorch_cpu.dylib to cache ...
[sbt-bg-threads-1] INFO ai.djl.pytorch.jni.LibUtils - Downloading jni https://publish.djl.ai/pytorch/1.10.0/jnilib/0.16.0/osx-x86_64/cpu/libdjl_torch.dylib to cache ...
[sbt-bg-threads-1] INFO ai.djl.pytorch.engine.PtEngine - Number of inter-op threads is 6
[sbt-bg-threads-1] INFO ai.djl.pytorch.engine.PtEngine - Number of intra-op threads is 6
[
class: "n02123045 tabby, tabby cat", probability: 0.45637
class: "n02123159 tiger cat", probability: 0.32963
class: "n02124075 Egyptian cat", probability: 0.18387
class: "n02127052 lynx, catamount", probability: 0.01527
class: "n02123394 Persian cat", probability: 0.00487
]
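If you want to reproduce that with sbt-assembly yourself, the build setup is roughly the sketch below. This is only an outline under my assumptions; the linked demo project is the authoritative version, and the plugin version here is a guess.

// project/plugins.sbt (sketch)
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "1.2.0")

// build.sbt (sketch): merge strategy for the uber-jar
assembly / assemblyMergeStrategy := {
  // concatenate ServiceLoader files so DJL can still discover its engine providers
  case PathList("META-INF", "services", _*) => MergeStrategy.concat
  case PathList("META-INF", _*)             => MergeStrategy.discard
  case _                                    => MergeStrategy.first
}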
If I want the same but on a CPU-only computer, do I just need pytorch-jni and pytorch-native-cpu? I have both, but the offline computer still attempts to download PyTorch from DJL.
@brettfazio Can you share your project? Can you enable logging? You should be able to see a debug log related to downloading the native library. Can you try the demo fatjar project on your machine and see if it still downloads PyTorch?
It's an offline computer; I'd like to have it such that no libraries or files are downloaded at runtime, either by using pytorch-native-cpu or by mapping to my own installation of PyTorch as indicated here: LibUtils#L45.
Is there a guide for doing either of those?
@brettfazio The fatjar project is how to build an offline distribution package, and it should be the easiest way: you only need to copy the fatjar to your offline machine and run it.
A few other ways to achieve the same, if you want: set the PYTORCH_LIBRARY_PATH and PYTORCH_VERSION environment variables, but you still need to either bundle pytorch-jni or manually copy the JNI shared library.

With the environment-variable option - if I point the PYTORCH_LIBRARY_PATH and PYTORCH_VERSION environment vars properly and I have pytorch-jni as a dependency (I already do) - then that should be enough for it to just use the PyTorch that PYTORCH_LIBRARY_PATH points to instead of downloading? That seems ideal to me; I already have PyTorch installed.
@brettfazio
Using the Python PyTorch install has many restrictions:
- you need to set PYTORCH_PRECXX11=true
- for the +cpu-only pip package, you need to explicitly set PYTORCH_LIBRARY_PATH to libtorch_cpu.so
@brettfazio You can take a look at this Dockerfile:
https://github.com/deepjavalibrary/djl-serving/blob/master/serving/docker/inf1.Dockerfile#L24-L38
pip3 install torch==1.10.0+cpu -f https://download.pytorch.org/whl/torch_stable.html
I'm using the latest JNI, 1.10.0-0.16.0 - where can I find which PyTorch version the JNI is compiled against?
You can find all supported JNI versions here: https://search.maven.org/artifact/ai.djl.pytorch/pytorch-jni/1.10.0-0.16.0/jar
So setting something like
PYTORCH_LIBRARY_PATH=.../python3.9/torch/lib/libtorch_cpu.so
PYTORCH_PRECXX11=true
PYTORCH_VERSION=1.10.0
should be sufficient to get it to use the local libtorch_cpu.so rather than downloading?
I don't think it's even possible with the current code to map directly to the _cpu.so, because it only looks up the .so.
@brettfazio
You are right, it's broken. I will raise a PR to fix it.
Thanks @frankfliu !
Additionally,
When I have the DJL api, pytorch-engine, pytorch-jni, and pytorch-native-cpu artifacts as dependencies and I set the following env variables:
PYTORCH_LIBRARY_PATH=.../python3.9/torch/lib/libtorch.so
PYTORCH_PRECXX11=true
PYTORCH_VERSION=1.10.0
it still doesn't load the local library and attempts to download PyTorch. Is that because I only have pytorch-native-cpu as a dependency and not the auto or cu* variants? And perhaps if the _cpu.so could be loaded this would be fixed?
If I use one of the Deep Java Library (DJL) provided PyTorch native library binary distributions - why does DJL not just use that as the PyTorch binary? E.g. pytorch-native-cu101, pytorch-native-cpu, etc.
pytorch-native-cpu is sufficient, but make sure you add the classifier; if you don't specify a classifier, DJL assumes the jar is generic for all platforms and will download the native library at runtime.
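To make the difference concrete, here is a hedged sbt sketch of the two forms (adjust the classifier for your platform):

// without a classifier, DJL treats the jar as the generic, all-platform artifact
// and will still download the actual native library at runtime
libraryDependencies += "ai.djl.pytorch" % "pytorch-native-cpu" % "1.10.0" % "runtime"

// with an explicit platform classifier, the native libraries are bundled in the jar
// and only extracted (not downloaded) at runtime
libraryDependencies += "ai.djl.pytorch" % "pytorch-native-cpu" % "1.10.0" % "runtime" classifier "linux-x86_64"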
When you say it still downloads, would you please show me the log of which file gets downloaded? You should see something like:
[INFO ] - Downloading https://publish.djl.ai/pytorch/1.10.0/cpu/osx-x86_64/native/lib/libtorch_cpu.dylib.gz ...
[INFO ] - Downloading https://publish.djl.ai/pytorch/1.10.0/cpu/osx-x86_64/native/lib/libiomp5.dylib.gz ...
[INFO ] - Downloading https://publish.djl.ai/pytorch/1.10.0/cpu/osx-x86_64/native/lib/libc10.dylib.gz ...
[INFO ] - Downloading https://publish.djl.ai/pytorch/1.10.0/cpu/osx-x86_64/native/lib/libtorch.dylib.gz ...
[INFO ] - Downloading jni https://publish.djl.ai/pytorch/1.10.0/jnilib/0.16.0/osx-x86_64/cpu/libdjl_torch.dylib to cache ...
@brettfazio
I created a PR that allows you to load libtorch from the default pip package: https://github.com/deepjavalibrary/djl/pull/1577
PYTORCH_LIBRARY_PATH=.../python3.9/torch/lib
PYTORCH_VERSION=1.10.0
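If you launch through sbt (for example when running tests), one way to pass these variables is to fork the JVM and set them in the build definition. This is a sketch; the library path below is just a placeholder for your local PyTorch installation.

// build.sbt (sketch): forward the environment variables to the forked test JVM
Test / fork := true
Test / envVars := Map(
  "PYTORCH_LIBRARY_PATH" -> "/path/to/python3.9/site-packages/torch/lib", // placeholder
  "PYTORCH_VERSION"      -> "1.10.0"
)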
Feel free to reopen this issue if you still have questions.
I have a CUDA server machine, but it is offline, so I need to download the library and then upload it to the server.