deepjavalibrary / djl

An Engine-Agnostic Deep Learning Framework in Java
https://djl.ai
Apache License 2.0

Provide a way to load the model from a byte array or InputStream #2950

Closed: StefanOltmann closed this issue 8 months ago

StefanOltmann commented 8 months ago

I found that optModelPath() does not work with UNC paths, so if a user installs my app to a network drive, I have a problem.

Providing the model bytes directly via an InputStream or a byte array would be a good option. I could load the model directly from class.getResourceAsStream() and avoid a lot of issues.

frankfliu commented 8 months ago
  1. You can load the model from your jar file or classpath with .optModelUrls("jar:///my_model.zip"); you have to package your model in a .zip file
  2. If you are using OnnxRuntime or PyTorch, you can use the Model.load(InputStream) API to load from an InputStream:
try (Model model = Model.newInstance("resnet18", "PyTorch")) {
    // MyClass is a placeholder for any class on your classpath
    model.load(MyClass.class.getResourceAsStream("..."));
    ...
}
StefanOltmann commented 8 months ago

Thank you for your prompt response! :)

Regarding the first option, wouldn't specifying a path to the JAR file still necessitate an absolute path? The app is later launched from an EXE, and the working directory could be anything; that's beyond my control. However, I do have a system property ("compose.application.resources.dir") that indicates the file's location. Unfortunately, this can be a UNC path.

As for the second approach, it's quite intriguing. I'm attempting to incorporate the RetinaFace detection sample code into my photo app for face detection.

Could you provide guidance on how the official sample should be modified to use the model.load() approach? How would I apply the FaceDetectionTranslator here?

https://github.com/deepjavalibrary/djl/blob/1545c09aced57a7d39217a9aa19c4892aeb6b201/examples/src/main/java/ai/djl/examples/inference/face/RetinaFaceDetection.java#L64-L73

frankfliu commented 8 months ago

@StefanOltmann The jar URL does not point to the jar file itself; it points to a file on the classpath (it doesn't really need to be a jar; any file on the classpath is fine). The URL jar:///ai/djl/util/model.zip is equivalent to:

ai.djl.util.Utils.class.getResource("model.zip")

A file URL should also work: file:///Users/home/model/model.zip

frankfliu commented 8 months ago

The Translator is not involved in model loading. Currently the Criteria API cannot handle streaming model loading (it's possible, but it would need a major refactor). The main reason is that a model usually consists of multiple files, and we don't have a good way to stream in multiple files. The API is not limited to .zip files, but in many cases the underlying implementation only accepts .zip.
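
Since the Translator only comes into play when the Predictor is created, the InputStream route can still be combined with the translator from the linked example. A minimal sketch (MyApp and the resource path are hypothetical placeholders; the translator arguments are copied from the linked sample):

import java.io.InputStream;

import ai.djl.Model;
import ai.djl.inference.Predictor;
import ai.djl.modality.cv.Image;
import ai.djl.modality.cv.output.DetectedObjects;
import ai.djl.examples.inference.face.FaceDetectionTranslator;

// translator arguments copied from the linked RetinaFaceDetection sample
double confThresh = 0.85;
double nmsThresh = 0.45;
double[] variance = {0.1, 0.2};
int topK = 5000;
int[][] scales = {{16, 32}, {64, 128}, {256, 512}};
int[] steps = {8, 16, 32};
FaceDetectionTranslator translator =
        new FaceDetectionTranslator(confThresh, nmsThresh, variance, topK, scales, steps);

// "/models/retinaface.pt" and MyApp are placeholders for your own resources
try (Model model = Model.newInstance("retinaface", "PyTorch");
        InputStream is = MyApp.class.getResourceAsStream("/models/retinaface.pt")) {
    model.load(is); // streaming load works for the PyTorch and OnnxRuntime engines
    try (Predictor<Image, DetectedObjects> predictor = model.newPredictor(translator)) {
        DetectedObjects detection = predictor.predict(image); // image: an ai.djl.modality.cv.Image
    }
}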

Can you provide more context on why you need to use an InputStream?

StefanOltmann commented 8 months ago

Can you provide more context on why you need to use an InputStream?

I want to include RetinaFaceDetector in Ashampoo Photos, a JVM-based desktop app. It should ship with the model and engine included, so that it can be installed and used offline without downloading any missing resources from the web.

People might put the installation, which has a resource directory containing "retinaface.pt", onto a network drive. So the model should also be loadable from there.

Loading resources (like icons, etc.) via class.getResourceAsStream() is the most reliable approach.

This has just been proven true, as loading from a UNC path using the existing API fails.

It doesn't have to be a stream. If I could provide the whole model as a byte array, that would help too. I assume the Criteria API also reads the whole file's bytes behind the scenes.

Could you give me an optModelFromBytes()?

frankfliu commented 8 months ago

If you only need offline distribution of your application, using jar:/// is sufficient.

  1. put your retinaface.pt (plus synset.txt and serving.properties files) into a .zip file
  2. put the .zip file into your distribution .jar file (e.g. /META-INF/models/retinaface.zip)
  3. use .optModelUrls("jar:/META-INF/models/retinaface.zip") to load the model (see the sketch below)
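
Putting the three steps together with the translator from the sample, the Criteria setup could look roughly like this (a sketch, not an official example):

import ai.djl.inference.Predictor;
import ai.djl.modality.cv.Image;
import ai.djl.modality.cv.output.DetectedObjects;
import ai.djl.repository.zoo.Criteria;
import ai.djl.repository.zoo.ZooModel;

// translator built as in the sketch above
Criteria<Image, DetectedObjects> criteria =
        Criteria.builder()
                .setTypes(Image.class, DetectedObjects.class)
                .optModelUrls("jar:/META-INF/models/retinaface.zip") // the archive from step 2
                .optTranslator(translator)
                .optEngine("PyTorch")
                .build();

try (ZooModel<Image, DetectedObjects> model = criteria.loadModel();
        Predictor<Image, DetectedObjects> predictor = model.newPredictor()) {
    DetectedObjects detection = predictor.predict(image);
}
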
frankfliu commented 8 months ago

By the way, you might want to take a look at this demo if you want to work without network access: https://github.com/deepjavalibrary/djl-demo/tree/master/development/fatjar

StefanOltmann commented 8 months ago

Okay, I will test it and report back. Thank you.

StefanOltmann commented 8 months ago

Where do I get synset.txt and serving.properties from?

frankfliu commented 8 months ago

synset.txt and serving.properties files are optional:

  1. synset.txt is usually used by ImageClassificationTranslator; you might not need it in your case
  2. serving.properties lets you add arguments and options that will be used to load the model and translator. This makes it easy to distribute the .zip file. With serving.properties, you don't need to add them in Criteria:
    1. you can specify which engine to use: engine=PyTorch
    2. you can specify which translator/translatorFactory to use (can be overridden by .optTranslatorFactory()): translatorFactory=ai.djl.pytorch.zoo.nlp.qa.PtBertQATranslatorFactory
    3. you can add arguments that are needed by the Translator: width=640

See: https://docs.djl.ai/master/docs/serving/serving/docs/configurations_model.html
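
For illustration, a hypothetical serving.properties using the keys mentioned above might look like:

# hypothetical example; every key is optional
engine=PyTorch
# the factory shown in the comment above; use one that matches your model
translatorFactory=ai.djl.pytorch.zoo.nlp.qa.PtBertQATranslatorFactory
# an argument consumed by the Translator
width=640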

StefanOltmann commented 8 months ago

The jar:/// approach indeed works from a network drive. Great.