Closed. StefanOltmann closed this issue 8 months ago.
There are two options:

1. Use `.optModelUrls("jar:///my_model.jar")`; you have to put your model in a .zip file.
2. Use the `Model.load(InputStream)` API to load from an `InputStream`:

```java
try (Model model = Model.newInstance("resnet18", "PyTorch")) {
    model.load(YourClass.class.getResourceAsStream("..."));
    ...
}
```
Thank you for your prompt response! :)
Regarding the first option, wouldn't specifying a path to the JAR file still necessitate an absolute path? Since it's later initiated from an EXE and the working directory could be anything, which is beyond my control. However, I do have a system property ("compose.application.resources.dir") that indicates the file's location. Unfortunately, this can be a UNC path.
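As an aside, resolving the model file against that system property needs only the standard library. A minimal sketch, assuming the property name from this thread; the class name and the `model.zip` file name are placeholders for illustration:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

// Sketch: resolve a bundled file against the Compose resources directory.
// "compose.application.resources.dir" is the system property mentioned above;
// "model.zip" is a placeholder file name.
public class ResourcesDirDemo {

    static Path resolveResource(String fileName) {
        // Fall back to the working directory if the property is not set.
        String dir = System.getProperty("compose.application.resources.dir", ".");
        return Paths.get(dir).resolve(fileName);
    }

    public static void main(String[] args) {
        System.out.println(resolveResource("model.zip"));
    }
}
```

Note that `Paths.get` produces a `Path` whose behavior with UNC strings is platform-dependent, which is part of the problem described below.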
As for the second approach, it's quite intriguing. I'm attempting to incorporate the RetinaFace detection sample code into my photo app for face detection.
Could you provide guidance on how the official sample should be modified to use the `model.load()` approach? How would I apply the `FaceDetectionTranslator` here?
@StefanOltmann
The `jar://` URL does not point to a jar file; it points to a file on the classpath (it doesn't actually need to be in a jar, any file on the classpath is fine):

`jar:///ai/djl/util/model.zip`

is equivalent to:

`ai.djl.util.Utils.class.getResource("model.zip")`

A `file` URL should also work: `file:///Users/home/model/model.zip`
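To make the equivalence concrete, here is a small standard-library sketch (no DJL needed) showing relative vs. absolute classpath lookups; `java.lang.Object`'s own class file stands in for a bundled resource, since it exists on every JVM:

```java
import java.net.URL;

// Sketch: how classpath-style lookups resolve, mirroring the jar:///
// semantics described above.
public class ClasspathLookupDemo {
    public static void main(String[] args) {
        // Relative name: resolved against the class's own package
        // (here java.lang), like jar:///java/lang/Object.class.
        URL relative = Object.class.getResource("Object.class");

        // Leading slash: resolved from the classpath root.
        URL absolute = Object.class.getResource("/java/lang/Object.class");

        System.out.println(relative != null); // true
        System.out.println(absolute != null); // true
    }
}
```

(`.class` resources are exempt from module encapsulation, which is why this works even for `java.base` on Java 9+.)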
A `Translator` is not involved in model loading. Currently the Criteria API cannot handle streaming model loading (it's possible, but would need a major refactor). The main reason is that a model usually contains multiple files, and we don't have a good way to stream in multiple files. The API is not limited to `.zip` files, but the underlying implementation only accepts `.zip` in many cases.
Can you provide more context on why you need to use an `InputStream`?
> Can you provide more context on why you need to use an `InputStream`?
I want to include a RetinaFace detector in Ashampoo Photos, a JVM-based desktop app. It should ship with the model and engine included, so that it can be installed and used offline without downloading any missing resources from the web.

People might put the installation, which has a resource directory including "retinaface.pt", onto a network drive. So the model should also be loadable from there.

Loading resources using `class.getResourceAsStream()` (like icons, etc.) is the most reliable way to load resources. This has just been proven true, as loading from a UNC path using the existing API fails.

It doesn't have to be a stream. If I could provide the whole model as a byte array, that would help, too. I assume the Criteria API also reads the whole file's bytes behind the scenes. Can you give me an `optModelFromBytes()`?
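For reference, reading a classpath resource fully into a byte array needs only the standard library. A sketch (`optModelFromBytes()` above is a proposed, hypothetical method, and `/java/lang/Object.class` merely stands in for a bundled model file):

```java
import java.io.IOException;
import java.io.InputStream;

// Sketch: read an entire classpath resource into a byte[].
// The resource path used below is a stand-in for a bundled model file.
public class ResourceBytesDemo {

    static byte[] readResource(String absolutePath) throws IOException {
        try (InputStream in = ResourceBytesDemo.class.getResourceAsStream(absolutePath)) {
            if (in == null) {
                throw new IOException("Resource not found: " + absolutePath);
            }
            return in.readAllBytes(); // Java 9+
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] bytes = readResource("/java/lang/Object.class");
        // Class files start with the magic number 0xCAFEBABE.
        System.out.println(bytes.length > 0 && (bytes[0] & 0xFF) == 0xCA);
    }
}
```

This sidesteps filesystem paths (including UNC paths) entirely, since the bytes come straight from the classpath.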
If you only need offline distribution of your application, using `jar:///` is sufficient.

By the way, you might want to take a look at this demo if you want to work without a network: https://github.com/deepjavalibrary/djl-demo/tree/master/development/fatjar
Okay, I will test it and report back. Thank you.
Where do I get synset.txt and serving.properties from?
The `synset.txt` and `serving.properties` files are optional:

- `synset.txt` is usually used by `ImageClassificationTranslator`; you might not need it for your case.
- `serving.properties` allows you to add arguments and options that will be used to load the model and translator. This makes it easy to distribute the `.zip` file. With `serving.properties`, you don't need to add them in `Criteria` (e.g. via `.optTranslatorFactory()`):

```
engine=PyTorch
translatorFactory=ai.djl.pytorch.zoo.nlp.qa.PtBertQATranslatorFactory
width=640
```

See: https://docs.djl.ai/master/docs/serving/serving/docs/configurations_model.html
The `jar:///` approach indeed works from a network drive. Great.
I found that `optModelPath()` does not work with UNC paths, so if a user of my app installs to a network drive, I have a problem.

Providing the model bytes directly via `InputStream` or a byte array would be a good option. I could load the model directly from `class.getResourceAsStream()` to avoid a lot of issues.