mlcommons / inference

Reference implementations of MLPerf™ inference benchmarks
https://mlcommons.org/en/groups/inference
Apache License 2.0

Do you support a local path to the model? #1711

Open sunpian1 opened 6 months ago

sunpian1 commented 6 months ago

Do you support a local path to the model?

arjunsuresh commented 6 months ago

A local path to the model is supported in some cases, depending on the implementation being used. For example, the bert implementation using the deepsparse backend supports it. Which model would you like to import locally?
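For illustration, here is a minimal sketch of the general pattern such implementations follow; the `--model-path` flag and the fallback behavior are hypothetical, not the actual bert/deepsparse interface:

```python
"""Hypothetical sketch: how a run script might accept a local model path.
The flag name and fallback below are illustrative only."""
import argparse
import os
import sys

parser = argparse.ArgumentParser()
parser.add_argument("--model-path", default=None,
                    help="Optional local path to a model file")
args = parser.parse_args()

if args.model_path:
    if not os.path.exists(args.model_path):
        sys.exit(f"Model not found at {args.model_path}")
    model_path = args.model_path  # use the local copy instead of downloading
else:
    model_path = None  # the implementation would download its default model here

print(f"Loading model from: {model_path or 'default download location'}")
```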

sunpian1 commented 6 months ago

resnet50, mobilenetv3, and so on.

arjunsuresh commented 6 months ago

In CM, we download or load a model using a separate CM script, like here. After this, we get the path to the model in an ENV variable, which we then use in the inference implementation script.

If you try this, you can either extend the CM script to plug in new models, or populate the ENV variable yourself and skip running the CM script that fetches the model (as sketched below).

This is a WIP, but it can be useful for understanding how CM scripts work.
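As a rough sketch of the second option, driven from Python: the `--env.KEY=value` syntax and the `CM_ML_MODEL_FILE_WITH_PATH` variable name follow the conventions of CM's model-fetch scripts, but both the tags and the variable name should be verified against the specific script being bypassed:

```python
# Sketch of the env-override approach described above.
# Assumptions to verify: the script tags and the CM_ML_MODEL_FILE_WITH_PATH
# variable name follow CM's conventions but may differ per model/script.
import subprocess

local_model = "/data/models/resnet50_v1.onnx"  # hypothetical local path

# Point the expected ENV variable at the local file so the model-download
# CM script does not need to fetch anything and the implementation uses
# our local copy instead.
subprocess.run(
    [
        "cm", "run", "script",
        "--tags=run-mlperf,inference",
        f"--env.CM_ML_MODEL_FILE_WITH_PATH={local_model}",
    ],
    check=True,
)
```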