Closed: MMelQin closed this issue 2 years ago.
Both existing and new (MONAI Bundle) inference operators have been enhanced to make use of and request uniquely named models from the App execution context.
The applications, namely the Spleen Seg and the Liver and Tumor Seg, have been tested successfully with loading multiple models in a defined folder structure, with the inference operator requesting a named model.
An example containing multiple inference operators each using a different model in the app context will be provided in later releases, once the models are ready, e.g. segmentation followed by classification in series, or multi-(model)-AI with each consuming the same input image.
Is it possible to provide an example of loading multiple models into one MONAI Deploy app? I don't think any of the current demo apps includes this.
Hi @linhandev, yes, we will work on such an example. Thanks for pointing it out.
> Is it possible to provide an example on loading multiple models into one monai deploy app. I don't think any of the current demo apps included this.
@linhandev Thanks for the question.
Yes, I'm planning to build a good example with, e.g., Seg and Classification models, though I have not gotten a good set from the MONAI Model Zoo yet. I can potentially create an app with both the existing Liver Tumor and the Spleen Seg models, a mixture of plain TorchScript and MONAI Bundle compliant TorchScript, but I first need to tweak the DICOMSeg writer to save the DICOM Seg instance file with the Series instance UID as the unique file name.
In the meantime, one can already provide multiple models in an app by placing the model files in a defined folder structure, as shown in the example below, which has one model identified by the name `spleen_model` and another by `liver_tumor_model`, while the path to the folder `app_models` is used as the argument value for `-model` on the CLI commands:
```
app_models
├── liver_tumor_model
│   └── model.ts
└── spleen_model
    └── model.ts
```
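The folder convention above can be illustrated with a small, self-contained sketch (this is not SDK code; the `scan_models` helper is made up for illustration) that maps each model-name subfolder to its TorchScript file, mirroring how the app context resolves a model by name:

```python
import tempfile
from pathlib import Path


def scan_models(models_root):
    """Map each model-name subfolder to its TorchScript file path.

    Mirrors the convention shown above: models_root/<name>/model.ts
    """
    root = Path(models_root)
    return {
        sub.name: sub / "model.ts"
        for sub in sorted(root.iterdir())
        if sub.is_dir() and (sub / "model.ts").exists()
    }


# Build the example layout in a temporary directory and scan it.
with tempfile.TemporaryDirectory() as tmp:
    for name in ("liver_tumor_model", "spleen_model"):
        model_dir = Path(tmp) / name
        model_dir.mkdir()
        (model_dir / "model.ts").touch()
    models = scan_models(tmp)
    print(sorted(models))  # ['liver_tumor_model', 'spleen_model']
```

Each key of the resulting dict is a model name an operator can request, and each value is the path to that model's TorchScript file.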
To access a model from within the app, the `model_name` arg is used to pass the model name to the Bundle Inference operator and/or the base Segmentation Inference operator constructor, e.g.
```python
bundle_spleen_seg_op = MonaiBundleInferenceOperator(
    input_mapping=[IOMapping("image", Image, IOType.IN_MEMORY)],
    output_mapping=[IOMapping("pred", Image, IOType.IN_MEMORY)],
    model_name="spleen_model",
)
```
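The name-based lookup can be mimicked in plain Python (a sketch only; `ModelRegistry` and `InferenceOp` are hypothetical stand-ins for the SDK's execution context and inference operators, not SDK classes):

```python
class ModelRegistry:
    """Minimal stand-in for the app execution context's model store."""

    def __init__(self, models):
        self._models = dict(models)  # model name -> loaded model (or path)

    def get(self, model_name):
        if model_name not in self._models:
            raise KeyError(f"No model named {model_name!r} in app context")
        return self._models[model_name]


class InferenceOp:
    """Stand-in inference operator that requests a uniquely named model."""

    def __init__(self, registry, model_name):
        self.model = registry.get(model_name)

    def compute(self, image):
        # A real operator would run the TorchScript model here.
        return f"{self.model} applied to {image}"


registry = ModelRegistry(
    {"spleen_model": "spleen.ts", "liver_tumor_model": "liver.ts"}
)
spleen_op = InferenceOp(registry, model_name="spleen_model")
liver_op = InferenceOp(registry, model_name="liver_tumor_model")
print(spleen_op.compute("ct_volume"))  # spleen.ts applied to ct_volume
```

The point of the sketch is that each operator resolves its own model by a unique name from a shared context, so several inference operators can coexist in one app without ambiguity.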
Hope this helps.
@linhandev I have created a WIP pull request demonstrating the use of multiple models within the same app. It is WIP for a couple of reasons, one being that one of the MONAI Bundle TorchScripts fails to load; it fails even with a plain `torch.jit.load()` on its own (see the issue created for the Model Zoo).
**Is your feature request related to a problem? Please describe.**
There are cases where multiple AI models are needed in the same application to provide the final inference result; typically, one model will provide the image ROI for another model, for example,
The ROI image can be generated using non-DL computer vision based algorithm, but it is becoming common with DL models.
**Describe the solution you'd like**
Support multiple uniquely named models in the app execution context, consumed by the app's operators, e.g. multiple inference operators each supporting a specific named model.

**Alternative Solution**
**Additional context**
App SDK standardizes the in-memory image representation, ensuring consistency and correctness in passing image objects among operators within the same app. #238