openvinotoolkit / model_server

A scalable inference server for models optimized with OpenVINO™
https://docs.openvino.ai/2024/ovms_what_is_openvino_model_server.html
Apache License 2.0
655 stars · 206 forks

How to transfer the models in the `model zoo` to the configuration file and the pipelines? #2370

Closed jiekechoo closed 1 week ago

jiekechoo commented 5 months ago

I have deployed the model server in my case; it's a wonderful project. I have a few questions about using the model server for further applications. I want to transfer the models from the model zoo into the model server's configuration file and pipelines. I followed the documentation at https://docs.openvino.ai/2024/ovms_docs_dag.html and tested a few of the samples there. But how do I transfer more models? How do I get the inputs and set the outputs from the IR XML files? Should I use third-party tools?

Can anyone help me? Thanks a lot.

atobiszei commented 5 months ago

Hi @jiekechoo, I think the easiest way is just to load the model in OVMS and check the logs. With --log_level INFO (the default) you should see lines like these:

[2024-03-18 21:53:05.366][1][modelmanager][info][modelinstance.cpp:490] Input name: input:0; mapping_name: input:0; shape: (-1,299,299,3); precision: FP32; layout: N...
[2024-03-18 21:53:05.366][1][modelmanager][info][modelinstance.cpp:542] Output name: InceptionResnetV2/AuxLogits/Logits/BiasAdd:0; mapping_name: InceptionResnetV2/AuxLogits/Logits/BiasAdd:0; shape: (-1,1001); precision: FP32; layout: N...

Alternatively you could use model metadata calls https://docs.openvino.ai/2024/ovms_docs_rest_api_kfs.html#model-metadata-api-a-name-model-metadata-a https://docs.openvino.ai/2024/ovms_docs_grpc_api_kfs.html#model-metadata-api-a-name-kfs-model-metadata-a after loading those models in OVMS.
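For reference, a KServe v2 model-metadata response is JSON with `inputs` and `outputs` arrays, where each tensor entry carries `name`, `datatype`, and `shape`. Below is a minimal sketch of pulling those fields out of a parsed response; the model name and tensor values are illustrative, not from a real server:

```python
import json

# Illustrative KServe v2 ModelMetadata response body. The per-tensor
# field layout (name / datatype / shape) follows the KServe v2 protocol;
# the concrete names and shapes here are made up for the example.
response_body = """
{
  "name": "my_model",
  "versions": ["1"],
  "platform": "OpenVINO",
  "inputs": [
    {"name": "input:0", "datatype": "FP32", "shape": [-1, 299, 299, 3]}
  ],
  "outputs": [
    {"name": "logits:0", "datatype": "FP32", "shape": [-1, 1001]}
  ]
}
"""

def tensor_summary(metadata: dict) -> dict:
    """Map tensor name -> (datatype, shape) for inputs and outputs."""
    return {
        section: {t["name"]: (t["datatype"], t["shape"])
                  for t in metadata.get(section, [])}
        for section in ("inputs", "outputs")
    }

metadata = json.loads(response_body)
summary = tensor_summary(metadata)
print(summary["inputs"])   # the names/shapes you need when wiring a pipeline
print(summary["outputs"])
```

The same name/shape information is what goes into the `inputs`/`outputs` sections of a pipeline node in config.json.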

jiekechoo commented 5 months ago

@atobiszei Thanks for your reply. What you mentioned was useful. I can check the inputs and the outputs from the v1 or v2 REST API instead of using the DEBUG log level options.

I need documentation or a guide that teaches me how to create a custom pipeline for multiple models.

atobiszei commented 5 months ago

Did you check examples: https://docs.openvino.ai/2024/ovms_docs_dag.html#pipelines-examples-a-name-pipeline-examples-a ?

I don't know which specific part of custom pipeline creation is ambiguous for you, so I would recommend looking at https://github.com/openvinotoolkit/model_server/blob/main/docs/demultiplexing.md#basic-demultiplexer-example-and-metadata-explanation, as it shows both the config.json and, graphically, the model inputs and outputs. E.g.

Looking at this fragment:

                {
                    "name": "Model_B_node",
                    "model_name": "Model_B",
                    "type": "DL model",
                    "inputs": [
                        {"input_A": {"node_name": "Model_A_node",
                                     "data_item": "output_A"}},
                        {"input_B": {"node_name": "Model_A_node",
                                     "data_item": "output_B"}}
                    ],
                    "outputs": [
                        {"data_item": "output",
                         "alias": "output"}
                    ]
                }

The node representing Model_B will receive output output_A from node Model_A_node and pass it as input input_A to the actual model. This is a naive example where the output of one model exactly matches the input of another; in practice you may need additional custom nodes to do some kind of pre/post-processing. There are a few custom nodes included in the OVMS image: https://github.com/openvinotoolkit/model_server/tree/main/src/custom_nodes
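Putting the fragment above in context, a full pipeline definition in config.json might look roughly like this sketch (the model names, base paths, and tensor names are illustrative; `"node_name": "request"` refers to the pipeline's own input, per the DAG scheduler docs):

```json
{
  "model_config_list": [
    {"config": {"name": "Model_A", "base_path": "/models/model_a"}},
    {"config": {"name": "Model_B", "base_path": "/models/model_b"}}
  ],
  "pipeline_config_list": [
    {
      "name": "my_pipeline",
      "inputs": ["pipeline_input"],
      "nodes": [
        {
          "name": "Model_A_node",
          "model_name": "Model_A",
          "type": "DL model",
          "inputs": [
            {"input": {"node_name": "request", "data_item": "pipeline_input"}}
          ],
          "outputs": [
            {"data_item": "output_A", "alias": "output_A"},
            {"data_item": "output_B", "alias": "output_B"}
          ]
        },
        {
          "name": "Model_B_node",
          "model_name": "Model_B",
          "type": "DL model",
          "inputs": [
            {"input_A": {"node_name": "Model_A_node", "data_item": "output_A"}},
            {"input_B": {"node_name": "Model_A_node", "data_item": "output_B"}}
          ],
          "outputs": [
            {"data_item": "output", "alias": "output"}
          ]
        }
      ],
      "outputs": [
        {"pipeline_output": {"node_name": "Model_B_node", "data_item": "output"}}
      ]
    }
  ]
}
```

Clients then send requests to `my_pipeline` as if it were a single model, with `pipeline_input` as the input name and `pipeline_output` as the output name.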

jiekechoo commented 5 months ago

@atobiszei Thanks. I got your point. I'll try your advice.

jiekechoo commented 1 week ago

Building custom nodes is hard work, but it has to be done.