roboflow / inference

A fast, easy-to-use, production-ready inference server for computer vision supporting deployment of many popular model architectures and fine-tuned models.
https://inference.roboflow.com

Run Multiple Models all together. #365

Closed amankumarchagti closed 2 months ago

amankumarchagti commented 2 months ago

Question

Hi, how can I run multiple models together in the following code? The models are public models on Roboflow, not created by me.


grzegorz-roboflow commented 2 months ago

Hi @amankumarchagti, when you mention "following code", do you mean a code snippet you were intending to include in the question?

amankumarchagti commented 2 months ago

Hi @grzegorz-roboflow , apologies for that. Following is the code:

# import the InferencePipeline interface
from inference import InferencePipeline
# import a built in sink called render_boxes (sinks are the logic that happens after inference)
from inference.core.interfaces.stream.sinks import render_boxes

# create an inference pipeline object
pipeline = InferencePipeline.init(
    model_id="cow-lie-stand-walk/2", # set the model id to a yolov8x model with input size 1280
    video_reference="rtsp://192.168.1.100:5543/live/channel0", # set the video reference (source of video), it can be a link/path to a video file, an RTSP stream url, or an integer representing a device id (usually 0 for built in webcams)
    on_prediction=render_boxes, # tell the pipeline object what to do with each set of inference results by passing a function
    api_key="<API-KEY>", # provide your roboflow api key for loading models from the roboflow api
)
# start the pipeline
pipeline.start()
# wait for the pipeline to finish
pipeline.join()
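For reference, the same `InferencePipeline` API can also be started once per model and run side by side. This is only a sketch, assuming `start()` is non-blocking (it runs inference in background threads) and that several pipelines can consume the same video source independently; `run_pipelines` is a hypothetical helper, not part of the library:

```python
# Hypothetical helper: start every pipeline, then wait for all of them.
# Each element of `pipelines` would be an InferencePipeline.init(...) result.
def run_pipelines(pipelines):
    for pipeline in pipelines:
        pipeline.start()  # assumed non-blocking; inference runs in the background
    for pipeline in pipelines:
        pipeline.join()   # block until each pipeline has finished
```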

grzegorz-roboflow commented 2 months ago

Hi @amankumarchagti, we are finalizing a release that will include a major refactor of workflows; this functionality will enable you to run multiple models in a single pipeline.

If you want to play with it you can check https://github.com/roboflow/inference/pull/343

With this PR, the following can be done:

import os

from inference.enterprise.workflows.execution_engine.core import ExecutionEngine
from inference.core.managers.base import ModelManager
from inference.core.registries.roboflow import (
    RoboflowModelRegistry,
)
from inference.models.utils import ROBOFLOW_MODEL_TYPES

model_registry = RoboflowModelRegistry(ROBOFLOW_MODEL_TYPES)
model_manager = ModelManager(model_registry=model_registry)

WORKFLOW = {
    "version": "1.0",
    "inputs": [
        {"type": "InferenceImage", "name": "image"},
    ],
    "steps": [
        {
            "type": "ObjectDetectionModel",
            "name": "m1",
            "image": "$inputs.image",
            "model_id": "chess-pieces-and-chess-board-instance-segmentation/1",
        },
        {
            "type": "ObjectDetectionModel",
            "name": "m2",
            "image": "$inputs.image",
            "model_id": "chess-pieces-and-chess-board-instance-segmentation/1",
        },
    ],
    "outputs": [
        {"type": "JsonField", "name": "m1preds", "selector": "$steps.m1.predictions"},
        {"type": "JsonField", "name": "m2preds", "selector": "$steps.m2.predictions"},
    ],
}

execution_engine = ExecutionEngine.init(
    workflow_definition=WORKFLOW,
    init_parameters={
        "workflows_core.model_manager": model_manager,
        "api_key": os.getenv("ROBOFLOW_API_KEY")
    },
)

result = execution_engine.run(
    runtime_parameters={
        "image": {"type": "file", "value": "/path/to/image.jpg"},
        "confidence": 0.8,
    }
)

print(result.keys())
# dict_keys(['m1preds', 'm2preds'])
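The keys of the returned dict match the `JsonField` output names, so the two models' predictions can be post-processed like any dict. A small illustration (the prediction payloads below are hypothetical stand-ins, not the real model output format):

```python
# Hypothetical stand-in for the dict returned by execution_engine.run(...).
example_result = {
    "m1preds": [{"class": "pawn", "confidence": 0.91}],
    "m2preds": [{"class": "rook", "confidence": 0.84}],
}

# Merge both models' detections into one list, tagging each with its source step.
merged = [
    dict(pred, source=step)
    for step, preds in example_result.items()
    for pred in preds
]
print(len(merged))  # 2
```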

Hope this answers your question.

amankumarchagti commented 2 months ago

I think the release is done now, right?

grzegorz-roboflow commented 2 months ago

Absolutely, the code from the example above will now work on the main branch. Bear in mind that I was testing with a model I trained on my personal Roboflow account; you will probably need to use your own models.

grzegorz-roboflow commented 2 months ago

@amankumarchagti, I hope you managed to achieve your use case with workflows. I will close this issue; please feel free to create a new issue if you have further questions about workflows.

john09282922 commented 1 month ago

Hi, I also have an important question. Instead of a Roboflow model, is it possible to use my own model, e.g. one I pretrained with YOLOv8, v9, or v10?

Thanks, Jungmin

grzegorz-roboflow commented 1 month ago

Hi @john09282922, currently, in order to use your own model you need to create a dataset in your Roboflow app workspace; once the dataset is created, you are presented with the Custom Train and Upload option, which allows you to upload your weights.

john09282922 commented 1 month ago

Thanks for the information. Instead of the method you mentioned, is it possible to use my models directly? I have several models, each trained on a lot of data.

yeldarby commented 1 month ago

Yes, you can implement your own registry to load your model from elsewhere.

Note that it will need to conform to the same spec (or you will need to implement your own model class as well).
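A rough sketch of that idea, assuming the registry only needs to expose a `get_model(model_id, api_key)` method like `RoboflowModelRegistry` does (the class name and fallback logic here are hypothetical illustrations, not the actual base-class contract):

```python
# Hypothetical custom registry: serve local model classes for known ids,
# optionally delegating unknown ids to another registry (e.g. Roboflow's).
class LocalModelRegistry:
    def __init__(self, local_models, fallback=None):
        self._local_models = dict(local_models)  # model_id -> model class
        self._fallback = fallback                # e.g. a RoboflowModelRegistry

    def get_model(self, model_id, api_key):
        if model_id in self._local_models:
            return self._local_models[model_id]
        if self._fallback is not None:
            return self._fallback.get_model(model_id, api_key)
        raise KeyError(f"Unknown model id: {model_id}")
```

A registry like this could then be passed to `ModelManager(model_registry=...)` in place of `RoboflowModelRegistry`, provided each local model class conforms to the expected model spec.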