clausMeko opened 3 months ago
Hi there! Could you please tell us which model you use?
Model Type: Roboflow 3.0 Instance Segmentation (Accurate)
Is that what you mean?
No, I mean: what is the name and version of the model you use?
You mean from my login? It's not public: emerald-fbdh0/4
OK, I will take a look at the metadata and try to reproduce the problem on a similar model to profile the server. This will take some time; realistically it can be done next week.
Thanks. In parallel, could you name a public model with known expected performance, so I can check whether my setup is in general as fast as expected?
ok, that would be even better
will check and send a link
That should be a model with similar characteristics: "yolov8s-seg-640"
I just checked the numbers from our benchmarks, and last time we measured it was faster than you report, so I will redo the test once you confirm the 5 FPS on the public model and we will see.
@PawelPeczek-Roboflow
It is still 5 FPS, i.e. 200 ms/image, on the public model. I added some info.
So somehow I am stuck at 5 FPS on an Orin Nano; I would be grateful for any ideas.
Looking forward to next week.
Cheers, Claus
➜ ~ docker run --net=host --runtime=nvidia --env INSTANCES=2 -d roboflow/roboflow-inference-server-jetson-5.1.1
2c64571536487f15d998db82ed931cc3daed943db4c8958e3e09cc9e4503f101
➜ ~ docker logs -f upbeat_hoover
UserWarning: Unable to import Axes3D. This may be due to multiple versions of Matplotlib being installed (e.g. as a system package and as a pip package). As a result, the 3D projection is not available.
SupervisionWarnings: BoundingBoxAnnotator is deprecated: `BoundingBoxAnnotator` is deprecated and has been renamed to `BoxAnnotator`. `BoundingBoxAnnotator` will be removed in supervision-0.26.0.
UserWarning: Field name "schema" in "WorkflowsBlocksSchemaDescription" shadows an attribute in parent "BaseModel"
INFO: Started server process [19]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:9001 (Press CTRL+C to quit)
INFO: 192.168.2.12:45126 - "GET /model/registry HTTP/1.1" 200 OK
UserWarning: Specified provider 'OpenVINOExecutionProvider' is not in available provider names.Available providers: 'TensorrtExecutionProvider, CUDAExecutionProvider, CPUExecutionProvider'
INFO: 192.168.2.12:45132 - "POST /model/add HTTP/1.1" 200 OK
INFO: 192.168.2.12:43590 - "POST /infer/instance_segmentation HTTP/1.1" 200 OK
INFO: 192.168.2.12:43598 - "GET /model/registry HTTP/1.1" 200 OK
INFO: 192.168.2.12:43614 - "POST /infer/instance_segmentation HTTP/1.1" 200 OK
# goes on forever....
I verified that the model ID is not arbitrary, so the server really uses the model I provide:
# the docker container rejects a random model ID as invalid
inference.core.exceptions.InvalidModelIDError: Model ID: `Ran0mMod3lID` is invalid.
INFO: 192.168.2.12:47106 - "POST /model/add HTTP/1.1" 400 Bad Request
OK, will verify on my end and get back to you.
The benchmarks you're citing are for a nano-sized object detection model vs. a small-sized instance segmentation model. The equivalent would be yolov8n-640.
@yeldarby your proposed model yolov8n-640 runs at ~6 fps on my Orin Nano.
@PawelPeczek-Roboflow I checked what reducing the resolution changes.
import cv2
from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(
    api_url="http://localhost:9001",
    api_key=ROBOFLOW_API_KEY,
)

# 100 times fewer pixels (0.1 scale in both dimensions)
img = cv2.resize(img, (0, 0), fx=0.1, fy=0.1)
results = client.infer(img, model_id=ROBOFLOW_MODEL)
It doubled the frame rate to ~11 fps.
I don't know how meaningful that is - just FYI.
Just checking on my Jetson now - my first guess was that the camera may be providing high-res frames, but let's see what my test shows.
OK, it seems that @clausMeko is right about his results. These are the benchmarks for segmentation models:
yolov8n-seg-640
python -m inference_cli.main benchmark api-speed --model_id yolov8n-seg-640 --legacy-endpoints
Loading images...: 100%|█████████████████████████████████████████████████████████| 8/8 [00:01<00:00, 4.81it/s]
Warming up API...: 100%|███████████████████████████████████████████████████████| 10/10 [00:04<00:00, 2.12it/s]
Detected images dimensions: {(612, 612), (440, 640), (427, 640), (500, 375), (334, 500), (480, 640), (375, 500)}
avg: 123.8ms | rps: 8.0 | p75: 127.4ms | p90: 158.7 | %err: 0.0 |
avg: 124.9ms | rps: 7.9 | p75: 124.0ms | p90: 156.8 | %err: 0.0 |
avg: 125.0ms | rps: 8.0 | p75: 125.9ms | p90: 156.7 | %err: 0.0 |
avg: 121.7ms | rps: 8.3 | p75: 128.5ms | p90: 157.8 | %err: 0.0 |
avg: 122.4ms | rps: 8.2 | p75: 133.0ms | p90: 164.8 | %err: 0.0 |
avg: 121.6ms | rps: 8.2 | p75: 133.3ms | p90: 162.2 | %err: 0.0 |
avg: 122.4ms | rps: 8.2 | p75: 134.8ms | p90: 159.4 | %err: 0.0 |
avg: 120.5ms | rps: 8.4 | p75: 132.4ms | p90: 156.4 | %err: 0.0 |
yolov8s-seg-640
python -m inference_cli.main benchmark api-speed --model_id yolov8s-seg-640 --legacy-endpoints
Loading images...: 100%|█████████████████████████████████████████████████████████| 8/8 [00:01<00:00, 4.37it/s]
Warming up API...: 100%|███████████████████████████████████████████████████████| 10/10 [00:06<00:00, 1.67it/s]
Detected images dimensions: {(612, 612), (440, 640), (427, 640), (500, 375), (334, 500), (480, 640), (375, 500)}
avg: 155.5ms | rps: 6.4 | p75: 171.1ms | p90: 210.7 | %err: 0.0 |
avg: 153.7ms | rps: 6.4 | p75: 174.5ms | p90: 197.3 | %err: 0.0 |
avg: 152.1ms | rps: 6.5 | p75: 173.5ms | p90: 194.8 | %err: 0.0 |
avg: 149.9ms | rps: 6.7 | p75: 175.4ms | p90: 192.1 | %err: 0.0 |
avg: 146.5ms | rps: 6.9 | p75: 167.9ms | p90: 184.5 | %err: 0.0 |
avg: 149.2ms | rps: 6.7 | p75: 174.8ms | p90: 186.9 | %err: 0.0 |
avg: 153.1ms | rps: 6.5 | p75: 172.0ms | p90: 186.7 | %err: 0.0 |
The docs are probably referring to object detection models, which look like this:
yolov8n-640
python -m inference_cli.main benchmark api-speed --model_id yolov8n-640 --legacy-endpoints
Loading images...: 100%|█████████████████████████████████████████████████████████| 8/8 [00:01<00:00, 5.07it/s]
Warming up API...: 100%|███████████████████████████████████████████████████████| 10/10 [00:00<00:00, 13.53it/s]
Detected images dimensions: {(612, 612), (440, 640), (427, 640), (500, 375), (334, 500), (480, 640), (375, 500)}
avg: 67.7ms | rps: 14.6 | p75: 69.5ms | p90: 76.2 | %err: 0.0 |
avg: 66.0ms | rps: 15.3 | p75: 67.3ms | p90: 74.1 | %err: 0.0 |
avg: 66.7ms | rps: 15.0 | p75: 67.9ms | p90: 75.6 | %err: 0.0 |
avg: 66.8ms | rps: 15.0 | p75: 69.0ms | p90: 76.4 | %err: 0.0 |
avg: 65.9ms | rps: 15.2 | p75: 67.9ms | p90: 75.5 | %err: 0.0 |
avg: 66.4ms | rps: 15.2 | p75: 68.5ms | p90: 75.7 | %err: 0.0 |
avg: 67.2ms | rps: 14.9 | p75: 69.9ms | p90: 75.9 | %err: 0.0 |
avg: 66.4ms | rps: 15.1 | p75: 68.0ms | p90: 75.1 | %err: 0.0 |
avg: 66.6ms | rps: 15.0 | p75: 67.8ms | p90: 75.8 | %err: 0.0 |
avg: 67.6ms | rps: 14.8 | p75: 71.3ms | p90: 76.3 | %err: 0.0 |
yolov8s-640
python -m inference_cli.main benchmark api-speed --model_id yolov8s-640 --legacy-endpoints
Loading images...: 100%|█████████████████████████████████████████████████████████| 8/8 [00:01<00:00, 4.98it/s]
Warming up API...: 100%|███████████████████████████████████████████████████████| 10/10 [00:05<00:00, 1.82it/s]
Detected images dimensions: {(612, 612), (440, 640), (427, 640), (500, 375), (334, 500), (480, 640), (375, 500)}
avg: 85.6ms | rps: 11.6 | p75: 86.1ms | p90: 94.5 | %err: 0.0 |
avg: 86.1ms | rps: 11.7 | p75: 88.6ms | p90: 96.0 | %err: 0.0 |
avg: 86.5ms | rps: 11.6 | p75: 89.1ms | p90: 96.0 | %err: 0.0 |
avg: 87.1ms | rps: 11.5 | p75: 90.0ms | p90: 96.0 | %err: 0.0 |
avg: 86.9ms | rps: 11.6 | p75: 89.6ms | p90: 96.1 | %err: 0.0 |
avg: 86.2ms | rps: 11.7 | p75: 87.0ms | p90: 95.9 | %err: 0.0 |
avg: 86.7ms | rps: 11.5 | p75: 90.2ms | p90: 96.1 | %err: 0.0 |
avg: 87.5ms | rps: 11.4 | p75: 91.8ms | p90: 96.3 | %err: 0.0 |
avg: 89.7ms | rps: 11.1 | p75: 95.5ms | p90: 102.3 | %err: 0.0 |
@PawelPeczek-Roboflow so would you recommend choosing object detection over segmentation models when performance matters?
That really depends on your use case - some tasks can be handled by both types of models, some cannot. Additionally, for online video processing, given that you skip the frames you cannot process in time with your model, many applications remain feasible even if you only process 5, 10, or 15 fps rather than every single frame of the stream (for example, a 50 fps camera processed at 5 fps means acting on every tenth frame). It usually depends on the camera's point of view on the observed area (how close the camera is to the objects of interest) and on the speed of the observed objects. Happy to help if I know more details.
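To illustrate what I mean by skipping frames, a naive loop could look like the rough sketch below (the camera source and model ID are placeholders, not your actual setup):

import cv2
from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(api_url="http://localhost:9001", api_key="<api-key>")

cap = cv2.VideoCapture(0)  # stand-in for the real camera source
PROCESS_EVERY_N = 10  # 50 fps camera, ~5 fps inference -> act on every 10th frame
frame_idx = 0

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    frame_idx += 1
    if frame_idx % PROCESS_EVERY_N != 0:
        continue  # drop frames we cannot process in time
    prediction = client.infer(frame, model_id="yolov8n-seg-640")
    # ... act on the prediction ...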
@PawelPeczek-Roboflow I would like to use concurrency for inference, i.e. if a request takes ~100 ms, then I could send 3 overlapping requests, one every ~33 ms, and so on.
Do you have a Python code snippet to do that? I saw your env var INSTANCES=3,
which probably only allows parallel access but gives no speed-up on its own.
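Something along these lines is what I have in mind - a rough, untested sketch using a thread pool against the HTTP API (the frames list and the ROBOFLOW_* values are placeholders from my snippet above):

from concurrent.futures import ThreadPoolExecutor

from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(
    api_url="http://localhost:9001",
    api_key=ROBOFLOW_API_KEY,  # placeholder
)

def infer_one(frame):
    # each call blocks for the full request latency, but several can be in flight at once
    return client.infer(frame, model_id=ROBOFLOW_MODEL)  # placeholder model ID

with ThreadPoolExecutor(max_workers=3) as pool:
    # frames would come from the camera loop; here it is just a list of images
    futures = [pool.submit(infer_one, frame) for frame in frames]
    results = [f.result() for f in futures]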
Sorry for the late response.
I believe we do not have a script to distribute requests, and I cannot really find this INSTANCES env var in the codebase now.
Hi @clausMeko, I looked at your example code and thought you might like to try InferencePipeline?
I prepared a rough example for you. It needs some more love as I don't have your setup on my desk, but please feel free to ask questions if you need help with it. I'm curious whether you'd be satisfied with the result. (P.S. I plugged in the nano model; you may want to test with your target model.)
from functools import partial
from typing import Any, Dict, Optional, Tuple, Union

import cv2 as cv
import numpy as np
import supervision as sv
from pypylon import pylon

from inference.core.interfaces.camera.entities import (
    SourceProperties,
    VideoFrame,
    VideoFrameProducer,
)
from inference.core.interfaces.stream.inference_pipeline import InferencePipeline


class BaslerFrameProducer(VideoFrameProducer):
    """VideoFrameProducer backed by a Basler camera accessed through pypylon."""

    def __init__(self):
        self._camera: pylon.InstantCamera = pylon.InstantCamera(
            pylon.TlFactory.GetInstance().CreateFirstDevice()
        )
        self._camera.StartGrabbing(pylon.GrabStrategy_LatestImageOnly)
        self._converter = pylon.ImageFormatConverter()

    def grab(self) -> bool:
        grabResult = self._camera.RetrieveResult(5000, pylon.TimeoutHandling_ThrowException)
        grab_succeeded = grabResult.GrabSucceeded()
        grabResult.Release()
        return grab_succeeded

    def retrieve(self) -> Tuple[bool, Optional[np.ndarray]]:
        grabResult = self._camera.RetrieveResult(5000, pylon.TimeoutHandling_ThrowException)
        if not grabResult.GrabSucceeded():
            grabResult.Release()
            return False, None
        image = self._converter.Convert(grabResult)
        img = image.GetArray()
        grabResult.Release()
        return True, img

    def release(self):
        # stop grabbing and free the camera
        self._camera.StopGrabbing()
        self._camera.Close()

    def isOpened(self) -> bool:
        return self._camera.IsGrabbing()

    def discover_source_properties(self) -> SourceProperties:
        grabResult = self._camera.RetrieveResult(5000, pylon.TimeoutHandling_ThrowException)
        if not grabResult.GrabSucceeded():
            grabResult.Release()
            raise RuntimeError("Could not grab a frame to discover source properties")
        image = self._converter.Convert(grabResult)
        img = image.GetArray()
        grabResult.Release()
        h, w, *_ = img.shape
        return SourceProperties(
            width=w,
            height=h,
            total_frames=-1,
            is_file=False,
            fps=1,  # TODO: can't see FPS in pylon.InstantCamera, we measure FPS in inference pipeline and expose as Frame property
            is_reconnectable=False,
        )

    def initialize_source_properties(self, properties: Dict[str, float]):
        pass


basler_producer = partial(
    BaslerFrameProducer,
)

box_annotator = sv.BoundingBoxAnnotator()
label_annotator = sv.LabelAnnotator()


def custom_sink(prediction: Dict[str, Any], video_frame: VideoFrame) -> None:
    detections = sv.Detections.from_inference(prediction)
    labels = [f"#{class_name}" for class_name in detections["class_name"]]
    annotated_frame = box_annotator.annotate(
        video_frame.image.copy(), detections=detections
    )
    annotated_frame = label_annotator.annotate(
        annotated_frame, detections=detections, labels=labels
    )
    cv.imshow("", annotated_frame)
    cv.waitKey(1)


inference_pipeline = InferencePipeline.init(
    video_reference=basler_producer,
    model_id="yolov8n-640",
    on_prediction=custom_sink,
)
inference_pipeline.start()
inference_pipeline.join()
@grzegorz-roboflow We bought an RTX 4090 to get higher frame rates.
So this is currently not a priority, but I would like to try it when I have some spare time.
Search before asking
Bug
Setup
I use a Basler camera acA1920-40uc. It provides ~50 fps as cv2 images via OpenCV. I use your SDK to post those images. On the same device, a Jetson Orin Nano (no network latency), Docker runs the roboflow-inference-server-jetson-5.1.1 image. For testing I ran the same setup on my notebook (Dell Precision 5570, i7-12700H) with the CPU image.
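For reference, the timing loop essentially does the following (simplified sketch only, not the exact merged code; img stands for a single BGR frame already grabbed from the Basler camera, and the ROBOFLOW_* values are placeholders):

import time

from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(
    api_url="http://localhost:9001",
    api_key=ROBOFLOW_API_KEY,  # placeholder
)

# img: a single BGR frame from the Basler camera (via pypylon/OpenCV)
start = time.perf_counter()
results = client.infer(img, model_id=ROBOFLOW_MODEL)  # placeholder model ID
elapsed = time.perf_counter() - start
print(f"inference took {elapsed * 1000:.0f} ms (~{1 / elapsed:.1f} fps)")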
Problem
Inference takes longer than expected: ~200 ms per image (self-computed, ~5 fps). This is disappointing for two reasons:
Question
Is there anything I am not considering so I can improve my performance?
Environment
roboflow/roboflow-inference-server-jetson-5.1.1:latest
Minimal Reproducible Example
Sorry - I merged 2 files, so apologies if something seems odd.