roboflow/inference
A fast, easy-to-use, production-ready inference server for computer vision supporting deployment of many popular model architectures and fine-tuned models.
https://inference.roboflow.com
1.38k stars · 134 forks
Speed-up webrtc inference pipeline #751
Closed · grzegorz-roboflow closed 1 month ago
Description
Speed up the webrtc inference pipeline:
- Replace locking with an async queue
- Use `grab()` so the inference pipeline can adapt its FPS
- Do not drop frames in the aiortc `recv` event
- Decode to an np array only in `retrieve()`
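The first change above can be sketched as a single-slot async queue that replaces a lock-guarded shared frame: a writer overwrites the stale frame instead of blocking, so the consumer always reads the newest one. This is a minimal illustration, not code from the inference codebase; `LatestFrameQueue` and the integer frame ids are hypothetical names.

```python
import asyncio


class LatestFrameQueue:
    """Single-slot hand-off between a frame producer and the inference
    consumer. Writers never block: a stale frame is discarded and replaced,
    which avoids the lock contention described in the PR summary.
    (Illustrative sketch only; not an identifier from roboflow/inference.)"""

    def __init__(self):
        self._queue = asyncio.Queue(maxsize=1)

    def put_latest(self, frame):
        # Drop the stale frame, if any, then enqueue the new one.
        try:
            self._queue.get_nowait()
        except asyncio.QueueEmpty:
            pass
        self._queue.put_nowait(frame)

    async def get(self):
        # Consumer awaits the newest available frame.
        return await self._queue.get()


async def demo():
    q = LatestFrameQueue()
    # Producer outruns the consumer: only the newest frame survives.
    for frame_id in range(5):
        q.put_latest(frame_id)
    return await q.get()


print(asyncio.run(demo()))  # -> 4
```

The `grab()`/`retrieve()` split in the later items follows the same idea at the capture layer: `grab()` only advances the stream (cheap), while `retrieve()` performs the expensive decode to an np array, so frames the pipeline cannot keep up with are skipped before decoding rather than after.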
Type of change
[x] Bug fix (non-breaking change which fixes an issue)
How has this change been tested? Please provide a test case or an example of how you tested the change.
e2e test
Any specific deployment considerations
N/A
Docs
N/A