> [!WARNING]
> Please ensure the issue is assigned to you before starting work. To avoid duplication of effort, submissions for unassigned issues will not be accepted.

Overview
We are excited to expand the capabilities of the Livepeer AI Network by developing a robust object detection pipeline with multi-use applications. This project will enable near real-time tracking of objects, such as a ball in live sports footage, to power applications in sports analytics, security surveillance, and content moderation. By providing high-speed, accurate tracking, this solution will open up new possibilities for enhanced real-time insights across multiple industries. 🏅
We are seeking community support to implement this pipeline within the Livepeer AI Network. The solution will leverage the PekingU/rtdetr_r50vd model for precise object tracking. By contributing to this pipeline, you will help expand our detection and analytics offerings, opening new possibilities for real-time data! 🚀
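For orientation, the snippet below is a minimal, hedged sketch of single-frame inference with the named model. It assumes the RT-DETR classes shipped in recent Hugging Face transformers releases and a local test image (`frame.jpg` is a placeholder); it is not the pipeline implementation itself:

```python
# Hedged sketch of single-frame inference with PekingU/rtdetr_r50vd, assuming
# the RT-DETR classes available in recent Hugging Face transformers releases.
# "frame.jpg" is a placeholder test image, not part of the pipeline spec.
import torch
from PIL import Image
from transformers import RTDetrForObjectDetection, RTDetrImageProcessor

processor = RTDetrImageProcessor.from_pretrained("PekingU/rtdetr_r50vd")
model = RTDetrForObjectDetection.from_pretrained("PekingU/rtdetr_r50vd")

image = Image.open("frame.jpg")
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# post_process_object_detection expects (height, width) target sizes;
# PIL's image.size is (width, height), hence the reversal.
results = processor.post_process_object_detection(
    outputs, target_sizes=torch.tensor([image.size[::-1]]), threshold=0.5
)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    print(model.config.id2label[label.item()], round(score.item(), 3), box.tolist())
```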
Required Skillset
A basic understanding of Go programming and Hugging Face models is advantageous.
Bounty Requirements
Implementation: Create a functional `/object-detection` route and pipeline within the AI-worker repository. The new pipeline should be accessible through Docker on port `8900`. Also, develop the necessary code within the go-livepeer repository to integrate access to the `object-detection` pipeline exposed by the `ai-worker` component. This includes implementing the payment logic and job routing to ensure seamless operation within the network.
Functionality: The pipeline must accept a video file (or, optionally, a stream) and output the processed video file with labels and confidence scores for the detected objects in each frame. It must also include the post-processing steps needed to render those labels on the output video. Additionally, ensure that users can submit AI job requests to the network in a manner consistent with other AI Network features. (A rough route sketch follows the example request below.)
Example request:
```bash
curl -X POST "https://your-gateway-url/object-detection" \
  -F pipeline="object-detection" \
  -F model_id="" \
  -F video_stream=@directory-path-to-video.mp4
```
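To make the shape of the task concrete, here is a rough, hypothetical sketch of such a route in a FastAPI-based runner. It returns per-frame labels and scores as JSON rather than the annotated output video, and every name in it (route signature, response shape, temp path) is an assumption for illustration, not the repository's actual code:

```python
# Hypothetical sketch of an /object-detection runner route, NOT the actual
# ai-worker code: the route signature, response shape, and temp-file handling
# are all assumptions for illustration.
import cv2  # opencv-python-headless
import torch
from fastapi import FastAPI, File, Form, UploadFile
from transformers import RTDetrForObjectDetection, RTDetrImageProcessor

app = FastAPI()
processor = RTDetrImageProcessor.from_pretrained("PekingU/rtdetr_r50vd")
model = RTDetrForObjectDetection.from_pretrained("PekingU/rtdetr_r50vd")

@app.post("/object-detection")
async def object_detection(
    video_stream: UploadFile = File(...),
    model_id: str = Form(""),
    pipeline: str = Form("object-detection"),
):
    # Persist the upload so OpenCV can decode it frame by frame.
    path = f"/tmp/{video_stream.filename}"
    with open(path, "wb") as f:
        f.write(await video_stream.read())

    cap = cv2.VideoCapture(path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # OpenCV decodes to BGR; the processor expects RGB.
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        inputs = processor(images=rgb, return_tensors="pt")
        with torch.no_grad():
            outputs = model(**inputs)
        result = processor.post_process_object_detection(
            outputs, target_sizes=torch.tensor([frame.shape[:2]]), threshold=0.5
        )[0]
        frames.append([
            {
                "label": model.config.id2label[lbl.item()],
                "score": round(score.item(), 3),
                "box": box.tolist(),
            }
            for score, lbl, box in zip(result["scores"], result["labels"], result["boxes"])
        ])
    cap.release()
    # A complete submission would also render these labels onto an output video,
    # as described under Functionality.
    return {"frames": frames}
```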
Scope Exclusions
None
Implementation Tips
To get started with the initial pipeline structure, refer to the Hugging Face space. You can also explore the following pull requests to see how other video and image handling pipelines were implemented:
In some cases, you might encounter dependency conflicts that prevent you from integrating the new pipeline directly into the regular AI Runner. If this occurs, you can follow the approach outlined in the SAM2 PR and create a custom container for the pipeline. This approach uses the regular AI Runner as the base while keeping the base container lean.
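As a sketch of that custom-container approach (the base image tag and the extra dependency below are placeholders, not the project's actual values):

```dockerfile
# Hypothetical custom-container sketch; the base image tag and the extra
# dependency are placeholders, not the project's actual values.
FROM livepeer/ai-runner:base

# Layer only the packages the object-detection pipeline needs on top of the
# regular AI Runner image, keeping the base container lean.
RUN pip install --no-cache-dir opencv-python-headless

# The entrypoint is inherited from the base runner image.
```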
To streamline development, keep these best practices in mind (an illustrative command sequence follows this list):
Use Developer Documentation: Leverage the developer documentation for the worker and runner, which provides tips for mocking pipelines and direct debugging that can streamline development. Similarly, the developer documentation for installing go-livepeer and the general Livepeer documentation offer example usage and setup instructions, including automatic scripts for orchestrators and gateways, that can expedite your work.
Update OpenAPI Specification: Execute the `runner/gen_openapi.py` script to generate an updated OpenAPI specification.
Generate Go-Livepeer Bindings: In the main repository directory, execute the `make` command to generate the necessary bindings, ensuring compatibility with the go-livepeer repository.
Build Binaries: Run the `make` command in the main repository folder to generate the Livepeer binaries. This will allow you to test your implementation and ensure it integrates smoothly.
Create Docker Images: Build Docker images of Livepeer and test them using appropriate tools and settings to identify any edge cases or bugs. This step is crucial for ensuring robustness and reliability in your implementation.
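Under the assumptions above, one development loop for those steps might look like the following; every directory, flag, and image tag here is illustrative rather than exact:

```bash
# Illustrative sequence only; exact directories, flags, and tags may differ.

# In the ai-worker repository: regenerate the OpenAPI spec after runner changes.
python runner/gen_openapi.py

# Still in ai-worker: regenerate the Go bindings consumed by go-livepeer.
make

# In the go-livepeer repository: build the Livepeer binaries.
make

# Build a Docker image for end-to-end testing (the tag is a placeholder).
docker build -t go-livepeer:object-detection-dev .
```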
How to Apply
Express Interest: Comment on this issue with a brief explanation of your experience and suitability for this task.
Await Review: Our team will review the applications and select a qualified candidate.
Get Assigned: If selected, the GitHub issue will be assigned to you.
Start Working: Begin the task! For questions or support, comment on the issue or join discussions in the `#developer-lounge` channel on our Discord server.
Submit Your Work: Create a pull request in the relevant repository and request a review.
Notify Us: Comment on this GitHub issue once your pull request is ready for review.
Receive Your Bounty: Upon pull request approval, we will arrange the bounty payment.
Earn Recognition: Your contribution will be highlighted in our project’s changelog.
We look forward to your interest and contributions to this exciting project! 💛