Closed EnfangCui closed 3 years ago
Hi Enfang, I think the concepts of 'client' and 'server' are too broad, so services and modules cannot be distinguished.
In my opinion, because we need edge-edge collaborative inference, a fully distributed definition may be more appropriate. For example, we could replace 'clientWorkers' and 'serverWorkers' with 'inferenceWorkers', and we could add a keyword such as 'collaborative algorithm', etc.
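The renaming suggested here could be sketched as a hypothetical CRD spec. Note that the API group, kind, and every field name below are illustrative placeholders only, not an existing KubeEdge API; a Python dict stands in for the YAML manifest for concreteness:

```python
# Hypothetical spec illustrating the proposed fully distributed naming:
# symmetric 'inferenceWorkers' instead of 'clientWorkers'/'serverWorkers',
# plus a 'collaborativeAlgorithm' field. Not an existing KubeEdge CRD.
proposed_spec = {
    "apiVersion": "example.kubeedge.io/v1alpha1",  # placeholder group/version
    "kind": "MultiEdgeInferenceService",           # placeholder kind
    "spec": {
        "collaborativeAlgorithm": "pedestrian-reid",  # named collaboration strategy
        "inferenceWorkers": [  # symmetric workers, no client/server role split
            {"name": "worker-a", "nodeName": "edge-node-1"},
            {"name": "worker-b", "nodeName": "edge-node-2"},
        ],
    },
}

# Every worker carries the same key set, so no role distinction is encoded.
assert all(
    set(w) == {"name", "nodeName"}
    for w in proposed_spec["spec"]["inferenceWorkers"]
)
print(len(proposed_spec["spec"]["inferenceWorkers"]))  # → 2
```

The point of the sketch is only that the workers are interchangeable; the collaboration strategy is named once at the spec level rather than baked into worker roles.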
It would be better to submit a proposal so we can comment on it, right?
Ok, I will submit a proposal tomorrow.
/close I'm closing this because this work has been transferred to the work on object search and tracking; see PR #100 for more detail.
Feel free to reopen it.
@llhuii: Closing this issue.
Motivation
Multi-edge collaborative inference refers to the use of multiple edge computing nodes to perform inference cooperatively. This technology makes full use of the distributed computing resources of edge nodes, reduces the latency of edge AI services, and improves inference accuracy and throughput. It is a key technology of edge intelligence. Therefore, we propose a multi-edge collaborative inference framework to help users easily build multi-edge collaborative AI services on KubeEdge.
Goals
Solution
Take pedestrian ReID as an example:
The client reads the camera video data and carries out the first stage of inference. The server receives the region proposals predicted by the client and performs the final target detection and pedestrian feature matching.
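The two-stage flow above can be sketched as follows. This is a minimal illustration of the division of labor, not the actual implementation: the proposal and matching logic are trivial placeholders (a real deployment would run detection and ReID models), and all names are hypothetical.

```python
# Sketch of the two-stage client/server collaborative inference flow.
# All functions and the toy "models" inside them are hypothetical placeholders.

def client_stage(frame):
    """First stage on the edge client: propose candidate regions from a frame."""
    # Placeholder proposal logic: any 2x2 block containing a nonzero pixel
    # becomes a candidate region (a real client would run a lightweight detector).
    proposals = []
    for y in range(0, len(frame), 2):
        for x in range(0, len(frame[0]), 2):
            block = [frame[y + dy][x + dx] for dy in range(2) for dx in range(2)]
            if any(block):
                proposals.append((x, y))
    return proposals

def server_stage(proposals, gallery):
    """Second stage on the server: final detection and pedestrian feature matching."""
    # Placeholder matching: a proposal "matches" if its coordinates appear in
    # the gallery (a real server would compare ReID feature embeddings).
    return [p for p in proposals if p in gallery]

frame = [
    [0, 0, 1, 1],
    [0, 0, 1, 0],
    [0, 0, 0, 0],
    [0, 0, 0, 0],
]
gallery = {(2, 0)}
matches = server_stage(client_stage(frame), gallery)
print(matches)  # → [(2, 0)]
```

The design point is that only the (small) region proposals cross the network, not the raw video, which is what reduces latency and bandwidth in the collaborative setup.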
Open issues
We hope to build a general framework to adapt to a variety of multi-edge collaborative inference scenarios. Therefore, users and developers are welcome to put forward more scenarios, applications and requirements to improve our architecture and CRD interface design.