intel / ros_openvino_toolkit


Creating a Node to Include ROS Topics and Services. #70

Open sradmard opened 4 years ago

sradmard commented 4 years ago

Hi, I am trying to create a new node under vino_sample/src that can provide ROS topics as well as ROS services. Basically, I would like to combine the functionality of pipeline_with_params.cpp and image_people_server.cpp, so that I can have topics publishing the inference outcome while I can call a service to load images. This is useful for people reidentification while reading from recorded images. Given the pipeline design structure, I am looking for advice on the best way to implement it. The pipelines are initialized differently. In vino_sample/src/pipeline_with_params.cpp the pipeline is initialized as follows:

```cpp
for (auto & p : pipelines)
{
  PipelineManager::getInstance().createPipeline(p);
}
PipelineManager::getInstance().runAll();
PipelineManager::getInstance().joinAll();
```

While in vino_core_lib/src/services/frame_processing_server.cpp the pipelines are initialized as follows:

```cpp
for (auto & p : pipelines)
{
  PipelineManager::getInstance().createPipeline(p);
}
ros::ServiceServer srv = nh_->advertiseService<ros::ServiceEvent<typename T::Request, typename T::Response> >(
    "/openvino_toolkit/service",
    std::bind(&FrameProcessingServer::cbService, this, std::placeholders::_1));
service_ = std::make_shared<ros::ServiceServer>(srv);
```

Apart from the pipeline implementation, the YAML configuration may also need to change, since the inference requested through the service might differ from the one published on the topic. I would appreciate any thoughts on this.
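To make my intent concrete, the rough structure I have in mind is something like the sketch below (untested; the node name, the service name, the callback, and the std_srvs/Trigger type are just placeholders, and the ParamManager calls are copied from pipeline_with_params.cpp, so they should be checked against the current tree):

```cpp
// Sketch only: keep the topic-publishing pipelines from pipeline_with_params.cpp
// running while also advertising a service in the same node.
#include <ros/ros.h>
#include <std_srvs/Trigger.h>
#include "vino_core_lib/pipeline_manager.h"
#include "vino_param_lib/param_manager.h"

// Placeholder callback; a custom .srv carrying the gallery image path(s) would
// replace std_srvs/Trigger in a real implementation.
bool cbLoadGallery(std_srvs::Trigger::Request&, std_srvs::Trigger::Response& res)
{
  // TODO: feed the requested image(s) into the relevant pipeline here.
  res.success = true;
  return true;
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "pipeline_with_service");  // placeholder node name
  ros::NodeHandle nh;

  // Pipeline creation as in pipeline_with_params.cpp (config path is a placeholder).
  Params::ParamManager::getInstance().parse("/path/to/pipeline.yaml");
  auto pipelines = Params::ParamManager::getInstance().getPipelines();
  for (auto & p : pipelines)
  {
    PipelineManager::getInstance().createPipeline(p);
  }

  // Additionally advertise the service; the name is a placeholder.
  ros::ServiceServer srv = nh.advertiseService("/openvino_toolkit/load_gallery", cbLoadGallery);

  // An AsyncSpinner keeps the service callback responsive while joinAll()
  // blocks on the pipeline threads, as it does in pipeline_with_params.cpp.
  ros::AsyncSpinner spinner(1);
  spinner.start();
  PipelineManager::getInstance().runAll();
  PipelineManager::getInstance().joinAll();
  return 0;
}
```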

LewisLiuPub commented 4 years ago

Dear @sradmard, sorry for the late reply. It is a really good question, and thank you for thinking about such an optimization in advance. Actually, I have already merged the service-server logic into vino_core_lib, but only in the ROS2 version. I also haven't tested whether its behavior and results meet expectations when both a camera input and a service input are enabled in one DL pipeline.

One of the constraints of the package is that there is no logic to filter the inference results by input type. For example, if we have two input resources (the camera and a picture coming from a service call), we cannot differentiate which results belong to the camera input and which belong to the service call.

So, if you need to enable two or more input resources for one DL pipeline, please give me more context. Then we can start a deeper implementation of the missing logic.
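Just to illustrate the missing piece, the results would need to carry some notion of their originating input, along the lines of the purely illustrative sketch below (none of these types exist in vino_core_lib today):

```cpp
// Illustrative only: tag each inference result with the input that produced it,
// so consumers can filter camera results from service results.
enum class InputSource
{
  CameraTopic,     // frames coming from the camera subscription
  ServiceRequest   // images pushed in through a service call
};

struct TaggedResult
{
  InputSource source;   // which input produced this result
  // ... the existing inference result fields would follow here ...
};
```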

Thanks.

sradmard commented 4 years ago

Dear @LewisLiuPub, thank you for your thorough response. What I am planning to do is basically to recreate the face_recognition_demo in ROS. In that case, providing the link to a gallery of pre-recorded images with labels would be done through a ROS service call, while the camera feed is being processed and its inference gets published on a topic. For this application, the requests for the face detection DL pipeline would come from two inputs, image and camera_topic, while the requests for the face reidentification DL pipeline would come only from the camera_topic input.
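On the service side, the gallery could then be pushed with a simple client call, roughly like the sketch below (the service name and the std_srvs/Trigger type are placeholders; a custom .srv with a string field for the gallery path and its labels would fit this use case better):

```cpp
// Illustrative client (placeholder service name and type): ask the running
// pipeline node to load the labelled gallery for reidentification.
#include <ros/ros.h>
#include <std_srvs/Trigger.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "gallery_loader_client");
  ros::NodeHandle nh;
  ros::ServiceClient client =
      nh.serviceClient<std_srvs::Trigger>("/openvino_toolkit/load_gallery");

  std_srvs::Trigger srv;
  if (client.call(srv))
  {
    ROS_INFO("Gallery load request handled: %s", srv.response.message.c_str());
  }
  else
  {
    ROS_ERROR("Failed to call the gallery load service");
  }
  return 0;
}
```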