b-it-bots / mas_perception


refactor `perception_libs` from mas_domestic_robotics #3

Closed minhnh closed 6 years ago

minhnh commented 6 years ago

name: Refactoring
about: Refactoring packages

Description

What is being refactored?

Motivation

Organization

Current

.
└── mcr_scene_segmentation
    └── common
        ├── include
        │   └── bounding_box.h
        └── src
            └── bounding_box.cpp

Proposed

.
└── mas_perception_libs
    └── common
        ├── include
        │   └── bounding_box.h
        └── src
            └── bounding_box.cpp

Checklist

minhnh commented 6 years ago

@alex-mitrevski @sthoduka I'll also move object_recognition over once this is ready. The bounding box visualization can also be moved to perception_libs, though maybe in a different pull request.

alex-mitrevski commented 6 years ago

Hi @minhnh

For this to move into the repository, we absolutely need documentation that addresses at least the following points:

In addition, the merge request description should explain why the merge request is necessary and what problems it solves; otherwise it lacks context and its relevance is difficult to see, particularly because this is a shared repository. For example, does the merge request replace existing functionalities, and if yes, in what sense and why — i.e. what will be better after the merge request?

argenos commented 6 years ago

While this is a nice abstraction and could potentially be used by people outside the team, I agree that it doesn't make sense to make this an independent repo for now, because mas_scene_segmentation depends on mas_perception_libs. My suggestion is to discuss this move again once the API is stable and the meta-package has been fully refactored (see #4).

Using a server-client pattern also impacts introspection. Maybe we can agree with both @b-it-bots/home and @b-it-bots/work on what should be exposed so that we can monitor/diagnose what is happening inside the servers.

alex-mitrevski commented 6 years ago

The rationale for introducing an object detection service — or, in general, a server-client interface for all of these functionalities — was that it makes the functionalities simpler to use effectively in a planning context (e.g. we detect objects on a table when we need to, AND we don't go on with other activities until we are done perceiving). In addition, world modelling is more difficult if we just have topics of detected objects without any information on when/how the objects were detected.

Perhaps we could add a separate node that publishes the detections to a topic? I am however in favour of keeping the service.

argenos commented 6 years ago

I think there has to be some middle ground: can't the server optionally publish this to a topic (maybe also controlled by a parameter)? This is also relevant for all of people perception, which runs online on the streaming images from the camera.
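The middle ground suggested above could be sketched roughly as follows — a minimal, framework-agnostic Python model (not real ROS code; `DetectionServer`, `publish_topic`, and the callback names are all hypothetical) of a server that runs detection on demand and, when a flag analogous to a ROS parameter is set, mirrors each result to topic-style subscribers for monitoring:

```python
import time
from typing import Callable, List

class DetectionServer:
    """Hypothetical on-demand detection server that can optionally
    mirror each result to topic-style subscribers. This is a sketch
    of the pattern under discussion, not the actual ROS service API."""

    def __init__(self, detector: Callable[[], list], publish_topic: bool = False):
        self._detector = detector
        # in ROS this flag would come from a node parameter
        self._publish_topic = publish_topic
        self._subscribers: List[Callable[[dict], None]] = []

    def subscribe(self, callback: Callable[[dict], None]) -> None:
        """Register a topic-style listener (e.g. a diagnostics node)."""
        self._subscribers.append(callback)

    def handle_request(self) -> dict:
        """Service-style entry point: run detection only when asked,
        return the result, and optionally publish it as well."""
        result = {"stamp": time.time(), "objects": self._detector()}
        if self._publish_topic:
            for cb in self._subscribers:
                cb(result)
        return result

# usage: the caller gets a response, and a monitor sees the same result
server = DetectionServer(lambda: ["cup", "bottle"], publish_topic=True)
seen = []
server.subscribe(seen.append)
response = server.handle_request()
```

The point of the sketch is that the blocking request/response interface and the published stream are not mutually exclusive; the publication is a side channel that costs nothing when disabled.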

iawaad commented 6 years ago

Hello,

In case it helps at all: that is indeed what was done in the old pre-ROS days. You called the functionalities through their interfaces when you needed them. You also had some that were triggered to start and published their output (with a timestamp and, at times, confidence values); components that were interested subscribed when they started up, unsubscribed later, and could also trigger them to stop. Because some are expensive, they were only run on demand when called; others simply published simple sensor readings all the time.

Cheers,

Iman


minhnh commented 6 years ago

so maybe actionlib is more of a middle way? It's a little more flexible than services while still abstracting away this on-demand model (instead of explicitly writing event_in). On the other hand, I think publishing the object detection results ought to be done in the layer dealing with the abstraction, since it's the same for all object detection pipelines. This means it doesn't matter whether we do the detection as a service, topics, or an action.

Edit: Here's documentation on communication patterns. TL;DR: actions seem to be the recommended choice here.
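The action semantics being proposed — send a goal, receive feedback while it runs, block until the result arrives — can be modeled in a few lines. This is a toy illustration of the goal/feedback/result pattern, not the real actionlib API; `DetectObjectsAction` and its method names are invented for the sketch:

```python
import threading

class DetectObjectsAction:
    """Toy model of actionlib-style semantics (goal -> feedback* -> result).
    Illustrative only; the real actionlib API differs."""

    def __init__(self, detector):
        self._detector = detector

    def send_goal(self, feedback_cb=None):
        """Accept a goal and start the (possibly slow) detection."""
        self._result = None
        self._done = threading.Event()

        def run():
            if feedback_cb:
                feedback_cb("detection started")  # intermediate feedback
            self._result = self._detector()       # expensive perception step
            self._done.set()

        threading.Thread(target=run, daemon=True).start()

    def wait_for_result(self, timeout=None):
        """The planner blocks here: no other activity proceeds until
        perception is done (or the timeout expires)."""
        finished = self._done.wait(timeout)
        return self._result if finished else None

# usage: a planner sends the goal, then waits for perception to finish
feedback = []
action = DetectObjectsAction(lambda: ["cup", "bottle"])
action.send_goal(feedback_cb=feedback.append)
objects = action.wait_for_result(timeout=2.0)
```

Compared to a bare service call, the goal can be preempted and the feedback channel gives the introspection that was raised as a concern earlier in the thread.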

alex-mitrevski commented 6 years ago

Yeah, I think we can go for actionlib servers.

minhnh commented 6 years ago

@alex-mitrevski @sthoduka @argenos I updated the branch to use actions now. Could someone take a look and review/merge?

argenos commented 6 years ago

@minhnh devel was out of date with kinetic; can you rebase this before merging?

minhnh commented 6 years ago

merged with kinetic latest

minhnh commented 6 years ago

so... merge?

argenos commented 6 years ago

I think we still need a review from someone in @b-it-bots/work

minhnh commented 6 years ago

Didn't @sthoduka approve a while back? @deebuls plz