Closed: bit-pirate closed this issue 10 years ago
I always intended for you to be able to do things like (with services):
get_providers(interface="LED", allow_semantic=True)
>>> [provider_led1, provider_led2]
start_capability(interface="LED", preferred_provider="provider_led1")
>>> True
It may be the case that it isn't working like that now; I'll look into it.
I remember hitting errors on capability server start-up (i.e. "capability unsupported"), because no provider was given even though semantic providers were available. I'll get you a log the next time I run it.
I don't think this is working correctly, I am working on a fix now.
Upgraded to a pull request.
This is a partial fix. I think that in order to support this correctly I need to implement automatic remapping for semantic capabilities.
The issue I'm currently facing is that I can't set a semantic provider as the default provider for the interface the semantic interface is derived from.
Following the example explained above, I'd like to do this:
<param name="defaults/kobuki_capabilities/LED"
value="kobuki_capabilities/kobuki_led1" />
<param name="defaults/kobuki_capabilities/KobukiLED1"
value="kobuki_capabilities/kobuki_led1" />
However, the capability server currently does not let me do that:
[ERROR] [WallTime: 1383653551.119118] Given default provider 'kobuki_capabilities/kobuki_led1' does not implment interface 'kobuki_capabilities/LED'.
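A validation along these lines, accepting a provider whose semantic interface (transitively) redefines the target interface, could look like the sketch below. The function name and the redefines map are hypothetical, not the actual server code:

```python
# Hypothetical check: a provider is acceptable as a default for an interface
# if it implements that interface directly, or implements a semantic
# interface that (transitively) redefines it.

def provider_implements(provider_interface, target_interface, redefines):
    """redefines maps a semantic interface to the interface it redefines."""
    current = provider_interface
    seen = set()
    while current is not None:
        if current == target_interface:
            return True
        if current in seen:  # guard against cyclic redefinitions
            return False
        seen.add(current)
        current = redefines.get(current)
    return False

redefines = {'kobuki_capabilities/KobukiLED1': 'kobuki_capabilities/LED'}
print(provider_implements('kobuki_capabilities/KobukiLED1',
                          'kobuki_capabilities/LED', redefines))  # True
```

With such a check, the second `<param>` above would be accepted for the `defaults/kobuki_capabilities/LED` key as well.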
PS: I checked out the issue_30 branch for my recent test.
PS: I think commit f6ce5b3525d917da509c8cdccb7c78ffe93ecfd9 fixes the inconvenience raised in #32.
Ok, I'll check into the startup and default parameter stuff as related to using semantic interface providers on normal interfaces.
Again, this works logically, but if there are any remappings between the semantic interface and the normal interface it will not do any automatic remapping of the provider's launch file yet.
I'm not sure if applying remappings for semantic interfaces is always feasible. See #22 for details.
Again, this works logically, but if there are any remappings between the semantic interface and the normal interface it will not do any automatic remapping of the provider's launch file yet.
I think I'm facing this problem now.
While the app manager applies the correct remappings to the apps using an implementation of the decision discussed in #22, the needed remappings are not applied to capabilities which depend on other capabilities.
In order to apply the correct remappings for capabilities, the capability server needs to check not only the interface the provider depends on for remappings, but also its provider.
I guess the implementation for the capability server would be similar in functionality to the app manager's.
Thoughts?
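A minimal sketch of that idea, assuming the server keeps remappings keyed by capability name; the data layout and function name are illustrative only, not the server's actual structures:

```python
# Illustrative only: merge the remappings declared for an interface with
# those declared by the provider implementing it. The provider's entries
# take precedence, mirroring how a provider overrides interface defaults.

def collect_remappings(name, interface_remaps, provider_remaps):
    remaps = dict(interface_remaps.get(name, {}))
    remaps.update(provider_remaps.get(name, {}))
    return remaps

interface_remaps = {'LED1': {'/led': '/led1'}}
provider_remaps = {'LED1': {'/led1': '/mobile_base/command/led1'}}
print(collect_remappings('LED1', interface_remaps, provider_remaps))
```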
bump
This is also needed for a proper first implementation on the TBs.
I rebased this branch. @bit-pirate does this implementation work for your use case, or is there anything else it'd require?
I think that there are some conceptual issues outstanding here related to the remappings. Maybe you and I can sit down together at some point and go over the problem. I'll contact you offline.
The missing feature is correct remapping. Check my last comment above and follow the link to #22.
The interesting part starts at https://github.com/osrf/capabilities/issues/22#issuecomment-27927315
So, what is important is that the topics (as well as parameters, services, etc.) of an interface may be overwritten/redirected/remapped by the provider. Dependent apps and capabilities only know the default topic names the interface defines. Hence, there needs to be a way to retrieve the provider's remappings of the default topics and apply the correct remappings.
The example from the previous thread (kobuki):
The provider for LED (kobuki_bringup) specifies /mobile_base/command/led1 for the interface topic /led. The provider for KobukiLED1 (fake/virtual/empty provider) would do the same for /led1, and the provider for KobukiLED2 would specify /mobile_base/command/led2 for /led2.
Now, apps and caps requiring the LED capability would by default remap their topics to /led (resp. /led1 and /led2). To fix this, both the capability server and the app manager need to check the providers for possible remappings and apply them to the dependent app/cap, if necessary.
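The lookup this implies could be as simple as a per-interface table of topic remappings that dependents consult before wiring up. The table and function below use assumed names, not the capabilities package API:

```python
# Illustrative remapping table for the LED example above; the capability
# server would populate this from the running providers' specs.

provider_remaps = {
    'LED':  {'/led':  '/mobile_base/command/led1'},
    'LED1': {'/led1': '/mobile_base/command/led1'},
    'LED2': {'/led2': '/mobile_base/command/led2'},
}

def resolve_topic(interface, default_topic):
    """Return the actual topic behind an interface's default topic name."""
    return provider_remaps.get(interface, {}).get(default_topic, default_topic)

print(resolve_topic('LED', '/led'))    # -> /mobile_base/command/led1
print(resolve_topic('LED', '/other'))  # unchanged: /other
```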
Hope this makes the issue clearer.
knock knock
Has there been any progress on this issue? I'd like to get this cleared up soon.
Sorry @bit-pirate, I am spending today on this issue, hopefully I can figure out how to resolve it and close it out today.
@bit-pirate I am still working out how to do this even for the simple case where you have one capability running at a time.
For the scenario where you need a remapping, but the provider which actually provides the topic has already started, I think we will either have to restart the capability or do some sort of topic relay. The problem with restarting the providers with different remaps is that they might already be in use under the old name, so renaming them does not work conceptually. With the topic relays we can achieve the equivalent to aliasing (providing the same data over different topic names), but like you pointed out before at the cost of relaying the messages.
Also, I committed the example I had been working with, but none of my experimentation with solutions.
I pushed some more changes, I think I am preparing to implement a relay system.
I just don't see any other way of being even close to consistent if we don't do relaying of topics. You could imagine a more intelligent system which could restart running capabilities if a new remapping is needed, but even then it does not address the scenario where you need to provide both the semantic and non-semantic versions of the topic or service.
Obviously the biggest drawback of the relay method is potentially sending data over the network more than once unnecessarily, especially with large messages and on resource-constrained hardware. To try and address this, I have implemented the relay as a lazy nodelet, meaning that it will not subscribe to the incoming topic unless someone is subscribed to the outgoing one, and it will do shared_ptr passing with any other nodelets sharing the capability_server-provided nodelet manager.
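The lazy-subscription behaviour described here can be sketched without ROS in a few lines. Bus and LazyRelay below are illustrative stand-ins for the ROS topic plumbing and the relay nodelet, not the actual implementation:

```python
from collections import defaultdict

class Bus:
    """Toy pub/sub bus standing in for ROS topic plumbing."""
    def __init__(self):
        self.subs = defaultdict(list)
        self.watchers = defaultdict(list)

    def subscribe(self, topic, cb):
        self.subs[topic].append(cb)
        self._notify(topic)

    def unsubscribe(self, topic, cb):
        self.subs[topic].remove(cb)
        self._notify(topic)

    def publish(self, topic, msg):
        for cb in list(self.subs[topic]):
            cb(msg)

    def on_subscriber_change(self, topic, watcher):
        self.watchers[topic].append(watcher)

    def _notify(self, topic):
        for w in self.watchers[topic]:
            w(len(self.subs[topic]))

class LazyRelay:
    """Forwards in_topic to out_topic, but only subscribes upstream
    while out_topic itself has at least one subscriber."""
    def __init__(self, bus, in_topic, out_topic):
        self.bus, self.in_topic, self.out_topic = bus, in_topic, out_topic
        self.subscribed = False
        bus.on_subscriber_change(out_topic, self._check)

    def _check(self, count):
        if count > 0 and not self.subscribed:
            self.bus.subscribe(self.in_topic, self._forward)
            self.subscribed = True
        elif count == 0 and self.subscribed:
            self.bus.unsubscribe(self.in_topic, self._forward)
            self.subscribed = False

    def _forward(self, msg):
        self.bus.publish(self.out_topic, msg)

bus = Bus()
relay = LazyRelay(bus, '/in', '/out')
received = []
bus.subscribe('/out', received.append)  # relay now subscribes upstream
bus.publish('/in', 'ping')
print(received)          # ['ping']
print(relay.subscribed)  # True
```

When the last downstream subscriber goes away, the relay drops its upstream subscription again, which is the property that keeps an idle relay from costing bandwidth.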
Left to do is integrating the automatic spawning of relay nodelets and expanding the relay to cover services, parameters, and actions.
Feedback?
I was actually thinking of a much simpler solution, which stores possible remappings internally and makes this information accessible to clients, e.g. the app manager.
Using the above LED example:
The capability server detects that the provider for LED remaps /led to /mobile_base/command/led1, and likewise that the other providers remap /led1 to /mobile_base/command/led1 and /led2 to /mobile_base/command/led2. This information is stored.
Now, when the app manager starts an app requiring the LED capability, it queries the capability server for remappings of the default topics and applies them, if needed. That is, the app's topic /my_app/led gets remapped to /mobile_base/command/led1.
With dependencies and semantic interfaces it gets more complicated, but still feasible.
Imagine:
Interfaces:
- LED with /led
- LED1 redefining LED, remapping /led to /led1
- ChristmasTree with /tree_led1, /tree_led2 etc.

Providers:
- LED provider my_led remaps /led to /my_led1
- LED1 provider my_led1 remaps /led1 to /my_led1
- ChristmasTree provider my_christmas_tree depends on LED1, LED2 etc. and remaps /tree_led1 to /led1 etc.

In order to resolve the correct final remapping, the capability server needs to go down the whole chain.
Examples:
- /led -> /my_led1
- /led1 -> /my_led1
- /tree_led1 (-> /led1) -> /my_led1
Now, these chains could be of arbitrary size[1], e.g. interface -> semantic interface -> semantic interface -> semantic interface -> ... . Hence, iteration would be needed.
I'm under the assumption that the proposed solution works for all cases we are currently considering. @wjwwood am I missing something?
[1]Do we check for cyclic dependencies?
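The iteration (and the cycle check asked about in [1]) could be sketched like this, using the topic names from the example above; the function name is hypothetical:

```python
# Follow remapping chains to a fixed point, refusing cycles.

def flatten(topic, remaps):
    """Resolve `topic` through `remaps` until no further remapping applies."""
    seen = {topic}
    while topic in remaps:
        topic = remaps[topic]
        if topic in seen:
            raise ValueError('cyclic remapping involving %r' % topic)
        seen.add(topic)
    return topic

remaps = {'/tree_led1': '/led1', '/led1': '/my_led1'}
print(flatten('/tree_led1', remaps))  # -> /my_led1
print(flatten('/led1', remaps))       # -> /my_led1
```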
Ok, that makes more sense. I was trying to somehow bend the topics provided such that if you start LED and it says it provides /led, then after starting it rostopic should show a topic /led. This is much simpler if the app manager can do the run-time wiring.
@bit-pirate, thanks for your patience, I finally got my head wrapped around this one and I think I have the minimum viable ROS service for you to test out.
This is how I tested it:
$ roslaunch src/capabilities/launch/demo.launch test_case:=remapping_examples debug:=true
$ rosservice call /capability_server/start_capability robot_base_pkg/RearDistance ''
successful: True
$ rosservice call /capability_server/get_remappings robot_base_pkg/rear_distance
topics:
  -
    key: rear/distance
    value: robot/rear/distance
services: []
actions: []
parameters: []
$ rosservice call /capability_server/get_remappings robot_base_pkg/Distance
topics:
  -
    key: distance
    value: robot/rear/distance
services: []
actions: []
parameters: []
In the robot_base_pkg example: robot_base_pkg/robot_base provides the topic robot/rear/distance, and the robot_base_pkg/distance_rear provider depends on that and remaps robot/rear/distance to distance. Finally, the robot_base_pkg/rear_distance provider remaps the distance topic to the rear/distance topic. The ~get_remappings service call for the RearDistance interface (or its provider) returns the "flattened" remapping rear/distance => robot/rear/distance. The same service call for the Distance interface returns the remapping distance => robot/rear/distance. In both cases the actual topic (provided by the robot base) is robot/rear/distance.
This looks good!
Question: I'm assuming that the flattened remapping is based on the default providers used for each interface. When determining the flattened remapping, do you take into account the remappings of the actual running provider as well?
I'm assuming that the flattened remapping is based on the default providers used for each interface. When determining the flattened remapping, do you take into account the remappings of the actual running provider as well?
I should have mentioned that: you can only ask for the remappings of things which are running; you will get an error if you try to call ~get_remappings on something that is not running. Therefore, the default providers are not used at all by the remapping calculation code, which is here:
https://github.com/osrf/capabilities/blob/issue_30/src/capabilities/server.py#L925-972
If this looks ok to you, I will polish it up a bit with tests.
you can only ask for the remappings of things which are running
Figured that out when implementing this new feature, but forgot to update my comment.
If this looks ok to you, I will polish it up a bit with tests.
Yes, it does! Works nicely with my Kobuki caps & apps tests. I already updated my dev-branch for kobuki and the app manager. Please merge this PR at your convenience. Then I'll go ahead and tackle the TB apps.
For example:
interface: LED
semantic interfaces: LED1, LED2
providers: provider_led1 (LED1), provider_led2 (LED2)

Since the above providers implement a semantic interface derived from LED, it would be nice if they could serve as providers (and default providers) for LED.