Closed: jeising closed this issue 4 years ago.
Thanks @jeising for the good summary. The Python process seems to create a subscriber every 100 ms. Is there a 100 ms loop around the spin()? So far we destroy the publisher and subscriber resources in shared memory only when the whole process is cleaned up. So if the Python side creates and destroys its publishers and subscribers with every spin, that would explain this behaviour (and would also tell us what we have to do).
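For illustration, a minimal sketch of how such a Python listener typically spins (this is not the actual demo_nodes_py code). Each `spin_once()` call goes through wait-set creation and destruction, so with a 100 ms timeout the middleware sees resources come and go roughly every 100 ms:

```python
# Minimal illustrative sketch, not the actual demo_nodes_py listener.
# Each spin_once() call rebuilds the rcl wait set internally, so with a
# 100 ms timeout the RMW layer sees wait-set resources created and
# destroyed about every 100 ms.
import rclpy
from rclpy.node import Node
from std_msgs.msg import String

rclpy.init()
node = Node('listener')
node.create_subscription(
    String, 'chatter',
    lambda msg: node.get_logger().info(msg.data), 10)

while rclpy.ok():
    rclpy.spin_once(node, timeout_sec=0.1)  # ~100 ms per iteration

node.destroy_node()
rclpy.shutdown()
```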
I feel there are two tasks:

This seems to be the place where the wait_set is recreated on every spin. Is this by intention? @wjwwood @Karsten1987

For the iceoryx side I see that we could implement the functionality to remove ports from applications, or the functionality to remove runnables from applications and put the subscriber for this wait_set in its own runnable. The latter functionality is already mentioned in the code.

What would you suggest to do, @michael-poehnl? :)
@michael-poehnl
rclpy creates and destroys a wait set with every spin. The question is whether this is really necessary, but nevertheless rmw_iceoryx must be able to handle it. In the past, iceoryx was used only in setups where resources were created once and not destroyed before the termination of the user process. So it is somehow both a bug in the context of rmw_iceoryx and an extension of iceoryx. I created an issue in iceoryx for this: https://github.com/eclipse/iceoryx/issues/51
> Is this by intention? @wjwwood @Karsten1987
I'm not sure if it is required or just an unintentional issue that could be optimized. Maybe @sloretz can comment; he worked on the executor design in Python.
> rclpy creates and destroys a wait set with every spin. The question is whether this is really necessary, but nevertheless rmw_iceoryx must be able to handle it.
This. Even if rclpy could be changed to avoid this, iceoryx must be able to handle it, imo.
> This seems to be the place where the wait_set is recreated on every spin. I'm not sure if it is required or just an unintentional issue that could be optimized.
@jeising @wjwwood rclpy creating a new wait set every spin() isn't required, just unoptimized. It should be possible to reuse the wait set structure when the guard conditions/timers/clients/service servers/etc. are the same between calls.
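A conceptual, self-contained sketch of that optimization; `WaitSet` and the callables passed in are hypothetical stand-ins for the rcl structures, not real rclpy API:

```python
# Hypothetical sketch of reusing a wait set across spins. WaitSet and the
# callables passed to spin_loop are illustrative stand-ins, not rclpy API.
class WaitSet:
    def __init__(self, entities):
        # snapshot of the node's guard conditions/timers/clients/servers/subs
        self.entities = tuple(entities)

    def matches(self, entities):
        return self.entities == tuple(entities)

def spin_loop(get_entities, wait, iterations=100):
    wait_set = None
    for _ in range(iterations):
        entities = get_entities()
        # Rebuild only when the entity set actually changed, instead of
        # creating and destroying a fresh wait set on every spin.
        if wait_set is None or not wait_set.matches(entities):
            wait_set = WaitSet(entities)
        wait(wait_set)  # block until one of the entities is ready
```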
> (..) rclpy creating a new wait set every spin() isn't required, just unoptimized. It should be possible to reuse the wait set structure when the guard conditions/timers/clients/service servers/etc. are the same between calls.
@sloretz: Should we create an issue in the ros2/rclpy repo to keep track of this?
I would close this issue, as it seems to be fixed on our side.
Description
While running `ros2 run demo_nodes_cpp talker` and `ros2 run demo_nodes_py listener`, RouDi terminates ungracefully.

Error message
How to reproduce
Versions & Setup
Using `osrf/ros:eloquent-desktop` with:

git clone --branch v0.16.0 https://github.com/eclipse/iceoryx.git src/iceoryx
git clone --branch v0.16.0 https://github.com/ros2/rmw_iceoryx.git src/rmw_iceoryx
The docker container gets 2 GB of shared memory.
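Presumably the container is started with something like the following; the `--shm-size` value matches the 2 GB mentioned above, and the colcon build step is an assumption (a standard ROS 2 workspace build):

```bash
# assumed setup: 2 GB of shared memory as described in this issue
docker run -it --shm-size=2g osrf/ros:eloquent-desktop
# inside the container, after the two clones listed above:
colcon build
```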
For all terminals
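The exact per-terminal setup is not preserved in this excerpt; presumably each terminal sources the ROS and workspace environments and selects the iceoryx RMW:

```bash
source /opt/ros/eloquent/setup.bash
source install/setup.bash          # workspace overlay; path is an assumption
export RMW_IMPLEMENTATION=rmw_iceoryx_cpp
```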
Terminal 1
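Terminal 1 presumably starts the RouDi daemon. The binary path below is an assumption for a colcon build of iceoryx v0.16.0; in current iceoryx releases the executable is named `iox-roudi`:

```bash
./build/iceoryx_posh/RouDi   # assumed path; named iox-roudi in newer releases
```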
Terminal 2
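Terminal 2 presumably runs the cpp talker from the description:

```bash
ros2 run demo_nodes_cpp talker
```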
Output of this command is:
Terminal 3
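Terminal 3 presumably runs the Python listener from the description:

```bash
ros2 run demo_nodes_py listener
```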
Error
After a few transmissions, which work as expected, RouDi prints the mentioned error message and terminates. It does not stop the connected processes.

RouDi in verbose mode
Starting the cpp process makes RouDi print:

From the start of the Python process, RouDi prints the text below:
Variants of this issue
Running:

- RouDi, `ros2 run demo_nodes_py talker`, and `ros2 run demo_nodes_py listener`
- RouDi, `ros2 run demo_nodes_py talker`, and `ros2 run demo_nodes_cpp listener`
- RouDi and `ros2 run demo_nodes_py talker`
- RouDi and `ros2 run demo_nodes_py listener`
- RouDi, `ros2 run demo_nodes_cpp talker`, and `ros2 topic echo chatter std_msgs/msg/String`

results in the same error.
How not to reproduce
When using only the cpp demo nodes, `ros2 run demo_nodes_cpp talker` and `ros2 run demo_nodes_cpp listener`, there seems to be no issue.