Closed: laogui-wind closed this issue 2 years ago.
Does this happen if you do not use the micro-ROS Agent? If you subscribe using `ros2 topic echo ...`?
Sorry, I didn't describe the problem precisely. The actual situation is that the memory usage of the micro_ros_agent application keeps increasing.
Which version of the micro-ROS agent are you using?
The commit id is bc2287cc6e6a685f8796183aee5a9f3050a63d83.
The latest commit on the micro-ROS Agent foxy branch is 4dd8ffe63bee1e9d1e0400d734706d0be9c775ad, as you can see here: https://github.com/micro-ROS/micro-ROS-Agent/commits/foxy
Please use the latest version and let us know if the problem persists.
OK, thank you!
After updating to the latest version, we found that micro_ros_agent still shows memory growth.
Could you provide steps for replicating this?
The micro-ROS client code is important for knowing how the whole system is performing.
Steps to reproduce the issue
We reproduce this phenomenon based on the tutorial https://micro.ros.org/docs/tutorials/core/first_application_linux/. In the first terminal, we create an agent:
ros2 run micro_ros_agent micro_ros_agent udp4 --port 8888
PS: the commit id is 4dd8ffe63bee1e9d1e0400d734706d0be9c775ad.
In Terminal 2, create a subscriber on the /microROS/ping topic:
ros2 topic echo /microROS/ping std_msgs/msg/Header
In Terminal 3, run the following script to loop through the creation of publishers:
#!/bin/bash
source /opt/ros/foxy/setup.bash
while true
do
ros2 topic pub --once /microROS/ping std_msgs/msg/Header "{'frame_id' : '1'}"
done
Actual behavior
By observing the agent through htop, it can be found that the value of RES is increasing:
I have replicated your whole environment in micro-ROS Humble and it does not seem to increase the RES memory. Could you make a test in the newest release?
My test is based on the foxy branch and the commit id is 4dd8ffe63bee1e9d1e0400d734706d0be9c775ad. What is the id of the latest release you are talking about? In addition, the above test may take one to two hours to show obvious symptoms.
I'm talking about v3.0.2/Humble. I'll take the time considerations into account in my next test.
OK, thanks for your suggestion.
Changing the ROS version would be very difficult for us. So, is there any commit on the humble branch that could be merged into the foxy branch to solve this problem?
I have tested the environment with the micro-ROS Agent and I have been able to replicate your scenario. I have also replicated the environment using a standalone Micro XRCE-DDS Agent, and there is no memory leak there. So we can infer that there is a memory leak in some part of the micro-ROS Agent, probably in the graph manager, given the nature of the scenario (each `topic pub` creates a new participant).
I will investigate more tomorrow.
Well, thanks for your quick reply and great assistance!!!
Hello @laogui-wind could you check this patch: https://github.com/micro-ROS/micro-ROS-Agent/pull/147 ?
Ok, we're testing the latest commit, thanks again for your help.
Please report back so we can close this issue.
After a period of testing, there is no memory leak. I think the problem has been solved, thanks for your help.
Issue template
Steps to reproduce the issue
Just run the following command in a loop to reproduce:
ros2 topic pub --once <topic forwarded by agent> <msg-type> <msg-data>
Actual behavior
Observing continuously with htop, the memory usage grows slowly.
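One way to quantify that slow growth is to read VmRSS from /proc at two points in time while the pub loop runs. This is a sketch assuming a Linux host; discovering the agent PID via `pgrep` is an assumption, so substitute the PID shown in htop if the pattern matches other processes.

```shell
#!/bin/bash
# Sketch: sample the agent's resident set size from /proc twice, an hour
# apart, while the publisher loop runs. Compare the two VmRSS values.
AGENT_PID=$(pgrep -f micro_ros_agent | head -n 1)
grep VmRSS "/proc/$AGENT_PID/status"   # first sample
sleep 3600                             # let the pub loop run for an hour
grep VmRSS "/proc/$AGENT_PID/status"   # second sample
```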