Closed: wjwwood closed this issue 7 years ago
I also noticed that the run on mini2 had 22 fewer test failures than the ones on mini1 (e.g. http://ci.ros2.org/view/nightly/job/nightly_osx_release/487/), so it might depend somehow on machine configuration or performance.
I'm currently running on mini2, trying to get a failing workspace in place.
The delta is currently at +140 failing tests, so maybe the 22 fewer failures are a red herring. I'll look into it, though.
By the way, of the 17 currently merged PRs, 11 can't possibly be related because they are only built for the turtlebot demo (and thus only on Linux). That includes anything in the ros_astra_camera repository, the ros2/ci change, the turtlebot2_demo change, the cartographer_ros change, and the joystick_drivers change. So it should be easier to focus on the remaining 6.
Thanks @clalancette, I was already narrowing down the PRs this way, but it's a good point.
At this point I'm thinking that none of the PRs is causing this. It must be a machine configuration issue or something else. I'll continue to look into it.
I think I found the issue: it's related to uninitialized memory in rclpy, introduced when the new QoS setting that avoids ROS name conventions was added.
I wasn't considering it originally because I think my date range was slightly too strict in the search parameters. I'm trying out a fix now.
The failing tests are all test communication tests between rclpy and rclcpp. Some facts about them:
Things I have tried:
Right now I'm trying to get a new run of the job to fail; then I'll take that machine offline while the workspace is in place and rerun on that machine. Other ideas are welcome.