I don't use the Rviz plugin and don't run into these race conditions in my normal workflow, so I don't fully understand the details at play here, but I'm glad you are tackling this. Hopefully @v4hn can help. Perhaps @jbohren has some insight, since he has improved MoveIt! race conditions in the past. I'll continue to do high-level reviews.
Proof-of-concept implementation on top of #713. Needs code cleanup! Please don't complain (yet) about code style ;-)
@v4hn I cleaned up the commit history. Please review now. I do now get an exception on destroying a static mutex when closing rviz. I cannot yet tell where this comes from; it seems to be related to the unloading of a (static?) CollisionPlugin.
TODOs:
- Check the Python interface: can we go without sleeps now?
- Check the current state before executing a trajectory: is the initial trajectory point == current state?
The Python API works without sleeps like a charm. Added a state validity check to the MoveGroup interface. ABI compatibility is maintained by moving new class members to the end of the struct. Ready to merge.
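For illustration, a minimal sketch of the kind of start-state check described above; the function name, the tolerance, and the null-group handling are all hypothetical, not the actual MoveGroup interface:

```cpp
#include <moveit/robot_state/robot_state.h>
#include <moveit/robot_trajectory/robot_trajectory.h>

// Hypothetical helper: check that the trajectory's first waypoint matches the
// current robot state within a joint-space tolerance before executing it.
bool validateTrajectoryStart(const robot_trajectory::RobotTrajectory& traj,
                             const moveit::core::RobotState& current_state,
                             double tolerance = 0.01)
{
  if (traj.empty())
    return true;  // nothing to execute, nothing to check
  const moveit::core::JointModelGroup* group = traj.getGroup();
  if (!group)
    return true;  // no group information available, skip the check
  const moveit::core::RobotState& start = traj.getFirstWayPoint();
  for (const moveit::core::JointModel* jm : group->getActiveJointModels())
    if (start.distance(current_state, jm) > tolerance)
      return false;  // trajectory does not start at the current state
  return true;
}
```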
Can you rename the subject of the PR to something more descriptive than "work-in-progress", for future reference and to ensure it's ready for merging? ;-)
Also, with 8 commits in this PR, it's my experience that reviewers request they be squashed into fewer (preferably one).
Addressed @davetcoleman's coding style comments. I would like to keep the large number (8) of commits, because each of them has a clearly separated semantics and could be used to roll back individual parts. At least I carefully designed those commits when cleaning up. Before final merging, I will clean up the new fixup commits. Simply give me the go-ahead if you are happy, and I will do the final merge (as well as cherry-pick to jade and kinetic).
No problem if the commits have an actual purpose and are not simply incremental changes.
Sorry for all the pickiness in my reviews, but most of my coding style was learned from MoveIt!, so I easily see discrepancies. And @isucan would have made the same comments, because he used to make them for my PRs :)
Thanks!
gnarf... you're too hasty. I was just reading that request.
@v4hn: I want to progress ;-)
I also force-pushed a new version of the branch cherry-picking-716-717 to let Travis do its work. If there are no complaints, I will fast-forward jade-devel to that branch and finally merge this into kinetic-devel later this evening.
@rhaschke do not merge your own PRs, especially if no one in this discussion thread ever gave you a +1. This rule has been in place for years now. I appreciate you addressing all my feedback, and there wasn't anything else I would have wanted changed, but @v4hn was still in the process of reviewing this PR when you merged it without his approval. Also, in this comment above I said I wanted someone else to review this because "I don't fully understand the details at play here".
In the past it has taken months to get PRs merged, to my great frustration, and I'm glad things are moving faster now. But you didn't say "Please review now" until 3 days ago, so don't rush things too much. I've been reviewing this PR daily.
Anyway, thanks for all these great improvements and bug fixes!
@davetcoleman You are right. I was too hasty. @v4hn Did you have any comments on the content of the PR? If so, please post them, even though the core of the changes has already been pushed. There are two cherry-pick branches pending (for jade and kinetic), and I'm definitely awaiting your feedback before creating final PRs for them.
The reason why I didn't review this request in the last few days is simply that I didn't have the time. This is not as trivial as adding missing dependencies, and I haven't gotten around to looking at it yet.
I'll try to read through the whole request tomorrow and post feedback.
@rhaschke: Thanks for all your work!
@v4hn Playing around with the new fake controllers, I noticed another open issue: The last scene update is not always received in time while waiting for PlanningScene synchronization.
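To make the failure mode concrete, here is a hedged sketch (assuming an already started PlanningSceneMonitor `psm`; the function is illustrative):

```cpp
#include <moveit/planning_scene_monitor/planning_scene_monitor.h>

// Another node has just published a planning scene diff when this runs.
void racyRead(const planning_scene_monitor::PlanningSceneMonitorPtr& psm)
{
  planning_scene_monitor::LockedPlanningSceneRO scene(psm);
  // This read may be based on a stale scene: the diff's update callback can
  // still be queued behind other callbacks in the single spinner thread.
  const moveit::core::RobotState& state = scene->getCurrentState();
  (void)state;  // use the (possibly outdated) state
}
```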
@v4hn thanks for the careful review. I addressed - hopefully all - your comments in #724.
This WIP branch is intended as a joint collaboration branch to fix the race conditions in scene updates reported in #442. All developers are invited to contribute via additional PRs. I'm trying to centralize the discussion of this issue here.
Related PRs:
- #442 (reported issue)
- #671 (active waiting work-around)
- #686 (synchronous update work-around)
As pointed out in a comment on #671, the fundamental underlying reason might be that scene update messages pile up in the callback queue (due to sequential processing by a single spinner thread). Hence, not only pending joint state updates may be missed, but other scene updates as well. I therefore suggest modifying `planning_scene_monitor::LockedPlanningSceneRO` such that it waits until all scene updates with `timestamp < ros::Time::now()` are processed and only then returns the current planning scene. Only this ensures that all pending scene updates are considered at a given point in time.
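A minimal sketch of the waiting logic I have in mind, assuming the monitor tracks the timestamp of the last processed scene update; all names below are illustrative, not actual PlanningSceneMonitor members:

```cpp
#include <algorithm>
#include <boost/thread/mutex.hpp>
#include <boost/thread/condition_variable.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <ros/ros.h>

class SceneUpdateWaiter
{
public:
  // To be called from the monitor's update callbacks after a scene message
  // was fully processed.
  void updateProcessed(const ros::Time& stamp)
  {
    boost::mutex::scoped_lock lock(mutex_);
    last_update_time_ = std::max(last_update_time_, stamp);
    condition_.notify_all();
  }

  // Block until all updates stamped before 't' (e.g. ros::Time::now() at the
  // call site) have been processed, or until the wall-clock timeout expires.
  bool waitUntil(const ros::Time& t, double timeout = 1.0)
  {
    boost::mutex::scoped_lock lock(mutex_);
    ros::WallTime deadline = ros::WallTime::now() + ros::WallDuration(timeout);
    while (last_update_time_ < t && ros::WallTime::now() < deadline)
      condition_.timed_wait(lock, boost::posix_time::milliseconds(50));
    return last_update_time_ >= t;
  }

private:
  boost::mutex mutex_;
  boost::condition_variable condition_;
  ros::Time last_update_time_;
};
```

`LockedPlanningSceneRO` could then call something like `waitUntil(ros::Time::now())` before handing out the locked scene.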
Proposed concrete changes:
Before starting along that route, I'm waiting for positive feedback from some developers. And of course, I hope for active contributions. Please announce it here before you start tackling a subtask.