Motion-planned arm movement will not avoid walls in an interactive scene. Do walls have a body ID, like floors, that should be appended to the MotionPlanningWrapper's obstacles list?
This is still an issue, and a significant one given the amount of data I'm generating. Are there known issues with the `MotionPlanningWrapper` class that would cause it to ignore static objects when planning an arm trajectory?
We can try to help with this. First of all, can you check whether all wall objects (they have a category of `walls`) are in your set of obstacles?
# Collect the body IDs of every wall object in the scene, then verify that
# they all appear in the planner's obstacle set.
wall_ids = set(bid for wall in scene.objects_by_category["walls"] for bid in wall.get_body_ids())
assert len(wall_ids) > 0, "No walls found in scene"
assert wall_ids.issubset(obstacles), "Some walls not listed in obstacles"
@cgokmen I appreciate the help. I have confirmed that the assert statements in your example do not fire, so all of the wall IDs are in the MotionPlanningWrapper's `mp_obstacles` list. The body IDs of the objects I've spawned are also in the list; however, the arm still moves into them.
@cgokmen I've been looking at this more. I'm curious: should `len(wall.get_body_ids()) == 1`? I'm seeing a single body ID in this list. That makes sense if all of the walls are a single entity, but I figured it was worth asking. The more I look at this, the more I think something in the MotionPlanningWrapper is nullifying the "avoid obstacles" effect.
Sorry I missed your last message. It is indeed expected that the walls have a single body ID. Are you running the wrapper with `full_observability_2d_planning=True` and `collision_with_pb_2d_planning=True`? Can you try calling `plan_base_motion_2d` directly with your desired arguments and checking whether you get the plan you expect? (It has an integrated visualizer.)
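Something along these lines should do as a quick check (a sketch only: `env` is assumed to be an already-created iGibson environment, and the `[x, y, theta]` goal format is an assumption to verify against the examples):

```python
from igibson.utils.motion_planning_wrapper import MotionPlanningWrapper

# Build the wrapper with full observability, pybullet-based 2D collision
# checking, and both visualizers enabled, then request a base plan.
motion_planner = MotionPlanningWrapper(
    env,
    full_observability_2d_planning=True,
    collision_with_pb_2d_planning=True,
    visualize_2d_planning=True,
    visualize_2d_result=True,
)

goal = [1.0, 0.5, 0.0]  # assumed format: [x, y, theta] in the world frame
path = motion_planner.plan_base_motion(goal)
print("base plan:", "found" if path is not None else "not found")
```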
I'm using `plan_arm_motion`, but I can try a quick debug script with base motion. FWIW, I have `full_observability_2d_planning=False`, `collision_with_pb_2d_planning=False`, `visualize_2d_planning=False`, and `visualize_2d_result=False`. Setting them all to True did not open a planning window like in the base movement demo.
@cgokmen I dropped a table into the standard motion planning example, and the motion planner did not avoid it, instead driving straight through it. Here's the modification: https://gist.github.com/CharlesAverill/8152b93f0ecf041a4befba6ae418adc7 and here's a video:
Hi, sorry, I'm not completely sure I understand the issue. Or, if I do understand, these are separate issues, because the arm and base motion planners are independent and implemented very differently. The arm motion planner moves the robot in pybullet and queries the simulator for collisions (expensive). The base motion planner uses a precomputed map of the environment and does 2D motion planning (fast). The execution of the base motion plan is not a real, physically simulated execution; that would require passing the plan to a trajectory generator (e.g., an interpolator or spline generator) and executing it with controllers. Instead, we just "teleport" the robot along the found path for visualization purposes.
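To make the difference concrete, here is a rough sketch of the two styles of collision checking (this is not the actual planner code, and the scene attribute and method names `floor_map` and `world_to_map` are assumptions about the iGibson scene API):

```python
import numpy as np
import pybullet as p

# 2D base planning style: look a world-frame point up in the precomputed
# traversability map. Fast, but blind to anything loaded after the map
# was built.
def base_point_free(scene, xy, floor=0):
    px, py = scene.world_to_map(np.array(xy))
    return scene.floor_map[floor][px, py] != 0

# Arm planning style: ask pybullet whether two bodies are within `distance`
# of each other. Expensive, but it reflects the current simulator state.
def bodies_in_contact(body_a, body_b, distance=0.0):
    return len(p.getClosestPoints(body_a, body_b, distance)) > 0
```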
For your last issue: if you load an object, the precomputed map for base motion planning becomes wrong, so the paths found are no longer guaranteed to be collision-free, as in your example. To solve that, there is a version of the 2D motion planner that you can use by setting `collision_with_pb_2d_planning = True`. This uses pybullet to check for collisions, so you should be able to add any objects.
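In terms of the map-lookup sketch above, the problem looks like this (hypothetical `table` and `scene` objects):

```python
# The traversability map predates the table, so the point under the table
# still reads as free, and the 2D planner happily routes through it.
table_x, table_y, _ = table.get_position()
print(base_point_free(scene, (table_x, table_y)))  # True despite the table
```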
For the arm motion planner: yes, the flags you were discussing apply only to the 2D base motion planner. There is no flag to visualize the arm planner.
Can you check in the arm motion planning code that the wall is in the list of obstacles?
Thanks
I understand; I wasn't aware that collision checking for the base used a precomputed map.
As for the separate issues, I just wanted to see if my arm issues were caused by something I had done, so I went to the motion planning example to see if the same issue would happen. I'm not concerned with base motion, just the arm.
In the arm motion planning code, I can confirm that the wall, and all objects spawned after the scene was initialized, are in the list of obstacles.
Hi @CharlesAverill, thanks for the reply!
Can you send me some example code to reproduce the issue? If the walls are in the list of obstacles, it is really strange that the plan goes through them; that shouldn't happen.
Setting a breakpoint within the collision function could also help: are collisions actually being checked for those body IDs, and what is pybullet returning?
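One low-effort way to instrument this is to wrap the collision test with a logger (a sketch, assuming the arm planner ultimately calls `pairwise_collision` from `igibson.external.pybullet_tools.utils`; verify that before relying on it):

```python
import igibson.external.pybullet_tools.utils as pb_utils

# Wrap the module-level collision test so every query is logged. Calls made
# from inside that module resolve the name at call time, so the wrapper
# should take effect for the planner as well.
_original_pairwise_collision = pb_utils.pairwise_collision

def _logged_pairwise_collision(body1, body2, **kwargs):
    result = _original_pairwise_collision(body1, body2, **kwargs)
    print(f"pairwise_collision({body1}, {body2}) -> {result}")
    return result

pb_utils.pairwise_collision = _logged_pairwise_collision
```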
@roberto-martinmartin We've built up a framework around iGibson so I'll get a simplified version running and share it here sometime today
@cgokmen Just to be clear, it's this collision function, correct?
@roberto-martinmartin It's a bit of a behemoth, but here it is: https://gist.github.com/CharlesAverill/6ae16bfcc53d544092bd4b1b38a619ef. I've confirmed that the planned motion collides with the spawned table and objects, and with the wall when the robot spawns close enough.
Thanks for the code snippet, @CharlesAverill. Could you tell me what the expected behavior is and how it is failing? Thanks!
References to which parts of your code are expected to do what would be very helpful!
I may not understand what you are trying to do or the problem you are facing. Is the provided example your use case?
Is the problem that the robot's end-effector collides with the table and objects? This is expected: since we want the robot to use the end-effector to interact, we deactivate collision checking for the end-effector, including the fingers (https://github.com/StanfordVL/iGibson/blob/master/igibson/utils/motion_planning_wrapper.py#L459), which allows the hand to contact objects. You can change this behavior by setting `allow_collision_links` to `[]`. But in that case, I think your code will often fail: you are requesting the robot's end-effector to move to the position of a randomly selected object, and that position is the object's reference frame, which can be inside the object or, for small objects, very close to the table. Moreover, the frame you move there is the robot's end-effector frame, which I think is defined inside the robot's hand. So the final configuration is often in collision, and the planner will return failure.
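A cheap pre-check can tell those doomed queries apart from genuine planner failures (a sketch: first pose the arm at the IK solution in pybullet, then test the goal configuration before planning):

```python
import pybullet as p

# With the arm already posed at the candidate goal configuration, report
# whether that configuration touches any obstacle; if it does, the planner
# can only fail, so resample the goal instead of planning.
def goal_in_collision(robot_id, obstacle_ids, margin=0.0):
    return any(
        len(p.getClosestPoints(robot_id, obstacle_id, margin)) > 0
        for obstacle_id in obstacle_ids
    )
```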
Again, if you explain what behavior you are trying to obtain and which lines in the code should produce it, it will be much easier.
Thanks!
Hi @roberto-martinmartin. The intended behavior is as follows:
What's actually happening is that the motion planner doesn't avoid the table or objects (or walls, when the robot spawns close enough), causing the arm to collide with many more objects than it should, which generates poor training data for our model.
The provided example is our use case. The end-effector should collide with objects, but only minimally with objects that are not the selected one.
I'll update the snippet I linked to above with some more descriptive comments.
This is where the motion planning starts. The motion planning functions in the Fetch class are already fairly well documented. You can see that I already remove the target object to push from the obstacles list, so your `allow_collision_links` solution could work; I'll try it tomorrow.
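For reference, the exclusion amounts to something like this (a sketch; `target_obj` and `motion_planner` stand in for the objects in the linked script, and `mp_obstacles` is assumed to be reassignable):

```python
# Drop the manipulation target's body IDs from the planner's obstacle set,
# so the arm may reach it while everything else remains blocked.
target_ids = set(target_obj.get_body_ids())
motion_planner.mp_obstacles = [
    bid for bid in motion_planner.mp_obstacles if bid not in target_ids
]
```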
Great! Let me know if that helps. Another option is to plan a motion to a point above the object, for example to (x, y, z) + (0, 0, 0.1), and then just move down manually. This is similar to what we did to push objects.
@roberto-martinmartin The "pick point above and move downwards" method works fairly well. Picking the point above can be done with just IK, and then motion planning can be used to find a path downwards. However, for added realism, it might be worthwhile to add an allow_collision_links_override
option to the arm movement planning function so that motion to the above point can be executed through the physics engine.
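Roughly what the above-then-down strategy looks like in code (a sketch; the wrapper method names `get_arm_joint_positions`, `plan_arm_motion`, and `dry_run_arm_plan` are assumptions to check against your iGibson version):

```python
import numpy as np

# Plan collision-checked motion to a pre-grasp point 10 cm above the object,
# then descend toward the object outside the planner.
above = np.array(target_obj.get_position()) + np.array([0.0, 0.0, 0.1])

joint_goal = motion_planner.get_arm_joint_positions(above)  # IK for pre-grasp
if joint_goal is not None:
    arm_path = motion_planner.plan_arm_motion(joint_goal)
    if arm_path is not None:
        motion_planner.dry_run_arm_plan(arm_path)
        # ...then move the last 10 cm down with small IK steps or a Cartesian
        # controller, letting the physics engine handle the contact.
```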
Do you mean instead of "teleporting" the robot joints to the positions along the trajectory? Yes, interacting with objects requires that, but it needs a controller (or better, a motion generator, for example some kind of spline generator) so that the trajectory of via points in joint space is executed physically. For the interaction part (from the point above down toward the object) I wouldn't use the motion planner to plan downwards unless you are going to implement such a controller. Instead, I'd pass the final point (the location of the object) directly as the goal to a Cartesian controller, so that the motion is simulated physically instead of just changing the arm configuration.
A trajectory controller taking as input a sequence of joint configurations and outputting robot commands (or directly sequencing those to a joint controller and checking completion) would be great to have, but we do not have one right now; we just "teleport" the joints along the trajectory for visualization, which makes the physics unrealistic. If you implement a trajectory controller, that would be awesome! Feel free to submit a PR and we would definitely include it.
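For anyone picking this up, a minimal sketch of such a controller at the pybullet level (the joint index list, gains, and tolerances are placeholders; a real version would sit behind iGibson's controller interface):

```python
import pybullet as p

# Execute a planned joint-space path physically instead of teleporting:
# command each waypoint with position control and step the simulation until
# the arm is within tolerance before moving on to the next waypoint.
def execute_arm_path(robot_id, joint_ids, arm_path, tol=0.01, max_steps=240):
    for waypoint in arm_path:
        p.setJointMotorControlArray(
            robot_id, joint_ids, p.POSITION_CONTROL,
            targetPositions=list(waypoint),
        )
        for _ in range(max_steps):
            p.stepSimulation()
            positions = [p.getJointState(robot_id, j)[0] for j in joint_ids]
            if max(abs(q - w) for q, w in zip(positions, waypoint)) < tol:
                break  # waypoint reached within tolerance
```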
I'm going to close this issue; I think we covered the problem?
I am still teleporting the joints to each point on the trajectory. I'd like to add a trajectory controller, but I have to put the research first. Perhaps later in the summer I'll be able to put time into it.
As for the change I suggested, I was just referring to how the `plan_joint_motion` function has no user-accessible way to force collision checking for the hand and fingers of Fetch, and maintaining an internally edited version of iGibson is messy. I'll create a PR for it; it's a tiny change.
And yep, I think we've hit the closing point. Thank you so much for your help through this!
oh, the "flag" for activating collision checking for hand+fingers is a great idea! thanks for that!
@roberto-martinmartin Hey, I do have one follow-up question. I have trained a locomotion controller for the Locobot and wanted to learn some other skills on top of it in a hierarchical fashion. However, transferring the policy to the Fetch robot does not seem to work, since the two have very different differential-drive behaviors. My question is therefore: is it possible to use the motion planner for the Locobot arm as well? (I did not go into much detail there; perhaps it's just the arm wrapper class.) Thanks for any information :)
Hi @sycz00, I'm not sure I understand your question 100%. You say you train locomotion and try to transfer from the Locobot to Fetch, and your policy suffers because it is low-level control that changes with changes to the robot base. But then you mention something about the arm of the Locobot? I get a bit lost there, sorry.
You can train using the base planner as an action-space abstraction, and that should facilitate transfer between embodiments, because an action of "move x, y meters wrt your current location" would be the same for both robots but implemented differently by the planner+controller.
Is that your question?
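Concretely, the abstraction could look something like this (illustrative only, not an existing iGibson API; `dry_run_base_plan` is an assumed wrapper method):

```python
import numpy as np

# "Planner as action space": the policy outputs a relative (dx, dy)
# displacement, and each robot's own planner/executor realizes it, so the
# policy never depends on robot-specific base dynamics.
def apply_base_action(env, motion_planner, dx, dy):
    x, y, _ = env.robots[0].get_position()
    yaw = np.arctan2(dy, dx)  # face the direction of travel
    path = motion_planner.plan_base_motion([x + dx, y + dy, yaw])
    if path is not None:
        motion_planner.dry_run_base_plan(path)
    return path is not None  # report success so the policy can react
```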
Hey, @roberto-martinmartin
Okay, sorry, I see your point of confusion... :D Those are two different things I was talking about. First, the failure to transfer an RL policy from the Locobot to the Fetch robot. The policy was trained in iGibson 2.0.3 (a lot of changes have been made to the controllers etc. in the newer version, so training it there might solve this?). A general observation is that the controllers (both differential drive) work the same, but in practice the policy behaves very differently on the Fetch robot (I adjusted the linear and angular velocities, but still...).
The second and more important question to me was: since transferring the policy is not working as expected, I want to work with the arm of the Locobot. But all examples so far use the Fetch arm only. My question is whether there's a minimal working example, or whether the functionality is in general the same?
Greetings, Fabi
Very late reply, I'm sorry, but I hope it still helps. 1) Locobot to Fetch, policy trained in iG 2.0.3, both with differential drive: if I understand correctly (correct me if I'm wrong), you are training a policy on one robot with diff drive and transferring it to another. This will fail because, even though the action spaces are the same, the dynamics are very different, leading to different behavior. For example, a fast curved trajectory that works on one robot can make the other robot tip over. Think of it as driving two different cars with the same commands (pedal push, wheel turn), or the same car once empty and once fully loaded. If you want your policy to generalize to different robots, you could either train for that variation or use an action space that absorbs the differences. Continuous control will be hard.
2) Examples of motion planning with the arm, or in general of controlling the arm of a Locobot: no, we do not have those; you would have to modify the existing ones for Fetch.
I hope this is still helpful!
Hey, yeah, a little late, but thanks for your reply. I got along fine with the way you set the positions of the agent's base.
For the grasping, I spawned a cabinet into the iGibson scene and reduced its scale. The idea is to spawn plates and other items inside the cabinet, which the agent then needs to find. I am now using parts of the motion_planning example to figure out the fixed offset between the agent's base and the grasping point. That is still somewhat buggy (the gripper tips move away in some cases). Then I found the function `igibson.utils.grasp_planning_utils.get_grasp_position_for_open`, but there's no metadata available for the cabinet. Do you have any suggestions for using this functionality?
@cgokmen could you provide some info about this functionality?
@sycz00 Hi, this feature should let you sample a grasp pose such that, if executed, it will open the cabinet. You can see an example of how it is used here. What metadata seems to be missing for the cabinet? Are you getting a particular error?
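For context, usage looks roughly like this (a sketch: the argument order and the shape of the return value are assumptions on my part, so check the function source, and note the robot-type caveat in the next reply):

```python
from igibson.utils.grasp_planning_utils import get_grasp_position_for_open

# Sample a grasp pose intended to open (rather than close) the cabinet.
grasp_data = get_grasp_position_for_open(robot, cabinet, True)
if grasp_data is None:
    print("no valid grasp sampled; possibly missing link/joint metadata")
else:
    # grasp_data bundles the grasp pose plus approach information; its exact
    # structure should be checked against the linked example before unpacking.
    print("sampled grasp:", grasp_data)
```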
@cgokmen Hey :) Thanks for your reply. When using this method, the contact positions are always way off. I tried it with doors (the ones already present in each scene), with fridges, and with cabinets. In all cases, the grasping positions were not correct. Anyway, as far as I can see in the code snippet, these functionalities are only available for the BehaviorRobot. Since I am using Fetch, I assume this won't work at all? Thanks in advance :)
Ah, yes: these positions are currently intended for the BehaviorRobot.