Open ChengshuLi opened 1 week ago
Here's our current understanding:
Consider this scenario: although all three objects are clearly touching/contacting each other, `RigidContactView` reports all zeros for both impulse and contact data. Even if we wake all objects after they fall asleep, we still get the same result.
Now consider this assisted grasping scenario: when the robot fingers come into contact with the red cube, we get nonzero impulses and valid contact data from `RigidContactView` (`GripperRigidContactAPI` in this specific case). That is exactly how assisted grasping is established. However, once the cube is grasped and things settle, we again get zero impulse and contact data.
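Because assisted grasping only needs to see the impulse on the impact step, a per-step nonzero check is sufficient. A minimal sketch of that check, assuming the contact view exposes a per-pair impulse matrix as a `(n_fingers, n_objects, 3)` NumPy array (hypothetical shape and helper name; the real API may differ):

```python
import numpy as np

def gripper_in_contact(impulse_matrix: np.ndarray, threshold: float = 1e-6) -> bool:
    """Return True if any finger-object pair reports a nonzero impulse.

    impulse_matrix: (n_fingers, n_objects, 3) array of per-pair contact
    impulses, as a RigidContactView-style query might return it
    (hypothetical shape; the real API may differ).
    """
    return bool(np.any(np.linalg.norm(impulse_matrix, axis=-1) > threshold))

# During the impact step (stage 1), the impulse is nonzero, so AG triggers:
impact = np.zeros((2, 1, 3))
impact[0, 0] = [0.0, 0.0, 0.3]
print(gripper_in_contact(impact))   # True

# Once the cube settles, the same query returns all zeros:
settled = np.zeros((2, 1, 3))
print(gripper_in_contact(settled))  # False
```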
There are essentially three stages of a contact:

1. Impact: the contact is first established, and a nonzero impulse is generated.
2. Resting: the objects remain in contact but have settled, so no new impulse is generated.
3. Separation: the objects move apart and the contact ends.
Presumably, `RigidContactView` only gives valid data during stage 1. This is perfectly fine for assisted grasping, since AG is checked per physics step. However, it is not fine for some of our other use cases, e.g. `ToggleOn` state checking, where continual contact is required. This would explain why our `ToggleOn` test is so flaky.
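If valid data only appears on the impact step, one possible workaround for states that need continual contact is to latch "touching" when an impulse is observed and only clear it on a separation signal (e.g. a geometric distance/overlap check). This is a sketch of that idea, not our current implementation:

```python
class LatchedContact:
    """Track 'still touching' across physics steps when the contact view
    only reports impulses on the impact step (stage 1).

    Sketch of one possible workaround: latch on a nonzero impulse, clear
    when a separate signal (e.g. a geometric check) says the bodies have
    actually moved apart.
    """

    def __init__(self):
        self.touching = False

    def update(self, impulse_nonzero: bool, separated: bool) -> bool:
        if impulse_nonzero:
            self.touching = True   # stage 1: impact observed
        elif separated:
            self.touching = False  # stage 3: bodies moved apart
        # otherwise keep the latched value through the resting stage
        return self.touching

tracker = LatchedContact()
steps = [
    (True,  False),  # impact: impulse reported
    (False, False),  # resting: impulse back to zero, still touching
    (False, False),
    (False, True),   # separation detected
]
print([tracker.update(i, s) for i, s in steps])  # [True, True, True, False]
```

With this, a `ToggleOn`-style check could count consecutive "touching" steps without being fooled by the zero impulses of the resting stage.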
There are currently three ways we deal with contact:

1. `_on_contact`: callback invoked when any contact happens
2. `RigidContactAPI`: implemented with `RigidContactView`; used by:
   - `TouchingAnyCondition` (optimized version)
   - `SlicerActive`: state checking if a slicer is touching sliceables
   - `ToggleOn`: state checking if the robot fingers are in contact with a toggle button for some number of steps
3. `ContactBodies`: a state itself and a dependency for many other states (e.g. `ParticleModifier`, `Touching`, `SlicerActive`); also used by:
   - `Draped` state
   - `TouchingAnyCondition` (non-optimized version)
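For the pathways that need more than the impact-step impulse, the stage could be recovered by combining the impulse query with a geometric contact test (e.g. from `ContactBodies` or an overlap query). A hedged sketch with hypothetical helper names:

```python
from enum import Enum

class ContactStage(Enum):
    IMPACT = 1    # contact (re)established this step: impulse reported
    RESTING = 2   # persistent contact: impulse is back to zero
    NONE = 3      # no contact at all

def classify(in_contact_geom: bool, impulse_nonzero: bool) -> ContactStage:
    """Combine a geometric overlap test with the impulse query to
    recover the stage that the impulse data alone cannot distinguish
    (hypothetical helper; the geometric signal would come from e.g.
    ContactBodies or a scene overlap query)."""
    if impulse_nonzero:
        return ContactStage.IMPACT
    return ContactStage.RESTING if in_contact_geom else ContactStage.NONE

# Resting contact: geometrically touching, but zero impulse:
print(classify(in_contact_geom=True, impulse_nonzero=False))  # ContactStage.RESTING
```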