Closed by xuchenhan-tri 2 years ago
Thanks for this, Xuchen! One concern: representing all deformable geometry in the world frame seems awkward in a multibody context, such as a deformable end effector on a robot arm, or any articulated multibody system in which some of the bodies are deformable. We normally expect that moving a body will also move everything outboard of that body.
Assuming that every mesh is defined (at least initially) in its own frame, would it make sense to keep that as the frame, position it relative to its parent, and then express the deformations in that frame?
That's a good point. I had considered it, but haven't convinced myself of the benefits of keeping the deformable geometry's frame. Unlike a multibody system, where we benefit from a chain of local frames in a pose update, there isn't much to be gained in the FEM calculation: the amount of work we need to do is the same whether the computation is done in the world frame or the local frame. So on the FEM side, it'd be nice to keep everything in a single fixed frame to avoid remapping the vertex positions at every time step.
It could be argued that having deformable geometries in a local frame would be helpful if we want to, say, render a deformable bubble gripper from a camera mounted inside it. But because the FEM computation doesn't produce mesh information in the local frame, it may be better to map to local frames on an as-needed basis instead of constantly mapping between two frames every time the deformable body moves.
I'd like to hear the benefit we'd gain from the local frames that you have in mind.
As a multibody dynamicist I will confess to a certain reflexive horror of representing anything in the global frame. (For graphics that might seem more reasonable.) One of the worst things for computation is the variable precision produced by a world frame representation. Even for FEM the calculations are always local -- neighboring elements affect one another. Consider what happens as a mobile robot walks away from the World origin. The elements are represented by close-together vertices whose relative positions are the important quantities. Say two of them are at 0.126 and 0.127 in the mesh local frame. At 100 meters they are now at 100.126 and 100.127 and have lost two digits of precision. That is likely enough to put some noise into a derivative calculation. But even if not it is the non-physical feature "we're less accurate away from the origin" that bothers me.
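To make the precision concern concrete, here's a small numeric sketch. It uses single precision to exaggerate the effect; Drake uses doubles, where the same loss appears at larger distances or finer meshes:

```python
import numpy as np

# Two neighboring mesh vertices, 1 mm apart along x, near the world origin.
a_local = np.float32(0.126)
b_local = np.float32(0.127)

# The same pair after the robot has walked 100 m from the world origin.
a_world = np.float32(100.126)
b_world = np.float32(100.127)

# Error in the recovered 1 mm spacing in each case.
err_local = abs(float(b_local - a_local) - 1e-3)
err_world = abs(float(b_world - a_world) - 1e-3)
print(f"spacing error near origin: {err_local:.1e}")
print(f"spacing error at 100 m:    {err_world:.1e}")
```

The relative positions are the physically meaningful quantities, yet the error in the recovered spacing grows by orders of magnitude as the absolute coordinates grow, which is exactly the "less accurate away from the origin" artifact described above.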
@amcastro-tri and @mitiguy may have some well-informed opinions on this topic.
I can get behind that concern. The solution to that doesn't necessarily have to be a frame that is fixed to the parent frame though. A shifted world frame as the local frame (so we don't need to do extra rotations) seems to be enough?
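A shifted world frame could be as simple as storing a translation offset alongside the vertex positions. A hypothetical sketch (class and member names are made up for illustration, not Drake API):

```python
import numpy as np

class ShiftedFrameState:
    """Hypothetical sketch: vertex positions are stored relative to a shift
    point D; the frame has the world's orientation, so no rotations are
    ever applied, only a translation."""

    def __init__(self, p_WD, q_DV):
        self.p_WD = np.asarray(p_WD, dtype=np.float64)  # shift origin, in world
        self.q_DV = np.asarray(q_DV, dtype=np.float64)  # vertices relative to D

    def positions_in_world(self):
        # Pure translation; the FEM state itself stays near the origin of D.
        return self.q_DV + self.p_WD

    def recenter(self):
        # Occasionally move the shift origin to the current centroid so the
        # relative coordinates stay small and full precision is retained.
        centroid = self.q_DV.mean(axis=0)
        self.p_WD += centroid
        self.q_DV -= centroid

# A body 100 m from the origin: after re-centering, the stored coordinates
# are sub-millimeter, but the world positions are unchanged.
state = ShiftedFrameState([100.0, 0.0, 0.0],
                          [[0.126, 0.0, 0.0], [0.127, 0.0, 0.0]])
state.recenter()
print(state.positions_in_world())
```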
We should talk more face-to-face. What would we like to happen if:
During this episode, the deformable pancake is stationary (no deformation, no rigid transform) relative to the breakfast plate. Locally the vertices of the pancake mesh do not change their positions relative to the plate.
Do I get it right that the proposal is to update the vertex coordinates of the pancake in the World frame at every time step, all the way from the kitchen to the table? How would a neat trick like "temporary joint locking/welding" work here?
Sometimes we might want a deformable object to act like a rigid object in a tree :christmas_tree: of multibody dynamics.
As I write this, I'm confusing myself :confused:. I'm not proposing anything concrete yet; I'm just giving a scenario for us to think about in the next f2f meeting.
Thank you for putting this together @xuchenhan-tri. I think it would help if you had a companion design document (gdoc ok) where for each or some of those TODO items you show an example of what that API would look like. I think that'd help future discussions and also to define clear interfaces for the geometry team so that they don't need to get deep into FEM details.
Re moving frames for deformables. I do agree with @sherm1 that we should be careful not to lose precision. However, that is easier said than done. Choosing those frames is often application-specific, and I think we should only support the ones we care about; we'd only know which ones those are once the functionality lands. @xuchenhan-tri's proposal of using the world frame for now does not hinder us from later allowing users to define model-specific frames. So as long as the APIs clearly document the frames (and even have names that reflect them), we won't be blocked from defining model coordinates in other frames. I simulated moving grids myself during my PhD; having a moving frame is definitely convenient for applications like cars (or ships) that travel long distances. But thus far our focus is on things like deformable manipulands and robot mechanisms (e.g., grippers, fingers), which are very local to the robot's model.
TL;DR I like @xuchenhan-tri's proposal as is, modulo looking into the proposed APIs so that we can coordinate better with the geometry team.
@amcastro-tri I am preparing a design document right now. I'll share it when it's ready. Re moving frames - I think it would be quite straightforward to support local mesh frames that are not fixed to the world frame. The only FEM component that we need to touch is a change of frame operation on the state. On the other hand, I don't think we want to support a "moving" frame, and that's not necessary to address the precision issue that @sherm1 mentioned.
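For what it's worth, the change-of-frame operation on the state amounts to applying one rigid transform to all vertex positions (and its rotation to the velocities). A sketch in plain numpy, with hypothetical names loosely following Drake's monogram notation:

```python
import numpy as np

def reexpress_state(R_FW, p_FWo, q_W, v_W):
    """Re-express an FEM state from the world frame W in a frame F, given
    the rotation R_FW and the position of W's origin in F. Sketch only;
    the names are illustrative, not Drake API.

    q_W: (n, 3) vertex positions in W; v_W: (n, 3) vertex velocities in W.
    """
    q_F = q_W @ R_FW.T + p_FWo  # positions: rotate, then translate
    v_F = v_W @ R_FW.T          # velocities: rotate only (F fixed w.r.t. W)
    return q_F, v_F

# Example: F is W rotated 90 degrees about z, with origin shifted by (1, 0, 0).
R_FW = np.array([[0.0, -1.0, 0.0],
                 [1.0,  0.0, 0.0],
                 [0.0,  0.0, 1.0]])
p_FWo = np.array([1.0, 0.0, 0.0])
q_F, v_F = reexpress_state(R_FW, p_FWo,
                           np.array([[1.0, 0.0, 0.0]]),
                           np.array([[0.0, 1.0, 0.0]]))
print(q_F)  # position re-expressed in F: (1, 1, 0)
print(v_F)  # velocity re-expressed in F: (-1, 0, 0)
```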
@DamrongGuoy Whether we pin the deformable geometries to the world frame and whether we can support sleeping deformable objects are two orthogonal issues. We can choose to do or not do one without affecting the other. The framing issue dictates when and where we pay the price of a frame transformation. The sleeping capability is more nuanced: correctly putting objects to sleep and waking them up is not a simple task, and I don't think we support that in general in Drake.
In a f2f conversation with @xuchenhan-tri, we discussed what local frame might be helpful, not just in terms of accuracy but also in terms of utility. For a highly deformable body (or a set of free-flying points), one can define a RigidTransform that characterizes the system in various ways -- e.g., with least squares, principal inertia axes, etc. I think it is helpful to understand what inherent value that defined RigidTransform provides (versus other, perhaps simpler, solutions).
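As one concrete instance of the least-squares/principal-axes idea mentioned above, a best-fit frame for a point set can be built from the centroid and the covariance eigenvectors. This is a sketch of the idea only; as written, the axis signs and ordering are not unique, which is one of the subtleties a real implementation would have to resolve:

```python
import numpy as np

def fit_frame(points):
    """Define a frame for a deformable point set from its centroid and
    principal axes (eigenvectors of the position covariance). Sketch only:
    a real implementation needs a rule to disambiguate axis signs/order."""
    p = np.asarray(points, dtype=float)
    centroid = p.mean(axis=0)
    cov = np.cov((p - centroid).T)
    _, vecs = np.linalg.eigh(cov)  # columns are orthonormal axes
    if np.linalg.det(vecs) < 0:    # flip one axis to get a right-handed rotation
        vecs[:, 0] *= -1.0
    return vecs, centroid          # rotation and translation of the fit frame

points = [[0, 0, 0], [2, 0, 0], [0, 1, 0], [2, 1, 0], [1, 0.5, 0.1]]
R, t = fit_frame(points)
print(t)        # the centroid of the points
print(R.T @ R)  # ~identity: R is orthonormal
```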
That sounds like a can of worms to me @mitiguy (I've done it). Especially for a first pass at this. @xuchenhan-tri's design would not prevent us from adding special cased reference frames in the future if we have the need for it.
@amcastro-tri -- To clarify, the f2f discussion with @mitiguy was more food for thought than an actual request.
Yes, I know. Thanks
I'm so happy we can close this issue now. It took us from the first PR in April to the last PR in September to close it.
Thank you @joemasterjohn and @ggould-tri for a super-quick review of the last PR #17880 in the series.
Close now!
The missing pieces like making deformable meshes for other primitives (boxes, capsules, cylinders, convexes) in addition to Sphere and Mesh belong to subsequent issues.
As we close in on supporting simulating deformable bodies with FEM, we should start thinking how to represent the geometries of these deformable bodies.
At the moment, our geometry nexus, SceneGraph, doesn't have a notion of deformable geometries. All geometries are represented as posed shapes, which is insufficient for deformable geometries given their additional degrees of freedom. For now, I'm only interested in deformable geometries taking on the proximity role (e.g., for collision detection) and the visualization role. More specifically, I want to support the following:
The following will not be considered for now:
Proposal
It’s natural to represent deformable geometries as deformable volume meshes for the foreseeable future, where we define a deformable volume mesh as a tetrahedral mesh with constant topology/connectivity and modifiable vertex positions.
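A minimal sketch of that abstraction (names are hypothetical, not Drake's actual API): connectivity is fixed at construction, and only the vertex positions may be updated afterward.

```python
import numpy as np

class DeformableVolumeMesh:
    """Sketch of the proposed abstraction: tetrahedral connectivity is
    constant after construction; only vertex positions may change."""

    def __init__(self, vertices_W, tetrahedra):
        self._q_W = np.asarray(vertices_W, dtype=float).copy()  # (n, 3), world frame
        self._tets = np.asarray(tetrahedra, dtype=int)          # (m, 4), constant
        self._tets.setflags(write=False)                        # enforce constancy

    def tetrahedra(self):
        return self._tets

    def vertex_positions(self):
        return self._q_W

    def update_vertex_positions(self, q_W):
        q_W = np.asarray(q_W, dtype=float)
        assert q_W.shape == self._q_W.shape, "topology is constant"
        self._q_W = q_W.copy()

# A single tetrahedron deforming: one vertex moves, connectivity is untouched.
mesh = DeformableVolumeMesh(
    [[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], [[0, 1, 2, 3]])
q = mesh.vertex_positions().copy()
q[3] = [0, 0, 0.5]
mesh.update_vertex_positions(q)
print(mesh.vertex_positions()[3])  # vertex 3 moved to (0, 0, 0.5)
```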
(Related: the `multibody/fem` namespace; see #16733.)

Unfortunately, this natural representation doesn’t fit in the posed-shape view of geometries that SceneGraph takes (i.e., each geometry is characterized by a shape plus the pose of the frame it attaches to, as well as its pose relative to that frame). Therefore, I propose we establish a dichotomy between deformable geometries and rigid geometries. All deformable geometries will be associated with a single frame that coincides with the world frame. The assumption that all vertices are expressed in the world frame aligns with the assumption made in the FEM code. Deformable geometries are then characterized by a constant volume mesh topology, and their states are characterized by the vertex positions of that volume mesh in the world frame. In particular, there will be no frame hierarchy as in the rigid case.
Proximity
To perform collision queries between deformable and rigid geometries, we need a deformable volume mesh and a rigid surface mesh, along with appropriate acceleration structures. Some of this infrastructure was already introduced in #15287. The rigid representation required for deformable contact is similar to the one used in hydroelastic contact, so we should be able to reuse the rigid hydroelastic representations. We also need an (approximate) distance field for each deformable geometry to provide penetration-distance information at contact points with rigid geometries.
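One simple way to realize such an approximated distance field, sketched below, is to store a signed-distance sample per vertex and interpolate it barycentrically within each tetrahedron. This is an illustration of the idea under that assumption, not Drake's actual implementation:

```python
import numpy as np

def barycentric(p, tet_vertices):
    """Barycentric coordinates of point p with respect to a tetrahedron."""
    a, b, c, d = tet_vertices
    T = np.column_stack((b - a, c - a, d - a))
    w = np.linalg.solve(T, p - a)
    return np.array([1.0 - w.sum(), *w])

def interpolate_distance(p, tet_vertices, vertex_distances):
    """Approximate the signed distance at p by linearly interpolating
    per-vertex samples with barycentric weights (a common approximation)."""
    return barycentric(p, tet_vertices) @ vertex_distances

# Unit tetrahedron with a signed-distance sample at each vertex; vertex 0
# is 1 unit inside the surface, the other vertices lie on it.
tet = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1.0]])
phi = np.array([-1.0, 0.0, 0.0, 0.0])
print(interpolate_distance(np.array([0.25, 0.25, 0.25]), tet, phi))  # -0.25
```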
Visualization
For visualization, we will still rely on drake_visualizer for now even though we are gradually moving away from it (#16215). Meldis (introduced in #16263) can be strengthened to forward messages for deformable visualization, and it should help ease the transition to MeshCat.
We adopt the same dichotomy between deformable and rigid geometries: information about deformable geometries’ connectivity and vertex-position updates will be communicated through new LCM channels and messages. Since the mesh topology of the deformable geometries does not change over time, we can split the communication into a load phase (communicate mesh topology) and a draw phase (update vertex positions), similar to the current logic of DrakeVisualizer. A prototype of this idea was introduced in PR #14697.
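The load/draw split could look roughly like this. The message and channel names below are hypothetical placeholders, not Drake's actual LCM types:

```python
from dataclasses import dataclass

@dataclass
class LoadMessage:            # sent once: the constant topology
    geometry_id: int
    tetrahedra: list

@dataclass
class DrawMessage:            # sent every frame: only vertex positions
    geometry_id: int
    vertex_positions: list

class DeformableVisualizerStub:
    """Sketch of the two-phase publishing pattern; records messages
    instead of actually publishing over LCM."""

    def __init__(self):
        self.sent = []

    def publish(self, channel, message):
        self.sent.append((channel, message))

    def on_initialization(self, geometry_id, tetrahedra):
        # Load phase: topology never changes, so send it exactly once.
        self.publish("DEFORMABLE_LOAD", LoadMessage(geometry_id, tetrahedra))

    def on_publish_period(self, geometry_id, vertex_positions):
        # Draw phase: only the (much smaller) changing data is re-sent.
        self.publish("DEFORMABLE_DRAW", DrawMessage(geometry_id, vertex_positions))

viz = DeformableVisualizerStub()
viz.on_initialization(7, [[0, 1, 2, 3]])
for t in range(3):
    viz.on_publish_period(7, [[0.0, 0.0, 0.1 * t]])
print([channel for channel, _ in viz.sent])
# ['DEFORMABLE_LOAD', 'DEFORMABLE_DRAW', 'DEFORMABLE_DRAW', 'DEFORMABLE_DRAW']
```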
TODO list
- #17383
- #17463
- `SendLoadMessage` and `SendDrawMessage` in DrakeVisualizer (#17253) (#17808)

I have some code fragments in this working branch #16869 towards the TODOs above.