Closed manas-avi closed 2 months ago
We will release a VBD (Vertex Block Descent) based solver next month, which may be exactly what you need. VBD is an implicit solver that offers stability comparable to XPBD while converging better under a limited iteration budget, which makes it well suited to simulating stiff materials. Please keep an eye on the release notes to get notified when it's out.
Hi Anka, thanks a lot for the information; I will keep that in mind. In the meantime, do you know of a workaround that would make Warp work for the case above?
Thanks!
Maybe you can reduce the resolution of your mesh and use the XPBD simulator. In general, it is much harder to simulate stiff models with too many degrees of freedom. Also, reducing the time step can be much more effective than increasing the iteration count for XPBD: both increase the computational cost, but a smaller time step gives better results for the same budget.
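The time-step/stiffness trade-off above can be illustrated on a single stiff 1D spring, without involving Warp at all. Semi-implicit (symplectic) Euler, like Warp's SemiImplicitIntegrator, is only conditionally stable: for a spring of stiffness k and mass m it stays bounded only while dt < 2/omega, with omega = sqrt(k/m). With a stiffness comparable to triangle_ke = 1.0e6 that threshold is tiny, which is why shrinking dt, not the mesh forces, is what buys stability. A minimal sketch (the spring model is a stand-in, not the actual FEM element):

```python
import math

def simulate(k, m, dt, steps, x0=1.0):
    # Semi-implicit (symplectic) Euler on a 1D spring:
    # update velocity from the current position, then position from the new velocity
    x, v = x0, 0.0
    for _ in range(steps):
        v += -(k / m) * x * dt
        x += v * dt
    return abs(x)

k, m = 1.0e6, 1.0          # stiffness on the order of triangle_ke = 1.0e6
omega = math.sqrt(k / m)   # natural frequency: 1000 rad/s
# stability requires dt < 2/omega = 0.002 s
stable = simulate(k, m, dt=1.0e-3, steps=300)    # below threshold: amplitude stays bounded
unstable = simulate(k, m, dt=3.0e-3, steps=300)  # above threshold: diverges rapidly
print(stable, unstable)
```

Doubling XPBD iterations cannot move this threshold, which matches the advice above: for the same compute budget, halving the time step attacks the stiffness limit directly.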
Thanks Anka, I will try the suggestion :)
I am trying to get a differentiable FEM simulation of a 'stiff' triangular mesh (V: 22, F: 44), with contact and friction enabled and triangle_ke = 1.0e6, to run stably. It is based on https://github.com/NVIDIA/warp/blob/main/warp/examples/optim/example_walker.py, modified to add triangles instead of tets.
If I use the SemiImplicitIntegrator provided by Warp, the simulation only avoids blowing up when a very small _stepsize is used. Because the backward pass needs the full trajectory, running the simulation for t seconds of sim time requires keeping (t/_stepsize) states of sim.model, which runs the GPU out of memory.
An easy solution would be an unconditionally stable implicit solver, which XPBD claims to be, but without a high enough iteration count it doesn't work as intended, and with high iteration counts it also runs into a CUDA out-of-memory error.
Is there anything I can try in Warp to make this simulation run stably with a larger _stepsize?
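One standard workaround for the state-storage blow-up described above (independent of which integrator is used) is gradient checkpointing: store only every c-th state on the forward pass, then re-simulate each segment from its checkpoint when the backward pass needs it. Peak storage drops from N states to roughly N/c + c (minimized near c = sqrt(N)) at the cost of one extra forward simulation. A minimal pure-Python sketch with a scalar stand-in for the simulation state (the names and the `step` function are illustrative, not Warp API):

```python
def step(state):
    # placeholder for one integrator step, e.g. integrator.simulate(...);
    # any deterministic update works for the sketch
    return 0.5 * state + 1.0

def forward_checkpointed(s0, n_steps, every):
    # forward pass that retains only every `every`-th state
    checkpoints = {0: s0}
    s = s0
    for i in range(n_steps):
        s = step(s)
        if (i + 1) % every == 0:
            checkpoints[i + 1] = s
    return s, checkpoints

def states_for_segment(checkpoints, start, length):
    # recompute one segment's states from its checkpoint,
    # as the backward pass would do before differentiating that segment
    s = checkpoints[start]
    segment = [s]
    for _ in range(length):
        s = step(s)
        segment.append(s)
    return segment

n, every = 16, 4
final, cps = forward_checkpointed(1.0, n, every)
# peak storage: n//every + 1 checkpoints plus one segment of `every` states,
# instead of all n + 1 states of the trajectory
replayed = states_for_segment(cps, start=12, length=4)
assert replayed[-1] == final  # deterministic replay reproduces the trajectory exactly
```

In a Warp setting the same idea would mean wrapping each segment's re-simulation in its own tape during the backward sweep, so only one segment's worth of states is resident on the GPU at a time.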