
Computer Graphics – Mass-Spring Systems

To get started: Clone this repository using

```
git clone --recursive http://github.com/dilevin/computer-graphics-mass-spring-systems.git
```

Background

Read Chapter 16.5 of Fundamentals of Computer Graphics (4th Edition).

Read "Fast Simulation of Mass-Spring Systems" [Tiantian Liu et al. 2013]

Mass-Spring Systems

In this assignment we'll consider animating a deformable shape.

We model the shape's physical behavior by treating it as a network of point masses and springs. We can think of our shape as a graph where each vertex is a point mass and each edge is a spring.

Given initial conditions (each point's starting position and starting velocity, if any) we will create an animation following the laws of physics forward in time. In the real world, physics is deterministic: if we know the current state, we can be sure of what the next state will be (at least at the scales we're considering). This will also be true of our physical simulation.

The law that we start with is Newton's second law, which states that the forces $\mathbf{f} \in \mathbb{R}^3$ acting on a body must equal its mass $m$ times its acceleration $\mathbf{a} \in \mathbb{R}^3$:

$$\mathbf{f} = m \mathbf{a}.$$

Notice that $\mathbf{f}$ and $\mathbf{a}$ are vectors, each having a magnitude and a direction. We will build our computational simulation by asking for this equation to be true for each point mass in our network. The forces acting on the $i$-th point mass are simply the sum of forces coming from any incident spring edges and any external forces (such as gravity).

Personifying physical objects, we say that they are at rest when their potential energy is zero. When the object is not at rest then it exerts a force pushing it toward its rest state (elastic force), decreasing its potential energy as fast as possible. The force is the negative gradient of the potential energy.

A simple spring connecting the positions $\mathbf{p}_i, \mathbf{p}_j \in \mathbb{R}^3$ of two point masses is defined by its stiffness $k$ and rest length $r_{ij}$. Its potential energy $V$ measures the squared difference of the current length and the rest length times half the stiffness:

$$V(\mathbf{p}_i, \mathbf{p}_j) = \frac{k}{2} \left( \left\|\mathbf{p}_i - \mathbf{p}_j\right\| - r_{ij} \right)^2.$$

The force exerted by the spring on each mass is the negative partial derivative of the potential energy $V$ with respect to the corresponding mass position. For example, for $\mathbf{p}_i$ we have

$$\mathbf{f}_{ij} = -\frac{\partial V}{\partial \mathbf{p}_i}.$$

For now, we can postpone expanding $\frac{\partial V}{\partial \mathbf{p}_i}$, and just recognize that it is a 3D vector.

Our problem is to determine where all of the masses will be after a small duration in time ($\Delta t$).

Question: What is a reasonable choice for the value of $\Delta t$?

Hint: 🎞️ (film frame rate) or 🖥️ (monitor refresh rate)

We'll assume we know the current positions $\mathbf{p}^t_i$ for each mass at the current time ($t$) and the current velocities $\dot{\mathbf{p}}^t_i$. When $t = 0$ then we call these the initial conditions of the entire simulation. For $t > 0$, we can still think of these values as the initial conditions for the remaining time.

In the real world, the trajectory of an object follows a continuous curve as a function of time. In our simulation, we only need to know the position of each mass at discrete moments in time. We use this to build discrete approximations of the time derivatives (velocities and accelerations) that we encounter. Immediately, we can replace the current velocities with a backward finite difference of the positions over the small time step:

$$\dot{\mathbf{p}}^t_i = \frac{\mathbf{p}^t_i - \mathbf{p}^{t-\Delta t}_i}{\Delta t},$$

where $\mathbf{p}^{t-\Delta t}_i$ is the position at the previous time.
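As a small illustrative sketch of this bookkeeping in Eigen (the helper name and row-per-mass layout are assumptions, matching the matrix convention introduced below):

```cpp
#include <Eigen/Dense>

// Backward finite difference: velocities at time t from two position snapshots.
// p is n x 3 at time t; p_prev is n x 3 at time t - Δt; dt is the time step.
Eigen::MatrixXd backward_velocity(
  const Eigen::MatrixXd & p, const Eigen::MatrixXd & p_prev, const double dt)
{
  return (p - p_prev) / dt;
}
```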

We can also use a central finite difference to define the acceleration at time $t$:

$$\ddot{\mathbf{p}}^t_i = \frac{\mathbf{p}^{t+\Delta t}_i - 2\mathbf{p}^t_i + \mathbf{p}^{t-\Delta t}_i}{\Delta t^2}.$$

This expression mentions our unknown variables $\mathbf{p}^{t+\Delta t}_i$ for the first time. We'll soon see that based on the definition of the potential spring energy above and the acceleration here we can solve for the values of these unknown variables.

Time integration as energy optimization

In the equation $\mathbf{f} = m\mathbf{a}$, the acceleration term depends linearly on the unknowns $\mathbf{p}^{t+\Delta t}$. Unfortunately, even for a simple spring the forces depend non-linearly on $\mathbf{p}^{t+\Delta t}$. This means we have a non-linear system of equations, which can be tricky to satisfy directly.

Question: We've chosen to define $\mathbf{f}$ as the forces that implicitly depend on the unknown positions $\mathbf{p}^{t+\Delta t}$ at the end of the time step. What would happen if we defined the forces to explicitly depend on the (known) current positions $\mathbf{p}^t$?

An alternative is to view physics simulation as an optimization problem. We will define an energy that will be minimized by the value of $\mathbf{p}^{t+\Delta t}$ that satisfies $\mathbf{f} = m\mathbf{a}$. The minimizer $\mathbf{p}$ of some function $E(\mathbf{p})$ will satisfy $\frac{dE}{d\mathbf{p}} = 0$. So we construct an energy $E$ such that $\frac{dE}{d\mathbf{p}} = \mathbf{f} - m\mathbf{a} = 0$:

$$\mathbf{p}^{t+\Delta t} = \mathop{\text{argmin}}_{\mathbf{p}}\ \sum_{ij} \frac{k}{2}\left(\left\|\mathbf{p}_i - \mathbf{p}_j\right\| - r_{ij}\right)^2 + \sum_i \frac{m_i}{2\Delta t^2}\left\|\mathbf{p}_i - 2\mathbf{p}^t_i + \mathbf{p}^{t-\Delta t}_i\right\|^2.$$

Keen observers will identify that the first term is potential energy and the second term resembles kinetic energy. Intuitively, we can see the first term as trying to return the spring to rest length (elasticity) and the second term as trying to keep masses moving in the same direction.

Because of the $\left\|\mathbf{p}_i - \mathbf{p}_j\right\|$ term, minimizing $E$ is a non-linear optimization problem. The standard approach would be to apply gradient descent (slow), the Gauss-Newton method, or Newton's method (too complicated for this assignment).

In a relatively recent SIGGRAPH paper "Fast Simulation of Mass-Spring Systems", Tiantian Liu et al. made a neat observation that makes designing an algorithm to minimize $E$ quite simple and fast. For each spring $ij$, they observe that the non-linear energy can be written as a small optimization problem:

$$\frac{k}{2}\left(\left\|\mathbf{p}_i - \mathbf{p}_j\right\| - r_{ij}\right)^2 = \min_{\mathbf{d}_{ij} \in \mathbb{R}^3,\ \left\|\mathbf{d}_{ij}\right\| = r_{ij}} \frac{k}{2}\left\|\left(\mathbf{p}_i - \mathbf{p}_j\right) - \mathbf{d}_{ij}\right\|^2.$$

It may seem like we've just created extra work. We took a closed-form expression (left) and replaced it with an optimization problem (right). Yet this optimization problem is small ($\mathbf{d}_{ij}$ is a single 3D vector) and can be easily solved independently (and even in parallel) for each spring (i.e., $\mathbf{d}_{ij}$ doesn't depend on $\mathbf{d}_{\ell k}$, etc.). Reading the right-hand side in English it says: find the vector $\mathbf{d}_{ij}$ of length $r_{ij}$ that is as close as possible to the current spring vector $\mathbf{p}_i - \mathbf{p}_j$.
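Reading it this way also gives the minimizer directly (a step the text leaves implicit): rescale the current spring vector to have length $r_{ij}$, assuming the two endpoints do not coincide:

$$\mathbf{d}_{ij} = r_{ij}\,\frac{\mathbf{p}_i - \mathbf{p}_j}{\left\|\mathbf{p}_i - \mathbf{p}_j\right\|}.$$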

Now, suppose we somehow already knew the vector $\mathbf{d}_{ij}$ corresponding to the unknown optimal solution $\mathbf{p}^{t+\Delta t}$. Then, treating each $\mathbf{d}_{ij}$ as a constant, we could find the optimal solution by solving the quadratic optimization problem:

$$\mathbf{p}^{t+\Delta t} = \mathop{\text{argmin}}_{\mathbf{p}}\ \tilde{E}(\mathbf{p}) = \mathop{\text{argmin}}_{\mathbf{p}}\ \sum_{ij} \frac{k}{2}\left\|\left(\mathbf{p}_i - \mathbf{p}_j\right) - \mathbf{d}_{ij}\right\|^2 + \sum_i \frac{m_i}{2\Delta t^2}\left\|\mathbf{p}_i - 2\mathbf{p}^t_i + \mathbf{p}^{t-\Delta t}_i\right\|^2.$$

The modified energy $\tilde{E}(\mathbf{p})$ is quadratic with respect to the unknowns $\mathbf{p}$, therefore the solution is found when we set its first derivative equal to zero:

$$\frac{d\tilde{E}}{d\mathbf{p}} = 0.$$

This leads to a straightforward "local-global" iterative algorithm:

1. (Local) Given the current values of $\mathbf{p}$, determine the optimal $\mathbf{d}_{ij}$ for each spring.
2. (Global) Given all of the $\mathbf{d}_{ij}$ vectors, find the positions $\mathbf{p}$ that minimize the quadratic energy $\tilde{E}$.
3. If not "satisfied", go to step 1.

For the purposes of this assignment we will assume that we're "satisfied" after a fixed number of iterations (e.g., 50). More advanced stopping criteria could (should) be employed in general.
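As a concrete sketch of this loop in Eigen (the names `E`, `r`, `p`, and the `solve_global_step` callback are hypothetical placeholders, not the assignment's API; the global step is derived below):

```cpp
#include <Eigen/Dense>
#include <functional>

// One time step of the local-global solver (sketch).
// E: m x 2 edge list, r: m rest lengths, p: n x 3 positions (updated in place).
void local_global_step(
  const Eigen::MatrixXi & E,
  const Eigen::VectorXd & r,
  const std::function<Eigen::MatrixXd(const Eigen::MatrixXd &)> & solve_global_step,
  Eigen::MatrixXd & p)
{
  Eigen::MatrixXd d(E.rows(), 3);
  for (int iter = 0; iter < 50; ++iter)
  {
    // Local step: rescale each spring vector to its rest length.
    for (int e = 0; e < E.rows(); ++e)
    {
      const int i = E(e, 0);
      const int j = E(e, 1);
      d.row(e) = r(e) * (p.row(i) - p.row(j)).normalized();
    }
    // Global step: minimize the quadratic energy with d held fixed.
    p = solve_global_step(d);
  }
}
```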

Matrices

The subtext of this assignment is understanding the computational aspects of large matrices. In the algorithm above, Step 1 is easy and relies on "local" information for each spring.

Step 2, on the other hand, involves all springs simultaneously. Matrices are our convenient notation for representing both the linear operators (e.g., the system matrix in the equation $Q\mathbf{p} = \mathbf{b}$ we are about to derive) and the quadratic forms (e.g., the $\mathrm{tr}\left(\mathbf{p}^{\top} Q \mathbf{p}\right)$ term in the energy $\tilde{E}$).

Let's begin by being precise about some notation. We will stack up all $n$ of the unknown mass positions $\mathbf{p}_i \in \mathbb{R}^3$ as the rows of a matrix $\mathbf{p} \in \mathbb{R}^{n \times 3}$. We can do the same for the known previous time steps' positions $\mathbf{p}^t, \mathbf{p}^{t-\Delta t} \in \mathbb{R}^{n \times 3}$.

We can then express the inertial term using matrices:

$$\frac{1}{2\Delta t^2} \mathrm{tr}\left(\left(\mathbf{p} - 2\mathbf{p}^t + \mathbf{p}^{t-\Delta t}\right)^{\top} M \left(\mathbf{p} - 2\mathbf{p}^t + \mathbf{p}^{t-\Delta t}\right)\right),$$

where $\mathrm{tr}(X)$ computes the trace of $X$ (sums up the diagonal entries: $X_{11} + X_{22} + \cdots$),

and the entries of the square matrix $M \in \mathbb{R}^{n \times n}$ are set to

$$M_{ij} = \begin{cases} m_i & \text{if } i = j, \\ 0 & \text{otherwise.} \end{cases}$$

The potential energy term can be similarly written with matrices. We'll start by introducing the signed incidence matrix $A \in \mathbb{R}^{m \times n}$ of our mass-spring network of $n$ vertices and $m$ edges. The rows of $A$ correspond to an arbitrary (but fixed) ordering of the edges in the network. In a mass-spring network, the edges are un-oriented in the sense that the spring acts symmetrically on its vertices. For convenience, we'll pick an orientation for each edge anyway. For the $e$-th edge $ij$, we should be sure to use the same orientation when computing $\mathbf{p}_i - \mathbf{p}_j$ and for the following entries of $A$. So, for the $e$-th row of $A$ corresponding to the edge connecting vertices $i$ and $j$ we'll assign values:

$$A_{ev} = \begin{cases} +1 & \text{if } v = i, \\ -1 & \text{if } v = j, \\ 0 & \text{otherwise.} \end{cases}$$

Using this matrix as a linear operator we can compute the spring vectors for each edge: the $e$-th row of $A\mathbf{p} \in \mathbb{R}^{m \times 3}$ contains $\mathbf{p}_i - \mathbf{p}_j$ for the $e$-th edge $ij$.
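A minimal dense sketch of building $A$ from an $m \times 2$ edge list `E`, in the spirit of src/signed_incidence_matrix_dense.cpp below (illustrative, not the reference solution):

```cpp
#include <Eigen/Dense>

// Build the m x n signed incidence matrix A from an m x 2 edge list E.
Eigen::MatrixXd signed_incidence(const int n, const Eigen::MatrixXi & E)
{
  Eigen::MatrixXd A = Eigen::MatrixXd::Zero(E.rows(), n);
  for (int e = 0; e < E.rows(); ++e)
  {
    A(e, E(e, 0)) = +1; // i: the arbitrarily chosen "tail" of edge e
    A(e, E(e, 1)) = -1; // j: the "head" of edge e
  }
  return A;
}
```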

We can now write the modified potential energy of $\tilde{E}$ in matrix form:

$$\frac{k}{2} \mathrm{tr}\left(\left(A\mathbf{p} - \mathbf{d}\right)^{\top}\left(A\mathbf{p} - \mathbf{d}\right)\right),$$

where we stack the vector $\mathbf{d}_{ij} \in \mathbb{R}^3$ for each edge in the corresponding rows of $\mathbf{d} \in \mathbb{R}^{m \times 3}$.

Combining our two matrix expressions together we can write $\tilde{E}$ entirely in matrix form:

$$\tilde{E}(\mathbf{p}) = \frac{k}{2} \mathrm{tr}\left(\mathbf{p}^{\top} A^{\top} A \mathbf{p}\right) - k\,\mathrm{tr}\left(\mathbf{p}^{\top} A^{\top} \mathbf{d}\right) + \frac{1}{2\Delta t^2} \mathrm{tr}\left(\mathbf{p}^{\top} M \mathbf{p}\right) - \frac{1}{\Delta t^2} \mathrm{tr}\left(\mathbf{p}^{\top} M \left(2\mathbf{p}^t - \mathbf{p}^{t-\Delta t}\right)\right) + \text{constants}.$$

Question: Why do we not bother to write out the terms that are constant with respect to $\mathbf{p}$?

We can clean this up by introducing a few auxiliary matrices:

$$Q = kA^{\top}A + \frac{1}{\Delta t^2} M \in \mathbb{R}^{n \times n}, \qquad \mathbf{b} = kA^{\top}\mathbf{d} + \frac{1}{\Delta t^2} M\left(2\mathbf{p}^t - \mathbf{p}^{t-\Delta t}\right) \in \mathbb{R}^{n \times 3}.$$

Now our optimization problem is neatly written as:

$$\mathbf{p}^{t+\Delta t} = \mathop{\text{argmin}}_{\mathbf{p}}\ \frac{1}{2} \mathrm{tr}\left(\mathbf{p}^{\top} Q \mathbf{p}\right) - \mathrm{tr}\left(\mathbf{p}^{\top} \mathbf{b}\right).$$
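Under these definitions, assembling $Q$ (once, since it never changes) and $\mathbf{b}$ (every iteration, since $\mathbf{d}$ changes) is direct in Eigen; the function names are hypothetical, and the parameters name the quantities from the text:

```cpp
#include <Eigen/Dense>

// Q = k A^T A + (1/dt^2) M : constant across iterations and time steps.
Eigen::MatrixXd assemble_Q(
  const double k, const double delta_t,
  const Eigen::MatrixXd & A, const Eigen::MatrixXd & M)
{
  return k * A.transpose() * A + (1.0 / (delta_t * delta_t)) * M;
}

// b = k A^T d + (1/dt^2) M (2 p_t - p_prev) : rebuilt every iteration.
Eigen::MatrixXd assemble_b(
  const double k, const double delta_t,
  const Eigen::MatrixXd & A, const Eigen::MatrixXd & M,
  const Eigen::MatrixXd & d, const Eigen::MatrixXd & p_t,
  const Eigen::MatrixXd & p_prev)
{
  return k * A.transpose() * d
    + (1.0 / (delta_t * delta_t)) * M * (2.0 * p_t - p_prev);
}
```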

Recall: The trace operator behaves very nicely when differentiating:

$$\frac{\partial}{\partial \mathbf{p}} \mathrm{tr}\left(\mathbf{p}^{\top}\mathbf{b}\right) = \mathbf{b}$$

and

$$\frac{\partial}{\partial \mathbf{p}} \frac{1}{2}\mathrm{tr}\left(\mathbf{p}^{\top} Q \mathbf{p}\right) = Q\mathbf{p} \quad \text{(for symmetric } Q\text{)}.$$

Taking a derivative with respect to $\mathbf{p}$ and setting the expression to zero reveals the minimizer of this quadratic energy:

$$Q\mathbf{p} = \mathbf{b}.$$

Since $Q$ is a square invertible matrix we can solve this system, which we often write as:

$$\mathbf{p} = Q^{-1}\mathbf{b}.$$

Solving as the action of multiplying by a matrix's inverse

From an algorithmic point of view, the notation $\mathbf{p} = Q^{-1}\mathbf{b}$ is misleading. It might suggest first constructing Qinv = inverse(Q) and then conducting the matrix multiply p = Qinv * b. This is almost always a bad idea. Constructing Qinv can be very expensive and numerically unstable.

Instead, we should think of the action of multiplying by the inverse of a matrix as a single "solve" operation: p = solve(Q,b). Some programming languages (such as MATLAB) indicate this using operator overloading as "matrix division": p = Q \ b.

All good matrix libraries (including Eigen) will implement this "solve" action. A very common approach is to compute a Cholesky factorization of the matrix into a lower triangular matrix times its transpose:

$$Q = LL^{\top}.$$

Finding this matrix $L$ takes $O(n^3)$ time in general.

The action of solving against a triangular matrix is simple forward-/back-substitution and takes $O(n^2)$ time. We can conceptually rewrite our system $Q\mathbf{p} = LL^{\top}\mathbf{p} = \mathbf{b}$ as $L\mathbf{y} = \mathbf{b}$ with $L^{\top}\mathbf{p} = \mathbf{y}$.

A key insight of the Liu et al. paper is that our matrix $Q$ is always the same (regardless of the iterations in our algorithm above and even regardless of the time $t$ that we're computing positions for). We can split our solve routine into two steps: precomputation done once when the mass-spring system is loaded in, and fast substitution at run-time:

```
// Once Q is known
L = precompute_factorization(Q)
// ... each time step
// ... ... each iteration
p = back_substitution(transpose(L), forward_substitution(L, b))
```
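In Eigen, the dense version of this pattern uses the `LLT` class, which computes exactly the $LL^{\top}$ factorization above. A sketch, with `Q` and `b` as defined in the text:

```cpp
#include <Eigen/Dense>

// Once, when the mass-spring system is loaded: factor Q = L L^T.
Eigen::LLT<Eigen::MatrixXd> llt(Q);
// ... each time step
// ... ... each iteration: only forward/back substitution.
Eigen::MatrixXd p = llt.solve(b);
```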

Sparse Matrices

For small mass-spring systems, $O(n^3)$ at loading time and $O(n^2)$ at runtime may be acceptable, but for even medium-sized systems this will become intractable.

Fortunately, we can avoid this worst-case behavior by observing a special structure in our matrices. Let's start with the mass matrix $M$. All of the values of this matrix are zero except the $n$ entries along the diagonal. Storing this as a general matrix we would be storing $n^2 - n$ zeros. Instead, we can acknowledge that this matrix is sparse and store only the non-zeros along the diagonal.

Similarly, the matrix $A$ has only $2m$ non-zeros (a $+1$ and a $-1$ per edge) and the other entries are zero. Furthermore, the result of the product $A^{\top}A$ and by extension $Q$ will mostly contain zeros. The number of non-zeros is in fact $O(n + m)$. Large mass-spring systems tend to have $O(n)$ edges, so we can happily think of the number of non-zeros as $O(n)$.

We've reduced the storage required from $O(n^2)$ to $O(n)$. What's the catch? General (or "dense") matrices can be easily mapped to memory linearly. For an arbitrary sparse matrix, we need to store additional information to know where each non-zero entry is. The most common general approach is to store a sorted list of values in each column (or row) of the matrix. This is a rather awkward data structure to manipulate directly. Similar to the pitfalls of bubble sort, inserting values one at a time can be quite slow since we'd have to keep the lists sorted after each operation.

Because of this, most sparse matrix libraries require (or prefer) that all entries be inserted at once, presorting the non-zero indices before creating the data structure. Friendly sparse matrix libraries like Eigen will let us create a list of (i, j, v) triplets for each non-zero and then insert all values at once.

So if our dense matrix code looked something like:

```
Afull = zero(m,n)
for each pair i j with value v
  Afull(i,j) += v
end
```

By convention we use += instead of = to allow for repeated pairs in the list.

then we can replace this with:

```
triplet_list = []
for each pair i j with value v
  triplet_list.append( i, j, v )
end
Asparse = construct_from_triplets( triplet_list )
```
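In Eigen this becomes `Eigen::Triplet` plus `setFromTriplets`. For example, the sparse signed incidence matrix could be assembled like this (a sketch mirroring the dense version above; the function name is hypothetical):

```cpp
#include <Eigen/Sparse>
#include <vector>

// Sparse signed incidence matrix from an m x 2 edge list E.
Eigen::SparseMatrix<double> signed_incidence_sparse(
  const int n, const Eigen::MatrixXi & E)
{
  // Collect each non-zero of A as a (row, column, value) triplet.
  std::vector<Eigen::Triplet<double>> ijv;
  ijv.reserve(2 * E.rows());
  for (int e = 0; e < E.rows(); ++e)
  {
    ijv.emplace_back(e, E(e, 0), +1.0);
    ijv.emplace_back(e, E(e, 1), -1.0);
  }
  Eigen::SparseMatrix<double> A(E.rows(), n);
  // Duplicate (i, j) pairs are summed, matching the += convention above.
  A.setFromTriplets(ijv.begin(), ijv.end());
  return A;
}
```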

Warning:

Do not attempt to set values of a sparse matrix directly. That is, do not write:

```
A_sparse(i,j) = v
```

Storing only the non-zero entries means we must rewrite all basic matrix operations, including matrix-vector product, matrix addition, matrix-matrix product, transposition, etc. This is outside the scope of our assignment and we will use Eigen's SparseMatrix class.

Most important to our mass-spring system is the solve action discussed above. Similar to the dense case, we can precompute a factorization and use substitution at runtime. For our sparse matrix, these steps will be much faster than their dense counterparts, with the substitution faster still and nearly $O(n)$.
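In Eigen, the sparse counterpart of the dense `LLT` above is `SimplicialLLT` (a sketch, with `Q` and `b` as before):

```cpp
#include <Eigen/Sparse>

// Once, when the mass-spring system is loaded: sparse Cholesky factorization.
Eigen::SimplicialLLT<Eigen::SparseMatrix<double>> solver;
solver.compute(Q);
// ... each time step
// ... ... each iteration: sparse forward/back substitution.
Eigen::MatrixXd p = solver.solve(b);
```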

Pinned Vertices

Subject to the external force of gravity, our spring networks will just accelerate downward off the screen.

We can pin down vertices (e.g., those listed in b) at their initial positions by requiring that their corresponding position values are always forced to be equal to their initial values $\mathbf{p}^{\text{rest}}_i$:

$$\mathbf{p}_i = \mathbf{p}^{\text{rest}}_i \quad \text{for all pinned vertices } i.$$

There are various ways we can introduce this simple linear equality constraint into the energy optimization above. For this assignment, we'll use the easy-to-implement penalty method. We will add an additional quadratic energy term which is minimized when our pinning constraints are satisfied:

$$\sum_{i \in \text{pinned}} \frac{w}{2} \left\|\mathbf{p}_i - \mathbf{p}^{\text{rest}}_i\right\|^2,$$

where $w$ should be set to some large value (e.g., w=1e10). We can write this in matrix form as:

$$\frac{w}{2} \mathrm{tr}\left(\left(C\mathbf{p} - C\mathbf{p}^{\text{rest}}\right)^{\top}\left(C\mathbf{p} - C\mathbf{p}^{\text{rest}}\right)\right),$$

where $C \in \mathbb{R}^{|\text{pinned}| \times n}$ has one row per pinned vertex with a $+1$ in the corresponding column.

Expanding this term, we can add its quadratic and linear coefficients to $Q$ and $\mathbf{b}$ above correspondingly:

$$Q \leftarrow Q + wC^{\top}C, \qquad \mathbf{b} \leftarrow \mathbf{b} + wC^{\top}C\mathbf{p}^{\text{rest}}.$$
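For instance, folding the penalty into the system might look like this dense sketch (`pinned`, holding the pinned vertex indices, and `p_rest`, the initial positions, are hypothetical names):

```cpp
#include <Eigen/Dense>
#include <vector>

// Fold the pinning penalty into Q and b (dense sketch).
void add_pin_penalty(
  const std::vector<int> & pinned,
  const Eigen::MatrixXd & p_rest,
  Eigen::MatrixXd & Q,
  Eigen::MatrixXd & b)
{
  const double w = 1e10;
  const int n = (int)Q.rows();
  // C: one row per pinned vertex, with a single +1 in that vertex's column.
  Eigen::MatrixXd C = Eigen::MatrixXd::Zero((int)pinned.size(), n);
  for (int c = 0; c < (int)pinned.size(); ++c)
  {
    C(c, pinned[c]) = 1.0;
  }
  Q += w * C.transpose() * C;            // quadratic coefficient (precompute once)
  b += w * C.transpose() * (C * p_rest); // linear coefficient (each solve)
}
```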

Tasks

White List

Black List

Write your dense code first. This will be simpler to debug.

src/signed_incidence_matrix_dense.cpp

src/fast_mass_springs_precomputation_dense.cpp

src/fast_mass_springs_step_dense.cpp

At this point you should be able to run on small examples.

For example, running ./masssprings_dense ../data/single-spring-horizontal.json should produce a swinging, bouncing spring:

If the single spring example is not working, debug immediately before proceeding to examples with more than one spring.

Running ./masssprings_dense ../data/horizontal-chain.json will produce a hanging catenary chain:

Running ./masssprings_dense ../data/net.json will produce a hanging net:

If you try to run ./masssprings_dense ../data/flag.json you'll end up waiting a while.

Start your sparse implementations by copying-and-pasting your correct dense code. Remove any dense operations and construct all matrices using triplet lists.

src/signed_incidence_matrix_sparse.cpp

src/fast_mass_springs_precomputation_sparse.cpp

src/fast_mass_springs_step_sparse.cpp

Now you should be able to see more complex examples, such as running ./masssprings_sparse ../data/flag.json or ./masssprings_sparse ../data/skirt.json:

Notes for TAs editing the README

This README file is too complex for texify to render. Use readme2tex locally to render the TeX to SVGs.

```
python -m readme2tex --output README.md README.tex.md --nocdn
sed -i 's/invert_in_darkmode\"/invert_in_darkmode\&sanitize=true\"/g' README.md
```