The remainder of this thesis is structured as follows. In Chapter 2 I briefly review background material prerequisite for understanding the following chapters; this chapter may be skipped by readers already familiar with fluid simulation in computer graphics. In Chapter 3 we describe our algorithm for tracking time-varying surfaces without any priors on the topology. In Chapter 4 we describe our novel technique for correcting errors in liquid surfaces due to resolution mismatches by either smoothing the errors or turning them into waves. In Chapter 5 we describe how to formulate the dynamics of a perturbation of the Navier-Stokes equations and how to damp this perturbation to zero so we can re-simulate parts of an existing simulation without wave reflections at the boundary. Finally, in Chapter 6 we summarize and conclude the thesis with a brief overview of the research that has been done concurrently with the work described in this thesis.

**Background**

In this chapter we briefly summarize the most important background knowledge
needed to read the rest of this thesis. The point is to collect, in one place, relevant
information otherwise scattered throughout the scientific literature. The exposition
is *not* meant to be exhaustive or in-depth; rather, we try to get the main point across
and leave references for the readers who would like to know more.

**2.1** **Descent methods**

Many problems in computer graphics can be formulated as finding the minimizer of some function that measures the error or deviation from some ideal state. The problems in this thesis are no exception. In Chapter 3 we non-rigidly align one shape with another while minimizing the amount of deformation, and in Chapter 4 we minimize the deviation of a liquid surface from the set of physically valid states.

In this section we will briefly review the minimization algorithms employed in this thesis. We refer to the materials that served as inspiration for this section for a more in-depth exposition [Madsen et al. 2004; Pighin and Lewis 2007].

Abstractly, a minimization problem is the task of finding the global minimizer **x**^{†} ∈ ℝ^{n} of a certain problem-specific function **F**: ℝ^{n} → ℝ,

**x**^{†} = arg min_{**x** ∈ ℝ^{n}} **F**(**x**). (2.1)

Finding the global minimizer as in Equation (2.1) is generally intractable for non-convex **F**, so we often settle for a local minimizer **x**^{⋆} ∈ ℝ^{n} satisfying

**F**(**x**^{⋆}) ≤ **F**(**x**) for ‖**x** − **x**^{⋆}‖ < δ (2.2)

for some suitable δ > 0.

The literature on minimization algorithms is extensive and remains an active area of research; new application-specific methods that exploit the particular properties of **F** come out all the time. In the following, however, we will focus on a fairly general class of minimization algorithms called *descent methods*.

Descent methods follow a very simple iterative pattern. You start with an initial guess **x**_{0} and then repeat the following steps until convergence, which is determined by |**F**(**x**_{n+1}) − **F**(**x**_{n})| < ε for some stopping criterion ε > 0.

1. Choose a descent direction, *i.e.* choose a ∆**x** such that **F**(**x**_{n} + α∆**x**) < **F**(**x**_{n}) for some α > 0.

2. Search for the best α > 0 such that **F**(**x**_{n} + α∆**x**) ≤ **F**(**x**_{n} + α′∆**x**) for all α′ > 0 with |α − α′| < δ.

3. Update the current guess, **x**_{n+1} = **x**_{n} + α∆**x**.

We will not delve deeper into the details of choosing α in step two since it is in general a nontrivial problem. In the simplest case you can get away with using a small fixed α in each iteration. We refer to the references provided at the beginning of this section for more details on how to perform such a *line search*. In the next sections we will look at three common ways of choosing ∆**x** in step one.
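The three steps above can be sketched in a few lines of Python. This is only an illustrative skeleton: the names `F`, `direction`, `eps`, and the crude step-halving rule (a simplification of the local line search described in step two) are choices made here, not prescribed by the text.

```python
import numpy as np

def descent(F, direction, x0, eps=1e-8, max_iter=1000):
    """Generic descent loop: pick a direction, find a step, update.

    F         : objective, F(x) -> scalar
    direction : returns a descent direction delta_x at x
    eps       : stop when |F(x_{n+1}) - F(x_n)| < eps
    """
    x = np.asarray(x0, dtype=float)
    f_prev = F(x)
    for _ in range(max_iter):
        dx = direction(x)                       # step 1: descent direction
        alpha = 1.0
        # step 2 (simplified): halve alpha until the objective decreases
        while F(x + alpha * dx) >= f_prev and alpha > 1e-12:
            alpha *= 0.5
        x = x + alpha * dx                      # step 3: update the guess
        f_new = F(x)
        if abs(f_new - f_prev) < eps:           # stopping criterion
            break
        f_prev = f_new
    return x
```

Plugging in `direction = lambda x: -grad_F(x)` turns this skeleton into the gradient descent method of the next section.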

**2.1.1** **Gradient descent**

The simplest method of choosing ∆**x** arises from a first-order Taylor approximation of **F**(**x** + ∆**x**) around the current guess **x**,

**F**(**x** + ∆**x**) = **F**(**x**) + **F**′(**x**)∆**x** + O(‖∆**x**‖^{2})
= **F**(**x**) + cos θ ‖**F**′(**x**)‖ ‖∆**x**‖ + O(‖∆**x**‖^{2})

where **F**′(**x**) is the Jacobian at **x** and θ is the angle between **F**′(**x**) and ∆**x**. We see that for sufficiently small ‖∆**x**‖ the fastest way of decreasing **F** as a function of ∆**x** is to set

∆**x** = −**F**′(**x**) (2.3)

because then cos θ = cos π = −1. Using the descent direction in Equation (2.3) is called *gradient descent* or *steepest descent* and is the method we use to reduce our energy functional in Chapter 4. Since it is based on a first-order approximation of **F** it converges linearly, which may or may not be fast enough depending on the application or the particular function under consideration.
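As a concrete sketch, gradient descent with a small fixed step α can be applied to a convex quadratic; both the quadratic and the step size below are invented for illustration, not taken from the text.

```python
import numpy as np

def gradient_descent(grad_F, x0, alpha=0.1, eps=1e-10, max_iter=10000):
    # Steepest descent: step along the negative gradient, Equation (2.3),
    # with a small fixed step size alpha.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dx = -grad_F(x)                 # descent direction: delta_x = -F'(x)
        x_new = x + alpha * dx
        if np.linalg.norm(x_new - x) < eps:
            return x_new
        x = x_new
    return x

# Example: F(x) = x^T A x with symmetric positive definite A,
# so F'(x) = 2 A x and the unique minimizer is x = 0.
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
x_min = gradient_descent(lambda x: 2.0 * A @ x, x0=[1.0, -1.0])
```

The linear convergence is visible in practice: each iteration shrinks the error by a roughly constant factor determined by α and the eigenvalues of A.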

**2.1.2** **Newton’s method**

It is possible to do better than linear convergence if we bump our approximation of **F**(**x** + ∆**x**) to second order,

**F**(**x** + ∆**x**) = **F**(**x**) + **F**′(**x**)∆**x** + ½(∆**x**)^{T}**F**″(**x**)∆**x** + O(‖∆**x**‖^{3}),
where **F**″ is the Hessian. Now, let

**F̂**(∆**x**) ≝ **F**(**x**) + **F**′(**x**)∆**x** + ½(∆**x**)^{T}**F**″(**x**)∆**x**

be the truncated second-order expansion of **F** around **x** and consider sufficiently small ∆**x** such that **F̂** ≈ **F**. We are looking for the ∆**x** that decreases **F̂** the fastest. This minimizer can be found by examining the critical points of **F̂**,

0 = d**F̂**(∆**x**)/d(∆**x**) = **F**′(**x**) + **F**″(**x**)∆**x**,

which, solving for ∆**x**, gives

∆**x** = −**F**″(**x**)^{−1}**F**′(**x**). (2.4)
Using the descent direction in Equation (2.4) is called *Newton's method*. Assuming that the Hessian **F**″ is positive definite and that we are sufficiently close to **x**^{⋆}, Newton's method converges quadratically, which may be much faster than the linear convergence of gradient descent. The cost we pay for the improved convergence is that we now have to invert the Hessian.
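A minimal sketch of Newton's method follows; note that in practice one solves the linear system **F**″(**x**)∆**x** = −**F**′(**x**) rather than forming the inverse explicitly. The test function, its gradient, and its Hessian are a made-up smooth convex example, not from the text.

```python
import numpy as np

def newton(grad, hess, x0, tol=1e-12, max_iter=50):
    # Newton's method: repeatedly take the step of Equation (2.4),
    # computed by solving F''(x) dx = -F'(x) instead of inverting.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dx = np.linalg.solve(hess(x), -grad(x))
        x = x + dx                      # full step, i.e. alpha = 1
        if np.linalg.norm(dx) < tol:
            break
    return x

# Example: F(x, y) = x^2 + e^x + y^2, which is smooth and strictly convex.
grad = lambda v: np.array([2.0 * v[0] + np.exp(v[0]), 2.0 * v[1]])
hess = lambda v: np.array([[2.0 + np.exp(v[0]), 0.0],
                           [0.0,                2.0]])
x_min = newton(grad, hess, [0.0, 1.0])
```

Because the Hessian here is positive definite everywhere, the iteration converges to machine precision in only a handful of steps, illustrating the quadratic convergence.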

**2.1.3** **Gauss-Newton**

One downside of Newton's method is that we need second-order derivatives. These can be expensive to compute and numerically problematic depending on the smoothness of **F**. We can side-step this problem if **F** can be written as a sum of squares,

**F**(**x**) = ½**f**(**x**)^{T}**f**(**x**) (2.5)

for some vector-valued function **f**. In this case it can be shown that

**F**′(**x**) = **f**′(**x**)^{T}**f**(**x**) (2.6)
**F**″(**x**) = **f**′(**x**)^{T}**f**′(**x**) + **f**(**x**)^{T}**f**″(**x**) (2.7)
If we replace **f** in Equation (2.5) with its first-order Taylor expansion

**f̂**(∆**x**) ≝ **f**(**x**) + **f**′(**x**)∆**x**

we instead get

**F**′(**x**) = **f̂**′(**x**)^{T}**f̂**(**x**) (2.8)
**F**″(**x**) = **f̂**′(**x**)^{T}**f̂**′(**x**) (2.9)

Figure 2.1: Illustration of the wave number *k*, the angular frequency *ω*, and the amplitude |*U*_{0}| of a plane wave *e*^{i(kx−ωt)}.

where the only difference to Equation (2.7) is that the term involving the Hessian has dropped out. Finally, if we plug Equations (2.8) and (2.9) into Equation (2.4) we get the *Gauss-Newton* step

∆**x** = −(**f̂**′(**x**)^{T}**f̂**′(**x**))^{−1}**f̂**′(**x**)^{T}**f̂**(**x**).

The approximation we made by using **f̂** in place of **f** corresponds to a linearization of **f**(**x** + ∆**x**) about **x**, and in general we can no longer expect quadratic convergence. In practice, Gauss-Newton often performs much better than gradient descent and sometimes even comparably to Newton's method, especially if **f** is not *too* non-linear. We apply Gauss-Newton to our non-linear deformation energy in Chapter 3.
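A Gauss-Newton iteration can be sketched as follows. The exponential curve-fitting residual, the parameter names, and the starting point are illustrative assumptions; since this plain sketch takes full steps without a line search, the initial guess is chosen near the solution.

```python
import numpy as np

def gauss_newton(f, J, x0, tol=1e-10, max_iter=100):
    """Gauss-Newton for F(x) = 1/2 f(x)^T f(x).

    f : residual vector function, J : its Jacobian f'(x).
    Each iteration solves the normal equations (J^T J) dx = -J^T f,
    i.e. the Gauss-Newton step with the Hessian term dropped.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r, Jx = f(x), J(x)
        dx = np.linalg.solve(Jx.T @ Jx, -Jx.T @ r)
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Toy curve fit: residuals r_i = a * exp(b * t_i) - y_i, parameters x = (a, b).
t = np.linspace(0.0, 1.0, 10)
y = 2.0 * np.exp(0.5 * t)                       # synthetic, noise-free data
f = lambda x: x[0] * np.exp(x[1] * t) - y
J = lambda x: np.column_stack([np.exp(x[1] * t),
                               x[0] * t * np.exp(x[1] * t)])
params = gauss_newton(f, J, [2.5, 0.6])
```

On this zero-residual problem the dropped Hessian term in Equation (2.9) vanishes at the solution, so the sketch recovers the data-generating parameters essentially to machine precision.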