
The Stochastic Transport Equation with Singular Coefficients

by

EMIL HUSTER

THESIS for the degree of

MASTER OF SCIENCE

(Master's programme in Modelling and Data Analysis, specialization in Statistics and Data Analysis)

Faculty of Mathematics and Natural Sciences University of Oslo

December 2014



Abstract

This master thesis gives an extensive survey of fundamental concepts in stochastic analysis, such as Itô's stochastic calculus and Malliavin calculus. These tools form the framework for a numerical study of the solutions of the stochastic transport equation with singular coefficients. The method used in the numerical simulation is based on the concept of brackets of stochastic processes introduced in Eisenbaum [1].


Preface

The topic of this master thesis originated from my background in physics and my desire to delve into stochastic analysis. It is the product of my struggles to complete a Master of Science in statistics.

The topic I write about here is in itself an interesting one. It is well known that ordinary differential equations (ODEs) and partial differential equations (PDEs) in general do not have a unique solution if the coefficients are not Lipschitz continuous. In many cases there might not even be a solution at all for differential equations with such coefficients.

As an example, consider the ODE

dX_t/dt = b(t, X_t), X_0 = 0,

where b(t, x) = 2 sign(x)√|x| is non-Lipschitzian. It is straightforward to check that X_t = 0 is a solution for all t. One can also verify that X_t = ±t² are solutions. Therefore, the solution of the ODE above is not unique.

However, by perturbing the ODE above with a small noise one obtains the corresponding stochastic differential equation (SDE)

dX_t = b(t, X_t) dt + ϵ dB_t, X_0 = x,

where ϵ > 0 and B_t is a Brownian motion. This SDE is well-posed, meaning that it possesses a unique strong solution, no matter how small ϵ is, provided b is merely bounded and measurable. This striking result was first discovered by Zvonkin [7].
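To see this regularisation in practice, the SDE can be discretised with the Euler-Maruyama scheme. The following Python sketch is my own illustration (the function names and parameter choices are assumptions, not part of the thesis's numerical study), using the non-Lipschitz drift b(t, x) = 2 sign(x)√|x| from the ODE example above:

```python
import numpy as np

def euler_maruyama(b, x0, T, n_steps, eps, rng):
    """Simulate dX_t = b(t, X_t) dt + eps dB_t with the Euler-Maruyama scheme."""
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dB = rng.normal(0.0, np.sqrt(dt))        # Brownian increment ~ N(0, dt)
        x[k + 1] = x[k] + b(k * dt, x[k]) * dt + eps * dB
    return x

def b(t, x):
    # The non-Lipschitzian drift from the ODE example.
    return 2.0 * np.sign(x) * np.sqrt(abs(x))

rng = np.random.default_rng(0)
path = euler_maruyama(b, x0=0.0, T=1.0, n_steps=1000, eps=0.01, rng=rng)
```

With ϵ = 0 the scheme stays frozen at the degenerate solution X_t = 0, while any ϵ > 0 lets the noise push the path off it; this is the regularising effect in miniature.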

In the case of PDEs, however, the question whether adding noise has a regularising effect is difficult to answer, and only a few examples can be found in the literature. One such example is the linear transport equation perturbed by a Brownian motion. It is given by the stochastic partial differential equation (SPDE)

∂_t u(t, x) + b(t, x) ∂_x u(t, x) + ∂_x u(t, x) ∘ dB_t = 0,

u(0, x) = u_0(x),  (˚)

where the added noise is in the form of a Stratonovich term ∂_x u(t, x) ∘ dB_t and the coefficient b is bounded and measurable. This SPDE was studied in Mohammed et al. [5], where the authors show that it admits a unique Malliavin differentiable solution u under certain conditions on the initial data u_0.

The purpose of this master thesis is to present an extensive survey of the basic concepts of stochastic analysis and Malliavin calculus as means to study the SPDE (˚). In addition, this thesis contains a numerical study that involves the simulation of solutions to (˚) for non-Lipschitzian coefficients b, as well as a brief study of the convergence of the solutions u_ϵ as ϵ → 0 when ∂_x u(t, x) ∘ dB_t is replaced by ϵ ∂_x u(t, x) ∘ dB_t.

All definitions, lemmas, propositions and theorems in this thesis are obtained from my references. Hence, I have only provided complete proofs for some of the results, where the proofs are short. In addition, I have made sketches of the proofs of the more important theorems, whose complete proofs would be longer. In any case, the complete proofs can be found in my references or in the references therein.

Finally, I want to express my gratitude to my partner Iselin, who has been most patient in taking care of our newborn son Leon while I was occupied with this thesis; to Leon, who is such a happy and content little boy; to my mother Kaja, who provided valuable guidance on structuring this thesis and proofread it carefully; and last but not least to my supervisor Frank, who has patiently answered my naive questions and motivated me when I felt lost and confused.

Emil Huster December 2014


Contents

Preface

1 Introduction

2 Itô's Stochastic Calculus
  2.1 Basic Concepts
  2.2 Brownian Motion
  2.3 Itô Integrals
  2.4 Expanding the Class of Integrands
  2.5 Local Martingales
  2.6 The Itô Formula
  2.7 The Itô Representation Theorem
  2.8 The Martingale Representation Theorem
  2.9 Existence and Uniqueness of Stochastic Differential Equations
  2.10 Girsanov Theorem

3 Malliavin Calculus
  3.1 Wiener-Itô Chaos Expansion
  3.2 The Skorohod Integral
  3.3 The Malliavin Derivative
  3.4 The Clark-Ocone Formula

4 Application to the Stochastic Transport Equation
  4.1 Construction of Solutions
  4.2 Simulation of Solutions
  4.3 Closing Thoughts

Appendix


1 Introduction

Some research on the Internet reveals that the term “transport equation” refers to at least two related equations. Wikipedia uses it for the convection-diffusion equation, also known as the advection-diffusion or drift-diffusion equation, while other sites use it for what Wikipedia calls the advection equation.

The advection equation describes the transport of “something” in a fluid by the fluid's bulk motion. This is at least how the English article on Wikipedia.org describes advection: “a transport mechanism of a substance or conserved property by a fluid due to the fluid's bulk motion.” An example is the transport of pollutants in a river by the flow of bulk water. Advection therefore cannot occur in solids, since it requires currents in the fluid.

Advection is sometimes used as a synonym for convection, but according to some sources on the Internet convection is the combination of advective transport and diffusive transport.

In meteorology, advection describes the transport of heat, moisture and vorticity by the atmosphere or the ocean. Hence, it is an important concept in weather forecasting.

However, I have noticed that in the literature on random perturbations of, or added noise in, the advection equation, it is referred to as the transport equation. That is the term I will use here, too. It is in any case safe to say that the transport equation shows up in some form or another in many situations in physics where conservation of energy, mass, charge, etc. is involved.

During the last few decades, a great effort has been made to study the stochastic transport equation (STE); I refer to the papers by Mohammed et al. [5] and Flandoli et al. [2], and the references therein. More specifically, STEs with discontinuous coefficients, driven by noise such as Brownian motion, have been an important area of study. My guess is that the STE is studied so much because its solution is given by the unique strong solution of the stochastic differential equation (SDE)

dX_t^x = b(t, X_t^x) dt + dB_t, t ≥ 0, X_0^x = x.

This master thesis has the following structure:

In Section 2 I review most of the fundamental theory about stochastic calculus.

Section 3 is concerned with the basic theory and construction of the Malliavin calculus through the approach of Wiener-Itô chaos expansion.

Finally, in Section 4 I discuss some results presented in [5] concerning the existence and uniqueness of Malliavin differentiable solutions of the STE with singular coefficients. I also perform a numerical study of the STE with singular coefficients. The algorithm for the simulation is based on the concept of brackets of stochastic processes introduced in Eisenbaum [1].

Figure 1.1 gives a visual overview of the most important definitions and theorems in this thesis.


[Figure 1.1 is an overview diagram. It links the construction of Brownian motion (Def. 2.2.1, Theorems 2.2.5 and 2.2.7) to Itô calculus (Def. 2.3.4, Itô integrable functions; Def. 2.3.7, the Itô integral; Def. 2.4.1, the larger class of integrands; Theorem 2.5.3, the local martingale property; Theorem 2.6.2, the Itô formula; Theorem 2.7.2, the Itô representation; Theorem 2.9.1, existence and uniqueness; Theorem 2.10.2, Girsanov), to Malliavin calculus (Def. 3.1.4, the iterated Itô integral; Theorem 3.1.7, the chaos expansion; Def. 3.2.1, the Skorohod integral; Def. 3.3.1, the Malliavin derivative; Theorem 3.3.8, the fundamental theorem), and finally to the application (Theorems 4.1.2 and 4.1.4).]

Figure 1.1: An overview of the most important theorems in this thesis.


2 Itô’s Stochastic Calculus

In this section I will present the theory of Itô's stochastic calculus. It was developed by K. Itô in his 1944 paper and constitutes the basis for solving stochastic differential equations. One of its most notable applications is the Black-Scholes theory in finance.

I start by describing some basic concepts that are necessary for the following sections.

Then I look at the definition and construction of Brownian motion, which is necessary for the construction of the Itô integral. Further, I will discuss the Itô formula, the Itô representation theorem and the martingale representation theorem. I will present a theorem on existence and uniqueness of strong and weak solutions of stochastic differential equations. Finally, I will comment on the Girsanov theorem.

2.1 Basic Concepts

To start with, I give some basic definitions necessary for the further presentation. In the following, let (Ω, F, P) denote a complete probability space. That is, F is a σ-algebra on Ω and P : F → [0, 1] is a probability measure on (Ω, F); completeness means that F contains all subsets of Ω with P-outer measure zero. Let B denote the Borel σ-algebra on Ω, generated by U, the collection of all open subsets of Ω. The sets B ∈ B are called Borel sets.

Definition 2.1.1. A random variable is an F-measurable function X : Ω → ℝⁿ. The distribution of X is the probability measure µ_X induced by the random variable and defined by

µ_X(B) := P(X⁻¹(B)).

Definition 2.1.2. A stochastic process is a parameterised collection of random variables {X_t}_{t∈T} defined on a probability space (Ω, F, P) and assuming values in ℝⁿ.

The parameter space T in the definition above can be a closed or half-open interval on the real line, i.e. [a, b] or [0, ∞), or even a subset of ℝⁿ for n ≥ 1. For every fixed t ∈ T we have a random variable

ω ↦ X_t(ω), ω ∈ Ω.

Fixing ω ∈ Ω, on the other hand, gives the path of X_t,

t ↦ X_t(ω), t ∈ T.

Hence, the parameter t is usually interpreted as time, and X_t(ω) as the position of a particle ω at time t.

Kuo [4] defines X_t on the product space T × Ω and uses the notation X(t, ω). With this notation the process can be viewed as a function of two variables,

X(t, ω) : T × Ω → ℝⁿ.

This is often a convenient interpretation, since it is crucial in stochastic analysis that X(t, ω) be jointly measurable in (t, ω); see Øksendal [8].


2.2 Brownian Motion

The purpose of this section is to briefly treat the mathematical definition and construction of Brownian motion, a type of “noise” which, when added, turns a differential equation into a stochastic differential equation.

Brownian motion is named after the Scottish botanist Robert Brown, who in 1828 studied pollen grains suspended in water under a microscope. He observed that the pollen grains seemed to move around randomly. It later turned out that the motion was caused by the random bombardment of water molecules (see page 47 in the book by Karatzas and Shreve [3] and page 11 in the book by Øksendal [8]).

When contemplating the physics behind Brownian motion, it is reasonable that the displacement of the pollen particle is a result of the net force of all the water molecules colliding with it at any instant. Since there are very many water molecules colliding with the pollen particle all the time, the law of large numbers can be applied. Thus it is reasonable for the particle’s movement to be described by the normal distribution.

Since the water molecules surround the pollen particle (and assuming the drift is zero), the net force should not have any particular direction. The displacement of the particle from one instant to the next should be influenced by the time that has passed between the samplings of the position. Based on this reasoning one might conclude that the mean displacement is zero and that the variance is some function of the time between succeeding samples of the position. Moreover, the displacement over one interval should be independent of previous displacements. Finally, the particle cannot jump from one spot to another.

This motivates the following definition which is obtained from Kuo [4]:

Definition 2.2.1 (1-dimensional Brownian motion). A stochastic process {B_t}_{t≥0} on a probability space (Ω, F, P^x) is called Brownian motion (started at x) if it satisfies the following conditions:

1. P^x(ω : B_0(ω) = x) = 1.

2. For any 0 ≤ s < t, the random variable B_t − B_s is normally distributed with mean 0 and variance t − s, i.e. for a < b,

P^x(B_t − B_s ∈ [a, b]) = (2π(t − s))^{-1/2} ∫_a^b e^{-y²/(2(t−s))} dy.  (2.2.1)

3. B_t has independent increments, i.e. for any 0 ≤ t_1 < t_2 < … < t_k, the random variables

B_{t_1}, B_{t_2} − B_{t_1}, …, B_{t_k} − B_{t_{k−1}}  (2.2.2)

are independent.

4. The sample paths of B_t are continuous for almost all ω ∈ Ω, i.e.

P(ω : B(·, ω) is continuous) = 1.  (2.2.3)
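On a computer, a path satisfying conditions (1)-(3) can only be sampled on a finite grid, by accumulating independent N(0, Δt) increments. The following Python sketch is my own illustration (function names and parameters are assumptions, not the thesis's code):

```python
import numpy as np

def brownian_paths(n_paths, n_steps, T, x=0.0, seed=0):
    """Sample discretised Brownian paths started at x on [0, T].

    Increments over each step of length dt are independent N(0, dt) variables,
    matching conditions (2) and (3) of Definition 2.2.1.
    """
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    start = np.full((n_paths, 1), x)
    return np.concatenate([start, x + np.cumsum(dB, axis=1)], axis=1)

B = brownian_paths(n_paths=20000, n_steps=100, T=1.0)
# Empirically, B_1 should have mean close to 0 and variance close to 1.
```

Condition (4) is only visible in the limit of finer grids; the linear interpolation between grid points is of course continuous by construction.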


Following are some properties of Brownian motion. The first one is Proposition 2.2.1 in [4]:

Proposition 2.2.2. Let B_t be Brownian motion started at x = 0. For any t > 0, B_t is normally distributed with mean zero and variance t. For any s, t ≥ 0, E[B_t B_s] = min(s, t).

Proof. Since B_t = B_t − B_0, it follows from condition (2) in Definition 2.2.1 that E[B_t] = 0 and Var B_t = t. To show that E[B_s B_t] = min(s, t), assume first that s < t. Then, by the independence of increments,

E[B_s B_t] = E[B_s(B_t − B_s) + B_s²] = E[B_s]E[B_t − B_s] + E[B_s²] = 0 + Var B_s = s,  (2.2.4)

where I have used Var B_s = E[B_s²] − (E[B_s])² = E[B_s²]. The case t < s is similar.
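The covariance formula E[B_s B_t] = min(s, t) is easy to check by Monte Carlo. A hedged Python sketch (grid and sample sizes are my own choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n, steps, dt = 200000, 100, 0.01             # grid on [0, 1]
dB = rng.normal(0.0, np.sqrt(dt), size=(n, steps))
B = np.cumsum(dB, axis=1)                    # B[:, k] approximates B_{(k+1) dt}

s_idx, t_idx = 29, 69                        # s = 0.3, t = 0.7
cov = np.mean(B[:, s_idx] * B[:, t_idx])     # estimates E[B_s B_t]
# cov should be close to min(0.3, 0.7) = 0.3
```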

The next proposition is Proposition 2.2.3 in [4]:

Proposition 2.2.3 (Translation invariance). For fixed t_0 ≥ 0, the stochastic process B̃_t = B_{t+t_0} − B_{t_0} is also a Brownian motion.

Proof. Since B̃_0 = B_{t_0} − B_{t_0} = 0, property (1) is satisfied (with x = 0). Obviously, the same applies for property (4). For any s < t,

B̃_t − B̃_s = B_{t+t_0} − B_{s+t_0},  (2.2.5)

so by condition (2), B̃_t − B̃_s is normally distributed with mean zero and variance (t + t_0) − (s + t_0) = t − s. Hence B̃_t satisfies condition (2).

Finally, for any 0 ≤ t_1 < t_2 < … < t_n, we have t_0 ≤ t_1 + t_0 < t_2 + t_0 < … < t_n + t_0. By condition (3), the random variables B_{t_k + t_0} − B_{t_{k−1} + t_0}, k = 1, 2, …, n, are independent. Hence, by Equation (2.2.5), the random variables B̃_{t_k} − B̃_{t_{k−1}} are independent, and so B̃_t satisfies condition (3) of Definition 2.2.1.

The last proposition before I present the construction of Brownian motion is Proposition 2.2.4 in [4]:

Proposition 2.2.4 (Scaling invariance). For any real number λ > 0, the stochastic process B̃_t = B_{λt}/√λ is also a Brownian motion.

Proof. Conditions (1), (3) and (4) of Definition 2.2.1 are easily verified for the stochastic process B̃_t. Regarding condition (2), note that for any s < t,

B̃_t − B̃_s = (1/√λ)(B_{λt} − B_{λs}),  (2.2.6)

so B̃_t − B̃_s is normally distributed with mean zero and variance λ⁻¹(λt − λs) = t − s. Hence, B̃_t satisfies condition (2).
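Scaling invariance can likewise be observed numerically: B_{λt}/√λ should have variance t at time t. A small Python sketch (my own illustration, with assumed parameters):

```python
import numpy as np

rng = np.random.default_rng(2)
lam, T, steps, n = 4.0, 1.0, 400, 100000
dt = lam * T / steps                         # simulate B on [0, lam * T]
B = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n, steps)), axis=1)

B_scaled = B[:, -1] / np.sqrt(lam)           # samples of B~_T = B_{lam T} / sqrt(lam)
# Var(B~_T) should be close to T = 1, as condition (2) requires.
```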


Construction of Brownian Motion

There are at least three methods for constructing Brownian motion. The first is due to N. Wiener, the second is based on Kolmogorov's extension and continuity theorems, and the third is due to P. Lévy. In the following, I will give an account of the second method, using Kolmogorov's theorems.

I will explain the construction of 1-dimensional Brownian motion; the n-dimensional construction is not very different and can be found in, for instance, [8].

I start as in Section 2.2, page 49, of the book by Karatzas and Shreve [3]: Let R^T denote the set of all real-valued functions on T = [0, ∞). If F ∈ B(ℝⁿ), the Borel σ-algebra on ℝⁿ, then a set of the form

{ω ∈ R^T : (ω(t_1), ω(t_2), …, ω(t_n)) ∈ F},  (2.2.7)

where t_i ∈ T, i = 1, …, n, is called an n-dimensional cylinder set in R^T. I will denote by B(R^T) the smallest σ-algebra containing all finite-dimensional cylinder sets (2.2.7) in R^T.

Next, the finite-dimensional distributions of a stochastic process {X_t}_{t∈T} on a probability space (R^T, B(R^T), P) are defined as the measures

µ_{t_1,…,t_k}(F_1 × ⋯ × F_k) = P(X_{t_1} ∈ F_1, …, X_{t_k} ∈ F_k),  (2.2.8)

where F_1, …, F_k ∈ B(ℝⁿ). See Øksendal [8, page 11].

The family of probability measures {µ_{t_1,…,t_n}}_{n≥1, t_i∈T} obtained from different sequences (t_1, t_2, …, t_n), t_i ∈ T, n ≥ 1, is said to be consistent if the following two conditions are satisfied (see [4, 8]). Let F_1, F_2, …, F_n ∈ B(ℝⁿ).

1. For all t_1, t_2, …, t_n ∈ T, n ≥ 1, and all permutations σ on {1, 2, …, n},

µ_{t_{σ(1)},…,t_{σ(n)}}(F_1 × ⋯ × F_n) = µ_{t_1,…,t_n}(F_{σ⁻¹(1)} × ⋯ × F_{σ⁻¹(n)}).  (2.2.9)

2. For all m ≥ 1,

µ_{t_1,…,t_n}(F_1 × ⋯ × F_n) = µ_{t_1,…,t_n,t_{n+1},…,t_{n+m}}(F_1 × ⋯ × F_n × ℝ × ⋯ × ℝ),  (2.2.10)

where the product on the right-hand side has a total of n + m factors.

Referring to page 50 in Karatzas and Shreve [3], a consistent family of finite-dimensional distributions can be constructed from a given probability measure on (R^T, B(R^T)). The next theorem shows the converse, i.e. that a probability measure on (R^T, B(R^T)) can be constructed from a consistent family of finite-dimensional distributions. This provides the tools for constructing Brownian motion.

Theorem 2.2.5 (Kolmogorov's extension theorem). Suppose {µ_{t_1,…,t_n}}_{n≥1, t_i∈T} is a consistent family of finite-dimensional distributions. Then there exists a probability space (R^T, B(R^T), P) and a stochastic process {X_t} on it such that

µ_{t_1,…,t_k}(F_1 × ⋯ × F_k) = P(X_{t_1} ∈ F_1, …, X_{t_k} ∈ F_k),  (2.2.11)

for all t_i ∈ T, k ≥ 1, F_i ∈ B(ℝⁿ).


For the proof of this theorem see e.g. [8] and the references therein.

To wrap up the first part of the construction of Brownian motion, the following is my interpretation of [8, 3, 4].

Define the stochastic process

B_t(ω) := ω(t), t ≥ 0, ω ∈ R^T,

on the probability space (R^T, B(R^T), P).

Then define the Gaussian kernel

p(t, x, y) := (2πt)^{-1/2} exp(−(x − y)²/(2t)), x, y ∈ ℝ, t > 0.  (2.2.12)

Whenever 0 ≤ t_1 ≤ ⋯ ≤ t_n, let µ_{t_1,…,t_n}(F_1 × ⋯ × F_n) be the finite-dimensional distribution of B_t on ℝⁿ, given by

µ_{t_1,…,t_n}(F_1 × ⋯ × F_n) = ∫_{F_1×⋯×F_n} p(t_1, 0, x_1) p(t_2 − t_1, x_1, x_2) ⋯ p(t_n − t_{n−1}, x_{n−1}, x_n) dx_1 ⋯ dx_n.  (2.2.13)

Here I make use of the convention that p(0, x, y) dy = δ_x(y), the unit point mass at x (see [8, 4]).

By using Equation (2.2.9) to extend this definition to all finite sequences t_1, …, t_n, and noting that Equation (2.2.10) holds since ∫_ℝ p(t, x, y) dy = 1 for all t ≥ 0, it follows by Kolmogorov's extension theorem that there exists a probability measure P on (R^T, B(R^T)) such that Equation (2.2.8) holds for the finite-dimensional distributions µ_{t_1,…,t_n} of B_t.

The only thing that remains is to show property (4) in Definition 2.2.1, i.e. that the sample paths are continuous. Following [8], I start with the next definition.

Definition 2.2.6. Suppose {X_t} and {Y_t} are stochastic processes on (Ω, F, P). Then we say that {X_t} is a version of {Y_t} if

P({ω : X_t(ω) = Y_t(ω)}) = 1 for all t.  (2.2.14)

If X_t is a version of Y_t, then X_t and Y_t have the same finite-dimensional distributions.

Continuity can now be shown with the aid of Kolmogorov's continuity theorem, as stated in [8]:

Theorem 2.2.7 (Kolmogorov's continuity theorem). Suppose that the process X = {X_t}_{t≥0} satisfies the following condition: for all T > 0 there exist positive constants α, β, D such that

E[|X_t − X_s|^α] ≤ D|t − s|^{1+β}, 0 ≤ s < t ≤ T.  (2.2.15)

Then there exists a continuous version of X.

With α = 4, β = 1 and D = 3, Equation (2.2.15) holds for Brownian motion, since E[|B_t − B_s|⁴] = 3|t − s|² for Gaussian increments. We conclude that Brownian motion, with continuous sample paths, exists.
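The moment bound invoked here, E[|B_t − B_s|⁴] = 3|t − s|², is the Gaussian fourth-moment formula E[Z⁴] = 3σ⁴ applied to the increment. A quick Monte Carlo sanity check in Python (my own sketch, with assumed sample sizes):

```python
import numpy as np

rng = np.random.default_rng(3)
h = 0.5                                      # the time gap t - s
incr = rng.normal(0.0, np.sqrt(h), size=500000)
m4 = np.mean(incr ** 4)                      # estimates E[|B_t - B_s|^4]
# m4 should be close to 3 * h**2 = 0.75, matching alpha = 4, beta = 1, D = 3.
```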

Having established the existence of Brownian motion as defined in Definition 2.2.1, it is now safe to construct an integral involving Brownian motion as the integrator.


2.3 Itô Integrals

In this section I will outline how to define the Itô integral

∫_a^b f(t, ω) dB_t(ω)  (2.3.1)

for 0 ≤ a < b. With this integral defined, we are one step closer to solving SDEs involving Brownian motion.

The procedure for constructing the Itô integral resembles the constructions of the Riemann-Stieltjes integral and the Lebesgue integral. In order to develop integration with respect to Brownian motion, one first defines the integral of certain simple functions in a Riemann-Stieltjes sense. Then one extends the definition to more general integrands by an approximation procedure, as in the Lebesgue recipe. Central to the approximation procedure is the Itô isometry.

As pointed out on page 37 in [4], one motivation behind Itô's theory of stochastic integration is to obtain a direct method for constructing diffusion processes as solutions of SDEs. However, it can also be motivated from the viewpoint of martingales.

As mentioned above, the first thing to do is to define the Itô integral for a class of simple functions. Assume that ϕ is a step function of the form

ϕ(t, ω) = ∑_{k≥0} e_k(ω) 1_{[t_k, t_{k+1})}(t),  (2.3.2)

where 1 denotes the indicator function and the partition points are

t_k = t_k^{(n)} = k/2ⁿ if a ≤ k/2ⁿ ≤ b, t_k = a if k/2ⁿ < a, and t_k = b if k/2ⁿ > b.

Subsequently, it is reasonable to define the integral of a step function (2.3.2) in the Riemann-Stieltjes sense,

∫_a^b ϕ(t, ω) dB_t(ω) := ∑_{k≥0} e_k(ω)[B_{t_{k+1}} − B_{t_k}].  (2.3.3)

However, the functions e_k must fulfil some conditions in order for the integral (2.3.3) to be unambiguous. This is seen in [8, Example 3.1.1], where the computation of the integral (2.3.3) for two simple functions (2.3.2), with e_k(ω) = B_{t_k}(ω) in one case and e_k(ω) = B_{t_{k+1}}(ω) in the other, shows that the choice of evaluation point t_k* ∈ [t_k, t_{k+1}] in the approximation

∑_k f(t_k*, ω)[B_{t_{k+1}} − B_{t_k}](ω)

is not arbitrary. In fact, Theorem 4.1.2 in [4] shows that the quadratic variation of Brownian motion, defined as the limit of the sums

∑_{k=0}^{n−1} (B_{t_{k+1}} − B_{t_k})²,

is nonzero.
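The nonzero quadratic variation is easy to observe numerically: over [0, T] the sums of squared increments concentrate at T rather than at 0. A Python sketch (my own illustration; the grid size is an assumption):

```python
import numpy as np

rng = np.random.default_rng(4)
T, steps = 1.0, 100000
dB = rng.normal(0.0, np.sqrt(T / steps), size=steps)   # Brownian increments
qv = np.sum(dB ** 2)                                   # quadratic variation sum
# qv should be close to T = 1; for a differentiable path this sum would vanish
# as the partition is refined.
```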


There are two common choices for the evaluation points t_k* mentioned above:

1. t_k* = t_k, i.e. the left endpoint, which leads to the Itô integral ∫_a^b f(t, ω) dB_t(ω), and

2. t_k* = (t_k + t_{k+1})/2, i.e. the midpoint, which leads to the Stratonovich integral ∫_a^b f(t, ω) ∘ dB_t(ω).

In this thesis I will use t_k* = t_k. This choice has the important consequence that the Itô integral is a martingale, see Definition 2.3.3.
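The difference between the two choices can be made concrete for the integrand f = B itself: the left-point sums converge to (B_T² − T)/2, while the midpoint sums converge to B_T²/2. A Python sketch of both (my own illustration; the fine grid supplies the midpoint values):

```python
import numpy as np

rng = np.random.default_rng(5)
T, n = 1.0, 100000                           # n coarse intervals, 2n fine steps
dt = T / (2 * n)
B = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), size=2 * n))])

left = B[0:-2:2]                             # B at left endpoints t_k
mid = B[1::2]                                # B at midpoints (t_k + t_{k+1}) / 2
incr = B[2::2] - B[0:-2:2]                   # increments B_{t_{k+1}} - B_{t_k}

ito = np.sum(left * incr)                    # left-point (Ito) sum
strat = np.sum(mid * incr)                   # midpoint (Stratonovich) sum
BT = B[-1]
# ito is close to (BT**2 - T)/2 and strat to BT**2/2; they differ by about T/2.
```

The gap of roughly T/2 between the two sums is exactly half the quadratic variation, which is why the evaluation point matters for a Brownian integrator.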

In order to be able to arrive at an unambiguous definition of the stochastic integral, the integrand needs to be part of a special class of functions. I will, however, need to define filtrations and adapted processes first.

Definition 2.3.1 (Filtration). A filtration on [0, T], T > 0, is an increasing (non-decreasing) family {F_t : t ≥ 0} of sub-σ-algebras of F, such that F_s ⊂ F_t ⊂ F for 0 ≤ s < t < ∞. The filtration F_t^X := σ{X_s : 0 ≤ s ≤ t} generated by a stochastic process X_s is the smallest σ-algebra with respect to which X_s is measurable for every s ∈ [0, t]. A filtration is said to be right continuous if

F_t = ∩_{n>0} F_{t+1/n}, for all t ∈ [a, b).

Definition 2.3.2 (Adapted process). Let {N_t}_{t≥0} be an increasing family of σ-algebras of subsets of Ω. A process g(t, ω) : [0, ∞) × Ω → ℝⁿ is called N_t-adapted if for each t ≥ 0 the function ω ↦ g(t, ω) is N_t-measurable.

Note that every process X_t is adapted to the filtration {F_t^X}.

The next definition is not needed for the construction of the Itô integral, but I still list it here for future reference.

Definition 2.3.3 (Martingale). A stochastic process {X_t}_{t≥0} is called a martingale with respect to the filtration {F_t} if

1. X_t is F_t-measurable for all t,

2. E[|X_t|] < ∞ for all t, and

3. E[X_t | F_s] = X_s for all s ≤ t.


In the following, let the Brownian motion B_t(ω) be fixed, and let

F_t = σ{B_s : 0 ≤ s ≤ t}

be the completed σ-algebra generated by {B_s}_{0≤s≤t}. Then B_t is adapted to the filtration {F_t : t ∈ [0, T]}.

I will now present the three steps involved in the construction of the Itô integral for the following class of functions:

Definition 2.3.4. Let L²_ad([a, b] × Ω) be the class of functions f(t, ω) : [0, ∞) × Ω → ℝ such that

1. (t, ω) ↦ f(t, ω) is B × F-measurable, where B denotes the Borel σ-algebra on [0, ∞),

2. f(t, ω) is F_t-adapted, and

3. E[∫_a^b f(t, ω)² dt] < ∞.

The first step is to construct the Itô integral for step functions f ∈ L²_ad([a, b] × Ω) of the form (2.3.2), i.e.

f(t, ω) = ∑_{k≥0} ξ_k(ω) 1_{[t_k, t_{k+1})}(t),

where ξ_k is F_{t_k}-measurable and E[ξ_k²] < ∞. For such f I define

I(f) := ∑_{k≥0} ξ_k(ω)[B_{t_{k+1}} − B_{t_k}](ω).

The following lemma is important for the upcoming steps:

Lemma 2.3.5 (Itô isometry).

E[I(f)²] = ∫_a^b E[f²(t, ω)] dt.  (2.3.4)

Proof. For the proof, see e.g. [8, 4].
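The isometry can be sanity-checked numerically for the integrand f(t, ω) = B_t(ω), for which ∫_0^T E[f²] dt = ∫_0^T t dt = T²/2. A Python sketch (my own; sample sizes are assumptions):

```python
import numpy as np

rng = np.random.default_rng(6)
T, steps, n_paths = 1.0, 1000, 20000
dt = T / steps
dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, steps))
B = np.cumsum(dB, axis=1)
B_left = np.concatenate([np.zeros((n_paths, 1)), B[:, :-1]], axis=1)  # left endpoints

I = np.sum(B_left * dB, axis=1)              # Ito sums approximating int_0^T B_t dB_t
lhs = np.mean(I ** 2)                        # Monte Carlo estimate of E[I(f)^2]
rhs = T ** 2 / 2                             # int_0^T E[B_t^2] dt
# lhs should be close to rhs = 0.5
```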

To be able to define the Itô integral for general stochastic processes f ∈ L²_ad([a, b] × Ω), a lemma is needed. This is the second step.

Lemma 2.3.6. Suppose f ∈ L²_ad([a, b] × Ω). Then there exists a sequence {f_n(t) : n ≥ 0} of elementary stochastic processes in L²_ad([a, b] × Ω) such that

lim_{n→∞} ∫_a^b E[|f(t) − f_n(t)|²] dt = 0.  (2.3.5)


Proof. See Lemma 4.3.3 in [4].

The final step is to use Lemma 2.3.5 and Lemma 2.3.6 to define the Itô integral for general f ∈ L²_ad([a, b] × Ω). See page 47 in [4] for details.

Definition 2.3.7 (The Itô integral). Let f ∈ L²_ad([a, b] × Ω). Then the Itô integral of f over the interval [a, b] is defined by

∫_a^b f(t, ω) dB_t(ω) := lim_{n→∞} ∫_a^b ϕ_n(t, ω) dB_t(ω),  (2.3.6)

where the limit is in L²(P) and {ϕ_n} is a sequence of step functions such that

E[∫_a^b (f(t, ω) − ϕ_n(t, ω))² dt] → 0 as n → ∞.  (2.3.7)

As I mentioned in the beginning of this section, the Itô integral is a martingale. This is stated in the following theorem.

Theorem 2.3.8 (Martingale property). Suppose f ∈ L²_ad([a, b] × Ω). Then the stochastic process

X_t = ∫_a^t f(s, ω) dB_s, a ≤ t ≤ b,  (2.3.8)

is a martingale with respect to the filtration {F_t : a ≤ t ≤ b}.

Proof. See Theorem 4.6.1 in Kuo [4].

The next important step is to confirm that the Itô integral has continuous sample paths.

Theorem 2.3.9 (Doob's martingale inequality). Let X_t, a ≤ t ≤ b, be a right-continuous martingale. Then for any ϵ > 0,

P(sup_{a≤t≤b} |X_t| ≥ ϵ) ≤ (1/ϵ) E[|X_b|].  (2.3.9)

Proof. The proof can be found in Theorem 4.5.1 in Kuo [4].

Theorem 2.3.10 (Continuity). Suppose f ∈ L²_ad([a, b] × Ω). Then the stochastic process

X_t = ∫_a^t f(s, ω) dB_s, a ≤ t ≤ b,

is continuous; that is, almost all of its sample paths are continuous functions on the interval [a, b].

Proof. See Theorem 4.6.2 in Kuo [4].

This section has established the Itô stochastic integral for a particular class of integrands. It is not a large class, but its members ensure that the Itô integral has the convenient martingale property.


2.4 Expanding the Class of Integrands

I will now show how to extend the class of integrands L²_ad([a, b] × Ω) to a larger class, defined as follows:

Definition 2.4.1. Let L²([a, b], Ω) denote the class of functions f(t, ω) fulfilling the following conditions:

1. f(t) is adapted to the filtration {F_t}, and

2. ∫_a^b |f(t)|² dt < ∞ almost surely.

As it is pointed out in [4, Section 5.1]: “Condition (2) means that almost all sample paths are functions in the Hilbert space L²[a, b]. Hence the map ω ↦ f(·, ω) is a measurable function from Ω into L²[a, b].”

To see that L²_ad([a, b] × Ω) is included in L²([a, b], Ω), recall that a function f(t, ω) is in L²_ad([a, b] × Ω) if it is {F_t}-adapted and ∫_a^b E(|f(t)|²) dt < ∞. But by the Fubini theorem,

E[∫_a^b |f(t)|² dt] = ∫_a^b E(|f(t)|²) dt < ∞,

so ∫_a^b |f(t)|² dt < ∞ almost surely.

The extension requires a couple of lemmas which I have copied from Kuo [4].

Lemma 2.4.2. Let f ∈ L²([a, b], Ω). Then there exists a sequence {f_n} in L²([a, b], Ω) such that

lim_{n→∞} ∫_a^b |f(t) − f_n(t)|² dt = 0

almost surely, and hence also in probability.

Proof. See [4, Lemma 5.1.5].

The next lemma is “a key lemma” for achieving the goal of this section.

Lemma 2.4.3. Let f(t) be a simple stochastic process in L²_ad([a, b] × Ω). Then the inequality

P(|∫_a^b f(t) dB(t)| > ϵ) ≤ C/ϵ² + P(∫_a^b |f(t)|² dt > C)

holds for any positive constants ϵ and C.

Proof. The proof can be found in Lemma 5.2.2 in [4].

Then an approximation lemma is also needed.

Lemma 2.4.4. Let f ∈ L²([a, b], Ω). Then there exists a sequence {f_n(t)} of simple stochastic processes in L²_ad([a, b] × Ω) such that

lim_{n→∞} ∫_a^b |f(t) − f_n(t)|² dt = 0,  (2.4.1)

in probability.


Proof. See Lemma 5.3.1. in [4].

The tools to define

∫_a^b f(t) dB(t), f ∈ L²([a, b], Ω),

are now at hand. By Lemma 2.4.4 there is a sequence {f_n(t)} of simple stochastic processes in L²_ad([a, b] × Ω) such that

lim_{n→∞} ∫_a^b |f(t) − f_n(t)|² dt = 0, in probability.

Hence, for every n, the stochastic integral

I(f_n) = ∫_a^b f_n(t) dB(t)

is defined by Definition 2.3.7. Applying Lemma 2.4.3 to f_n − f_m with C = ϵ³/2 gives

P(|I(f_n) − I(f_m)| > ϵ) ≤ ϵ/2 + P(∫_a^b |f_n(t) − f_m(t)|² dt > ϵ³/2).  (2.4.2)

The second term satisfies, according to [4], the inequality

P(∫_a^b |f_n(t) − f_m(t)|² dt > ϵ³/2) ≤ P(∫_a^b |f(t) − f_n(t)|² dt > ϵ³/8) + P(∫_a^b |f(t) − f_m(t)|² dt > ϵ³/8),  (2.4.3)

by applying the inequality |u + v|² ≤ 2(|u|² + |v|²). Using Lemma 2.4.4 on the above inequality, we obtain that

lim_{n,m→∞} P(∫_a^b |f_n(t) − f_m(t)|² dt > ϵ³/2) = 0.

Consequently, there is an N ≥ 1 such that

P(∫_a^b |f_n(t) − f_m(t)|² dt > ϵ³/2) < ϵ/2

for all n, m ≥ N. Combining this with Equation (2.4.2), we see that

P(|I(f_n) − I(f_m)| > ϵ) < ϵ for all n, m ≥ N.

This means that the sequence {I(f_n)}_{n=1}^∞ is Cauchy, and hence converges, in probability, and we can define

∫_a^b f(t) dB(t) := lim_{n→∞} I(f_n), in probability.


By checking that the limit is independent of the choice of the sequence {f_n}_{n=1}^∞, one finds that the stochastic integral above is well-defined. In conclusion, we have defined the stochastic integral

∫_a^b f(t) dB(t), f ∈ L²([a, b], Ω).

In general, however, the process

X_t = ∫_a^t f(s, ω) dB_s(ω), f ∈ L²([a, b], Ω),

will no longer be a martingale, since its expectation is not necessarily finite.

2.5 Local Martingales

The purpose of this section is to provide the means to mend the loss of the martingale property. The first necessary step is to define stopping times.

Definition 2.5.1 (Stopping time). A random variable τ : Ω → [a, b] is called a stopping time with respect to a filtration {F_t : a ≤ t ≤ b} if

{ω : τ(ω) ≤ t} ∈ F_t for all t ∈ [a, b].

Consider now the process

X_t = ∫_a^t f(s, ω) dB_s(ω) = ∫_a^b 1_{[a,t]}(s) f(s, ω) dB_s(ω), a ≤ t ≤ b,  (2.5.1)

where the integrand belongs to L²([a, b], Ω) for any t ∈ [a, b].

For each n, define the stochastic process f_n by

f_n(t, ω) = f(t, ω) if ∫_a^t |f(s, ω)|² ds ≤ n, and f_n(t, ω) = 0 otherwise,

and the random variable τ_n by

τ_n(ω) = inf{t : ∫_a^t |f(s, ω)|² ds > n} if this set is nonempty, and τ_n(ω) = b otherwise.

This random variable is a stopping time for each n by [4, Example 5.4.3]. By substituting t in Equation (2.5.1) with t ∧ τ_n = min(t, τ_n) and realising that

1_{[a, t∧τ_n(ω)]}(s) f(s, ω) = 1_{[a,t]}(s) f_n(s, ω)

for almost all ω, we deduce that f_n ∈ L²_ad([a, b] × Ω). By Theorem 2.3.8 the stochastic process X_{t∧τ_n} is a martingale.

Additionally, the sequence {τ_n} is monotonically increasing, and τ_n → b almost surely as n → ∞.
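On a grid, the truncated process f_n and the stopping time τ_n are straightforward to implement. The Python sketch below is my own illustration (the sample path f is an arbitrary toy choice, not from the thesis); it also checks the indicator identity used above, up to one boundary grid point:

```python
import numpy as np

rng = np.random.default_rng(8)
T, steps = 1.0, 1000
dt = T / steps
t = np.linspace(0.0, T, steps + 1)
# An arbitrary positive sample path f(s, omega) on the grid (toy choice).
f = np.exp(np.cumsum(rng.normal(0.0, np.sqrt(dt), size=steps + 1)))

running = np.cumsum(f ** 2) * dt             # approximates int_0^t |f(s)|^2 ds
n = 2.0
exceed = np.nonzero(running > n)[0]
tau_n = t[exceed[0]] if exceed.size else T   # stopping time tau_n (level n)
f_n = np.where(running <= n, f, 0.0)         # truncated process f_n

# The identity 1_{[0, t ^ tau_n]} f = 1_{[0, t]} f_n (here with t = T) holds on
# the grid except possibly at the single point where the level n is first crossed.
lhs = np.where(t <= tau_n, f, 0.0)
```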


Definition 2.5.2 (Local martingale). An {F_t}-adapted stochastic process X_t, a ≤ t ≤ b, is called a local martingale with respect to {F_t} if there exists a sequence of stopping times {τ_n}_{n=1}^∞ such that

1. τ_n increases monotonically to b almost surely as n → ∞, and

2. for each n, X_{t∧τ_n} is a martingale with respect to {F_t : a ≤ t ≤ b}.

The next theorem is stated without proof.

Theorem 2.5.3. Let f ∈ L²([a, b], Ω). Then the stochastic process

X_t = ∫_a^t f(s, ω) dB_s(ω), a ≤ t ≤ b,  (2.5.2)

is a local martingale with respect to the filtration {F_t : a ≤ t ≤ b}.

The stochastic process (2.5.2) can be shown to have continuous paths. I refer to Theorem 5.5.5 in [4] for details.

2.6 The Itô Formula

Definition 2.3.7 is not very practical when evaluating stochastic integrals. The Itô for- mula is therefore an important tool to lighten this task. It can be regarded as corre- sponding to the chain rule in ordinary calculus.

I will present the general Itô formula as found in [8, 4]. For a more step-wise approach, I can recommend Section 7.1.-7.3. in [4].

To start off, I will use L_ad(Ω, L¹[a, b]) to denote the class of {F_t}-adapted stochastic processes f(t, ω) that satisfy
\[
\int_a^b |f(t, \omega)|\,dt < \infty \quad \text{almost surely.}
\]

Definition 2.6.1 (Itô process in 1 dimension). Let B_t be a 1-dimensional Brownian motion on (Ω, F, P). A 1-dimensional Itô process is a stochastic process X_t on (Ω, F, P) of the form
\[
X_t = X_a + \int_a^t u(s, \omega)\,ds + \int_a^t v(s, \omega)\,dB_s, \quad a \le t \le b, \qquad (2.6.1)
\]
where u ∈ L_ad(Ω, L¹[a, b]) and v ∈ L²([a, b], Ω).

A convenient shorthand for Equation (2.6.1) is
\[
dX_t = u(t, \omega)\,dt + v(t, \omega)\,dB_t.
\]
It should be stressed that this is just a symbolic way of writing (2.6.1), since Brownian paths are nowhere differentiable.

Next I turn to stating the general Itô formula in 1-dimension.


Theorem 2.6.2 (1-dimensional Itô formula). Let X_t be an Itô process of the form
\[
dX_t = u\,dt + v\,dB_t.
\]
Let g(t, x) ∈ C²([0, ∞) × R). Then
\[
Y_t = g(t, X_t)
\]
is again an Itô process, and
\[
dY_t = \frac{\partial g}{\partial t}(t, X_t)\,dt + \frac{\partial g}{\partial x}(t, X_t)\,dX_t + \frac{1}{2}\frac{\partial^2 g}{\partial x^2}(t, X_t)\,(dX_t)^2, \qquad (2.6.2)
\]
where (dX_t)² = (dX_t) · (dX_t) is computed according to the rules
\[
dt \cdot dt = dt \cdot dB_t = 0, \qquad dB_t \cdot dB_t = dt. \qquad (2.6.3)
\]
Proof. A sketch of a proof can be found on page 46 in [8].
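As a quick sanity check (an illustration of my own, not part of the thesis), applying the Itô formula with g(t, x) = x² to X_t = B_t and the rules (2.6.3) gives d(B_t²) = dt + 2B_t dB_t, i.e. B_T² = T + 2∫_0^T B_s dB_s. The Python sketch below verifies this pathwise, approximating the stochastic integral by a left-endpoint Riemann sum:

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 100_000, 1.0
dt = T / n
dB = rng.normal(0.0, np.sqrt(dt), n)        # Brownian increments
B = np.concatenate(([0.0], np.cumsum(dB)))  # path with B_0 = 0

lhs = B[-1] ** 2                            # g(B_T) with g(x) = x^2
rhs = T + 2.0 * np.sum(B[:-1] * dB)         # T + 2 * int_0^T B_s dB_s (Ito left-point sum)

print(abs(lhs - rhs))                       # small discretization error
```

Note that the left endpoint B_{t_i} in the sum is essential: evaluating the integrand at the right endpoint would converge to the Stratonovich integral instead.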

2.7 The Itô Representation Theorem

In this section I will give a result named the Itô representation theorem, which will be important for proving the Wiener-Itô chaos expansion Theorem 3.1.7. The following lemma is needed in order to prove the Itô representation theorem. The proof of the lemma can be found on pages 50-51 in [8].

Lemma 2.7.1. The linear span of random variables of the type
\[
\exp\left\{ \int_0^T h(t)\,dB_t - \frac{1}{2}\int_0^T h^2(t)\,dt \right\}, \quad h \in L^2([0, T]), \qquad (2.7.1)
\]
is dense in L²(F_T, P).

Theorem 2.7.2 (The Itô representation theorem). Let F ∈ L²(F_T, P). Then there exists a unique stochastic process f(t, ω) ∈ L²_ad([0, T] × Ω) such that
\[
F(\omega) = E[F] + \int_0^T f(t, \omega)\,dB(t). \qquad (2.7.2)
\]

Sketch of proof: The idea is to show that Equation (2.7.2) holds for F of the form (2.7.1) and then to use Lemma 2.7.1 to approximate an arbitrary F ∈ L²(F_T, P) with linear combinations of functions F_n of the form (2.7.1).

Let
\[
Y_t(\omega) = \exp\left\{ \int_0^t h(s)\,dB_s - \frac{1}{2}\int_0^t h^2(s)\,ds \right\}, \quad 0 \le t \le T,
\]
and use Itô's formula to show that (2.7.2) holds for F = Y_T. By using the Itô isometry, it can be shown that F = \lim_{n \to \infty} F_n, where the limit is taken in L²(F_T, P).

Uniqueness also follows from the Itô isometry.
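As an illustration of the representation (my own sketch, not from the thesis), take h ≡ 1 in the proof sketch, so Y_t = exp(B_t − t/2) and Itô's formula gives dY_t = Y_t dB_t. Then F = Y_T satisfies E[F] = 1 and F = 1 + ∫_0^T Y_t dB_t, which the following Python sketch checks on a simulated path:

```python
import numpy as np

rng = np.random.default_rng(1)
n, T = 100_000, 1.0
dt = T / n
t = np.linspace(0.0, T, n + 1)
dB = rng.normal(0.0, np.sqrt(dt), n)
B = np.concatenate(([0.0], np.cumsum(dB)))

Y = np.exp(B - 0.5 * t)                     # Y_t = exp(B_t - t/2), i.e. h = 1
F = Y[-1]                                   # F = Y_T, with E[F] = 1
representation = 1.0 + np.sum(Y[:-1] * dB)  # E[F] + int_0^T Y_t dB_t (left-point sum)

print(abs(F - representation))              # small discretization error
```

Here the integrand of the representation is explicit, f(t, ω) = Y_t; for a general F ∈ L²(F_T, P) the theorem only asserts existence and uniqueness of f.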


2.8 The Martingale Representation Theorem

The next theorem is useful in mathematical finance. The proof found in [8, Theorem 4.3.4] is based on the Itô representation theorem.

Theorem 2.8.1 (The martingale representation theorem). Let B(t) = (B_1(t), . . . , B_n(t)) be an n-dimensional Brownian motion. Suppose M_t is an F_t-martingale w.r.t. P and that M_t ∈ L²(P) for all t ≥ 0. Then there exists a unique stochastic process g(s, ω) such that g ∈ L²([0, t], Ω) for all t ≥ 0 and
\[
M_t(\omega) = E[M_0] + \int_0^t g(s, \omega)\,dB(s) \quad \text{a.s. for all } t \ge 0.
\]

2.9 Existence and Uniqueness of Stochastic Differential Equations

The following theorem provides sufficient conditions for the existence and uniqueness of solutions of stochastic differential equations.

Theorem 2.9.1. Let T > 0 and let b(·, ·) : [0, T] × R^n → R^n, σ(·, ·) : [0, T] × R^n → R^{n×m} be measurable functions satisfying
\[
|b(t, x)| + |\sigma(t, x)| \le C(1 + |x|), \quad x \in \mathbb{R}^n,\ t \in [0, T], \qquad (2.9.1)
\]
for some constant C, and such that
\[
|b(t, x) - b(t, y)| + |\sigma(t, x) - \sigma(t, y)| \le D|x - y|, \quad x, y \in \mathbb{R}^n,\ t \in [0, T], \qquad (2.9.2)
\]
for some constant D. Here |σ|² = ∑ |σ_{ij}|². Let Z be a random variable which is independent of the σ-algebra F_∞^{(m)} generated by B_s(·), s ≥ 0, and such that
\[
E[|Z|^2] < \infty.
\]
Then the stochastic differential equation
\[
dX_t = b(t, X_t)\,dt + \sigma(t, X_t)\,dB_t, \quad 0 \le t \le T, \quad X_0 = Z, \qquad (2.9.3)
\]
has a unique t-continuous solution X_t(ω) with the property that X_t(ω) is adapted to the filtration F_t^Z generated by Z and B_s(·), s ≤ t, and
\[
E\left[ \int_0^T |X_t|^2\,dt \right] < \infty.
\]

Proof. I refer to Theorem 5.2.1. in [8] for the proof.

As is pointed out in the remark following Theorem 5.2.1. in [8], the condition (2.9.1) is imposed to prevent the solution from exploding. Condition (2.9.2) guarantees that the solution is unique.
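Under conditions (2.9.1)-(2.9.2), solutions are commonly approximated with the Euler-Maruyama scheme X_{k+1} = X_k + b(t_k, X_k)Δt + σ(t_k, X_k)ΔB_k. The following is a minimal Python sketch of my own (not part of the thesis); the Ornstein-Uhlenbeck coefficients b(t, x) = −θx and σ(t, x) = σ, which are Lipschitz and of linear growth, are an arbitrary illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(2)
theta, sigma = 1.0, 0.5                      # b(t, x) = -theta * x, sigma(t, x) = sigma
T, n_steps, n_paths = 1.0, 1000, 20_000
dt = T / n_steps

X = np.full(n_paths, 1.0)                    # deterministic initial value Z = 1
for _ in range(n_steps):
    dB = rng.normal(0.0, np.sqrt(dt), n_paths)
    X = X + (-theta * X) * dt + sigma * dB   # Euler-Maruyama step

print(X.mean())                              # close to exp(-theta * T) ≈ 0.368
```

For this linear SDE the exact mean is E[X_T] = X_0 e^{−θT}, which the Monte Carlo average reproduces up to discretization and sampling error.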


2.10 Girsanov Theorem

When dealing with Brownian motion in the previous sections, it has been tacitly understood that this is meant with respect to the probability measure P. It is natural to ask the following two questions:

1. Given a stochastic process, is it possible to determine if it is a Brownian motion with respect to a certain probability measure?

2. If so, how does one obtain this specific probability measure?

The first question is answered by the following theorem.

Theorem 2.10.1 (The Lévy characterization of Brownian motion). Let X_t be a real continuous stochastic process on a probability space (Ω, H, Q). Then the following two statements are equivalent:

1. X_t is a Brownian motion with respect to Q;

2. a) X_t is a martingale with respect to Q, and
   b) X_t² − t is a martingale with respect to Q.

Proof. The proof can be found in Theorem 3.3.16. in [3] or Theorem 8.4.2. in [4].

In the following, let E = E_P denote the expectation with respect to the probability measure P. The second question is answered by the following theorem.

Theorem 2.10.2 (The Girsanov theorem). Let X_t, t ∈ [0, T], be an Itô process of the form
\[
dX_t = u(t, \omega)\,dt + dB_t. \qquad (2.10.1)
\]
Assume that
\[
M_t := \exp\left\{ -\int_0^t u(s, \omega)\,dB_s - \frac{1}{2}\int_0^t u^2(s, \omega)\,ds \right\}, \quad 0 \le t \le T, \qquad (2.10.2)
\]
is a martingale with respect to {F_t} and P; a sufficient condition for this is the Novikov condition
\[
E\left[ \exp\left( \frac{1}{2}\int_0^T u^2(s, \omega)\,ds \right) \right] < \infty. \qquad (2.10.3)
\]
Define the measure Q on F_T by
\[
dQ(\omega) := M_T(\omega)\,dP(\omega). \qquad (2.10.4)
\]
Then Q is a probability measure on F_T and X_t is a Brownian motion with respect to Q for 0 ≤ t ≤ T.

Proof. The proof of this theorem and of two other versions can be found in [8, Section 8.6.].

The Girsanov theorem is an important result in general probability theory and hence in stochastic analysis. It also has many applications, for example determining the most probable price process for a certain stock.
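The change of measure can be checked by Monte Carlo (an illustration of my own, not from the thesis): for a constant drift u, weighting samples of X_T = uT + B_T by the density M_T = exp(−uB_T − u²T/2) should recover the moments of a centred Q-Brownian motion at time T.

```python
import numpy as np

rng = np.random.default_rng(3)
u, T, n = 0.5, 1.0, 200_000                # constant drift u(t, w) = 0.5 (illustrative choice)
B_T = rng.normal(0.0, np.sqrt(T), n)       # B_T under P
X_T = u * T + B_T                          # X_T from dX_t = u dt + dB_t

M_T = np.exp(-u * B_T - 0.5 * u ** 2 * T)  # Girsanov density dQ/dP on F_T

print(M_T.mean())                          # close to 1: Q is a probability measure
print((M_T * X_T).mean())                  # close to 0 = E_Q[X_T]
```

With 2 · 10⁵ samples both averages match their theoretical targets to within a few standard errors; this weighting trick is the basis of importance sampling for diffusions.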
