RESEARCH · Open Access

Optimal control with delayed information flow of systems driven by G-Brownian motion

Francesca Biagini · Thilo Meyer-Brandis · Bernt Øksendal · Krzysztof Paczka

Received: 14 September 2017 / Accepted: 19 September 2018

© The Author(s) 2018. Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Abstract. In this paper, we study strongly robust optimal control problems under volatility uncertainty. In the G-framework, we adapt the stochastic maximum principle to find necessary and sufficient conditions for the existence of a strongly robust optimal control.

Keywords: G-Brownian motion · optimal control problem · stochastic maximum principle

Mathematics Subject Classification (2010): 60H99; 93E20

1 Introduction

One of the motivations for this paper is to study the problem of optimal consumption and optimal portfolio allocation in finance under model uncertainty. In particular, we focus on volatility uncertainty, i.e., a situation where the volatility affecting the asset price dynamics is unknown and we need to consider a family of different volatility processes instead of just one fixed process (and hence also a family of models related to them).

F. Biagini

Department of Mathematics, LMU Munich, Theresienstraße 39, 80333 Munich, Germany e-mail: biagini@math.lmu.de

F. Biagini · B. Øksendal · K. Paczka

Department of Mathematics, University of Oslo, P.O.Box 1053 Blindern, N-0316 Oslo, Norway e-mail: biagini@math.lmu.de; oksendal@math.uio.no; k.j.paczka@cma.uio.no

T. Meyer-Brandis

Department of Mathematics, University of Munich, Theresienstraße 39, 80333 Munich, Germany e-mail: meyerbr@math.lmu.de


Volatility uncertainty has been investigated in the literature by following two approaches: by introducing an abstract sublinear expectation space with a special process called G-Brownian motion (see (Peng 2007), (Peng 2010)), or by capacity theory (see (Denis et al. 2011)). In (Denis et al. 2011), it is proven that these two methods are strongly related. The link between the two approaches is the representation of the sublinear expectation $\hat{\mathbb{E}}$ associated with the G-Brownian motion as a supremum of ordinary expectations over a tight family $\mathcal{P}$ of mutually singular probability measures:

$$\hat{\mathbb{E}}[\,\cdot\,] = \sup_{P \in \mathcal{P}} E_P[\,\cdot\,];$$
see (4) and Theorem 1 for more details.

In this paper, we work in a G-Brownian motion setting as in (Peng 2007) and use the related stochastic calculus, including the Itô formula, G-SDEs, martingale representation and G-BSDEs, as developed in (Peng 2007), (Peng 2010), (Soner et al. 2011a), (Song 2011), (Soner et al. 2011b), (Peng et al. 2014), (Hu et al. 2014c), (Hu et al. 2014a). It is important for understanding the nature of G-Brownian motion to note that its quadratic variation $\langle B\rangle$ is not deterministic, but absolutely continuous with density taking values in a fixed set (for example, $[\underline{\sigma}^2, \overline{\sigma}^2]$ for $d = 1$). Each $P \in \mathcal{P}$ can then be seen as a model with a different scenario for the quadratic variation. This justifies why G-Brownian motion is a good framework for investigating model uncertainty.
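To make the scenario interpretation concrete, here is a small numerical sketch (ours, not from the paper): the sublinear expectation of a payoff of $B_T$ can be approximated from below by maximizing Monte Carlo estimates over the constant-volatility scenarios $\sigma \in [\underline{\sigma}, \overline{\sigma}]$, a sub-family of $\mathcal{P}$. All function and parameter names are our own.

```python
import numpy as np

def sublinear_expectation(payoff, sig_lo, sig_hi, T=1.0,
                          n_scenarios=21, n_paths=200_000, seed=0):
    """Lower-bound approximation of E_hat[payoff(B_T)] = sup_P E_P[payoff(B_T)]
    obtained by maximizing Monte Carlo estimates over constant-volatility
    scenarios sigma in [sig_lo, sig_hi] (only a sub-family of P)."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    sigmas = np.linspace(sig_lo, sig_hi, n_scenarios)
    # Under the scenario sigma, B_T is N(0, sigma^2 * T).
    return max(payoff(s * np.sqrt(T) * z).mean() for s in sigmas)

# For a convex payoff the largest volatility is the adverse scenario:
print(sublinear_expectation(lambda x: x**2, 0.5, 1.0))  # approx sig_hi^2 * T = 1.0
```

Note that $\hat{\mathbb{E}}[-B_T^2]$ is attained at the smallest volatility instead, which is exactly the failure of linearity that the sublinear framework encodes.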

In a G-Brownian motion setting one considers the following stochastic optimal control problem: find the control $\hat{u} \in \mathcal{A}$ such that
$$J(\hat{u}) = \sup_{u \in \mathcal{A}} J(u), \qquad (1)$$

with
$$J(u) := \hat{\mathbb{E}}\Big[\int_0^T f(t, X^u(t), u(t))\,dt + g(X^u(T))\Big] = \sup_{P \in \mathcal{P}} E_P\Big[\int_0^T f(t, X^u(t), u(t))\,dt + g(X^u(T))\Big] =: \sup_{P \in \mathcal{P}} J_P(u), \qquad (2)$$

where $X^u$ is a controlled G-SDE, see (8). This problem has been studied in (Matoussi et al. 2013), (Hu et al. 2014b). In (Hu et al. 2014b), it is shown that the value function associated with such an optimal control problem satisfies the dynamic programming principle and is a viscosity solution of an HJB equation.¹ (Matoussi et al. 2013) investigates the robust investment problem for geometric G-Brownian motion, where 2BSDEs (which are closely related to G-BSDEs) are used to find an optimal solution.

In both papers the optimal control is robust in a worst-case scenario sense.

It is interesting to note that in the simplest example of the optimal portfolio problem, namely the Merton problem with logarithmic utility, one can easily prove that there exists a portfolio which is optimal not only in the worst-case scenario, but for all probability measures $P$ (with the optimality criterion $J_P$). We call this *a strongly robust control*. A strongly robust control is thus optimal in a much stronger sense than worst-case scenario optimality. The new strongly robust optimality uses the fact that the probability measures $P$ are mutually singular. Informally speaking, one can therefore modify the $P$-optimal control $\hat{u}^P$ outside the support of a probability measure $P$ without losing the $P$-optimality. As a consequence, if the family $\{\hat{u}^P\}_{P \in \mathcal{P}}$ satisfies some consistency conditions, under a suitable choice of the underlying filtration the controls can be aggregated into a unique control $\hat{u}$, which is optimal under every probability measure $P$. See (Soner et al. 2011b) for more details on aggregation.

¹To be exact, the authors considered a more general problem of recursive utility.

In this paper, we study strongly robust optimal control problems. However, instead of checking the consistency condition for the family of controls and using the aggregation theory established in (Soner et al. 2011b), we adapt the stochastic maximum principle to the G-framework to find necessary and sufficient conditions for the existence of a strongly robust optimal control. We stress that this method has the clear advantage that we solve only one G-BSDE to produce the strongly robust optimal control, instead of considering the optimal control problem for all $P \in \mathcal{P}$ (which are usually not Markovian laws) and checking the consistency condition. Another advantage is that we work with the raw filtration instead of enlarging it.

The recent paper (Hu and Ji 2016) also studies a stochastic maximum principle for stochastic recursive optimal control problems in the G-setting, but still using the worst-case approach. The authors use the Minimax Theorem to obtain the variational inequality under a reference probability $P$: the stochastic maximum principle then holds $P$-a.s., which is the main difference with respect to our approach. They prove that this stochastic maximum principle is also a sufficient condition under some convexity assumptions, but our control problem is different from the one in (Hu and Ji 2016) and considers delayed information.

The notion of strongly robust optimal control also has a better financial interpretation than the standard robust optimality mentioned above. The main drawback of the classical robust optimal control is that, from a mathematical point of view, it is a differential game, where one player chooses the optimal control $\hat{u}$ and the other chooses the optimal volatility represented by the law $\hat{P}$:
$$\sup_{u \in \mathcal{A}} \sup_{P \in \mathcal{P}} J_P(u) = J_{\hat{P}}(\hat{u}).$$
Therefore, the optimal pair $(\hat{u}, \hat{P})$ has a Nash equilibrium interpretation². The problem is that in real life there is no reason why we should assume that the worst case occurs, as there are no players trying to maximize gains from choosing $P$.

However, this is not the only problem with the standard robust optimality. Since the optimal probability measure $\hat{P}$ is mutually singular with any other measure $Q \in \mathcal{P}$, we can modify the control $\hat{u}$ outside the support of $\hat{P}$ without losing the (classical) robust optimality. Since, as noted above, the true probability will usually be different from $\hat{P}$, the classical robust optimal control may make little sense under $Q$. Moreover, in the standard robust optimality, the measure $\hat{P}$ is chosen to be static and does not change with time. As a result, not all available information is taken into consideration, as shown for the Merton problem with logarithmic utility in Section 5.

²This is even more visible for minimization problems, where one has a saddle point.

The paper is structured in the following way. In Section 2, we give a quick overview of the G-framework. Section 3 is devoted to a sufficient maximum principle in the partial-information case. In Section 4, we investigate the necessary maximum principle for the full-information case. In Section 5, we give four examples, including the Merton problem with logarithmic utility mentioned earlier and an LQ-problem. In Section 6, we provide a counter-example and show that it is not possible to relax the crucial assumption of the sufficient maximum principle without losing the strongly robust sense of optimality.

2 Preliminaries

Let $\Omega$ be a given set and $\mathcal{H}$ be a vector lattice of real functions defined on $\Omega$, i.e. a linear space containing 1 such that $X \in \mathcal{H}$ implies $|X| \in \mathcal{H}$. We will treat the elements of $\mathcal{H}$ as random variables.

Definition 1. A sublinear expectation $\mathbb{E}$ is a functional $\mathbb{E}: \mathcal{H} \to \mathbb{R}$ satisfying the following properties:

1. Monotonicity: if $X, Y \in \mathcal{H}$ and $X \ge Y$, then $\mathbb{E}[X] \ge \mathbb{E}[Y]$.
2. Constant preserving: for all $c \in \mathbb{R}$, we have $\mathbb{E}[c] = c$.
3. Sub-additivity: for all $X, Y \in \mathcal{H}$, we have $\mathbb{E}[X] - \mathbb{E}[Y] \le \mathbb{E}[X - Y]$.
4. Positive homogeneity: for all $X \in \mathcal{H}$, we have $\mathbb{E}[\lambda X] = \lambda\,\mathbb{E}[X]$ for all $\lambda \ge 0$.

The triple $(\Omega, \mathcal{H}, \mathbb{E})$ is called a sublinear expectation space.
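As a quick numerical sanity check (ours, not from the paper), an upper expectation over a finite scenario set satisfies the four axioms of Definition 1. Here the scenario set consists of two probability vectors on a six-point sample space; all names are our own.

```python
import numpy as np

# Toy sublinear expectation: upper expectation over two probability
# vectors p1, p2 on a 6-point sample space.
rng = np.random.default_rng(1)
n = 6
p1, p2 = rng.dirichlet(np.ones(n)), rng.dirichlet(np.ones(n))
E = lambda x: max(p1 @ x, p2 @ x)  # E[X] = sup over the two scenarios

X = rng.normal(size=n)
Y = X - np.abs(rng.normal(size=n))            # Y <= X pointwise
assert E(X) >= E(Y)                           # 1. monotonicity
assert np.isclose(E(np.full(n, 3.0)), 3.0)    # 2. constant preserving
assert E(X) - E(Y) <= E(X - Y) + 1e-12        # 3. sub-additivity
assert np.isclose(E(2.5 * X), 2.5 * E(X))     # 4. positive homogeneity
print("all four axioms hold on this example")
```

The same check works for any finite family of measures, which is the finite-dimensional shadow of the representation $\hat{\mathbb{E}} = \sup_{P \in \mathcal{P}} E_P$ below.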

We will consider a space $\mathcal{H}$ of random variables with the following property: if $X_i \in \mathcal{H}$, $i = 1, \ldots, n$, then
$$\varphi(X_1, \ldots, X_n) \in \mathcal{H} \qquad \forall\,\varphi \in C_{b,Lip}(\mathbb{R}^n),$$
where $C_{b,Lip}(\mathbb{R}^n)$ is the space of all bounded Lipschitz continuous functions on $\mathbb{R}^n$.

Definition 2. An $m$-dimensional random vector $Y = (Y_1, \ldots, Y_m)$ is said to be independent of an $n$-dimensional random vector $X = (X_1, \ldots, X_n)$ if for every $\varphi \in C_{b,Lip}(\mathbb{R}^n \times \mathbb{R}^m)$
$$\mathbb{E}[\varphi(X, Y)] = \mathbb{E}\big[\mathbb{E}[\varphi(x, Y)]_{x = X}\big].$$

Let $X_1$ and $X_2$ be $n$-dimensional random vectors defined on the sublinear expectation spaces $(\Omega_1, \mathcal{H}_1, \mathbb{E}_1)$ and $(\Omega_2, \mathcal{H}_2, \mathbb{E}_2)$, respectively. We say that $X_1$ and $X_2$ are identically distributed, denoted $X_1 \sim X_2$, if for each $\varphi \in C_{b,Lip}(\mathbb{R}^n)$ one has
$$\mathbb{E}_1[\varphi(X_1)] = \mathbb{E}_2[\varphi(X_2)].$$

Definition 3. A $d$-dimensional random vector $X = (X_1, \ldots, X_d)$ on a sublinear expectation space $(\Omega, \mathcal{H}, \mathbb{E})$ is said to be G-normally distributed if for each $a, b \ge 0$ and each $Y \in \mathcal{H}$ such that $X \sim Y$ and $Y$ is independent of $X$, one has
$$aX + bY \sim \sqrt{a^2 + b^2}\,X.$$
The letter $G$ denotes the function $G: \mathbb{S}_d \to \mathbb{R}$ defined as
$$G(A) := \tfrac{1}{2}\,\mathbb{E}[(AX, X)],$$
where $\mathbb{S}_d$ is the space of all $d \times d$ symmetric matrices. We assume that $G$ is non-degenerate, i.e., $G(A) - G(B) \ge \beta\,\mathrm{tr}[A - B]$ for all $A \ge B$ and some $\beta > 0$.

It can be checked that $G$ may be represented as
$$G(A) = \tfrac{1}{2}\sup_{\gamma \in \Gamma} \mathrm{tr}\big(\gamma\gamma^T A\big), \qquad (3)$$
where $\Gamma$ is a non-empty, bounded and closed subset of $\mathbb{R}^{d \times d}$.

Definition 4. Let $G: \mathbb{S}_d \to \mathbb{R}$ be a given monotonic and sublinear function. A stochastic process $B = (B_t)_{t \ge 0}$ on a sublinear expectation space $(\Omega, \mathcal{H}, \mathbb{E})$ is called a G-Brownian motion if it satisfies the following conditions:

1. $B_0 = 0$;
2. $B_t \in \mathcal{H}$ for each $t \ge 0$;
3. for each $t, s \ge 0$ the increment $B_{t+s} - B_t$ is independent of $(B_{t_1}, \ldots, B_{t_n})$ for each $n \in \mathbb{N}$ and $0 \le t_1 < \ldots < t_n \le t$. Moreover, $(B_{t+s} - B_t)\,s^{-1/2}$ is G-normally distributed.

Definition 5. Let $\Omega = C_0(\mathbb{R}_+, \mathbb{R}^d)$, i.e., the space of all $\mathbb{R}^d$-valued continuous functions starting at 0. We equip this space with the topology of uniform convergence on compact intervals and denote by $\mathcal{B}(\Omega)$ the Borel $\sigma$-algebra of $\Omega$. Let
$$\mathcal{H} = Lip(\Omega) := \big\{\varphi(\omega_{t_1}, \ldots, \omega_{t_n}) : n \in \mathbb{N},\ t_1, \ldots, t_n \in [0, \infty) \text{ and } \varphi \in C_{b,Lip}(\mathbb{R}^{d \times n})\big\}.$$
A G-expectation is a sublinear expectation on $(\Omega, \mathcal{H})$ defined as follows: for $X \in Lip(\Omega)$ of the form
$$X = \varphi\big(\omega_{t_1} - \omega_{t_0}, \ldots, \omega_{t_n} - \omega_{t_{n-1}}\big), \qquad 0 \le t_0 < t_1 < \ldots < t_n,$$
we set
$$\hat{\mathbb{E}}[X] := \mathbb{E}\big[\varphi\big(\xi_1\sqrt{t_1 - t_0}, \ldots, \xi_n\sqrt{t_n - t_{n-1}}\big)\big],$$
where $\xi_1, \ldots, \xi_n$ are $d$-dimensional random variables on a sublinear expectation space $(\tilde{\Omega}, \tilde{\mathcal{H}}, \mathbb{E})$ such that for each $i = 1, \ldots, n$, $\xi_i$ is G-normally distributed and independent of $(\xi_1, \ldots, \xi_{i-1})$. We denote by $L_G^p(\Omega)$ the completion of $Lip(\Omega)$ under the norm $\|X\|_p := \hat{\mathbb{E}}[|X|^p]^{1/p}$, $p \ge 1$. Then it is easy to check that $\hat{\mathbb{E}}$ is also a sublinear expectation on the space $(\Omega, L_G^p(\Omega))$, $L_G^p(\Omega)$ is a Banach space, and the canonical process $B_t(\omega) := \omega_t$ is a G-Brownian motion.
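A numerical illustration (ours, not from the paper): for $d = 1$, the G-expectation $\hat{\mathbb{E}}[\varphi(B_T)]$ can be approximated by a dynamic-programming recursion in which, at each small time step, the volatility in $\{\underline{\sigma}, \overline{\sigma}\}$ is chosen to maximize the one-step conditional expectation (a discrete analogue of the G-heat equation). The names and discretization choices below are our own.

```python
import numpy as np

def g_expectation_1d(phi, sig_lo, sig_hi, T=1.0, n_steps=50,
                     x_max=6.0, n_grid=601):
    """Approximate E_hat[phi(B_T)] for 1-d G-Brownian motion by backward
    dynamic programming: at each step the adverse volatility in
    {sig_lo, sig_hi} maximizes the one-step conditional expectation."""
    dt = T / n_steps
    x = np.linspace(-x_max, x_max, n_grid)
    u = phi(x)
    for _ in range(n_steps):
        vals = []
        for sig in (sig_lo, sig_hi):
            h = sig * np.sqrt(dt)
            # One-step increment +/- sig*sqrt(dt) with probability 1/2 each;
            # np.interp clamps outside the grid, negligible at x = 0 here.
            up = np.interp(x + h, x, u)
            dn = np.interp(x - h, x, u)
            vals.append(0.5 * (up + dn))
        u = np.maximum(*vals)          # sublinear step: sup over scenarios
    return float(np.interp(0.0, x, u))  # value at B_0 = 0

# Convex payoff: the maximal volatility is picked at every step.
print(g_expectation_1d(lambda x: x**2, 0.5, 1.0))  # approx sig_hi^2 * T = 1.0
```

For a concave payoff the recursion consistently picks the minimal volatility, reproducing the asymmetry $\hat{\mathbb{E}}[B_T^2] = \overline{\sigma}^2 T$ versus $-\hat{\mathbb{E}}[-B_T^2] = \underline{\sigma}^2 T$.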

Following (Peng 2010) and (Denis et al. 2011), we introduce the following notation: for each $t \in [0, \infty)$

1. $\Omega_t := \{\omega_{\cdot \wedge t} : \omega \in \Omega\}$, $\mathcal{F}_t := \mathcal{B}(\Omega_t)$;
2. $L^0(\Omega)$: the space of all $\mathcal{B}(\Omega)$-measurable real functions;
3. $L^0(\Omega_t)$: the space of all $\mathcal{B}(\Omega_t)$-measurable real functions;
4. $Lip(\Omega_t) := Lip(\Omega) \cap L^0(\Omega_t)$, $L_G^p(\Omega_t) := L_G^p(\Omega) \cap L^0(\Omega_t)$;
5. $M_G^2(0, T)$ is the completion of the set of elementary processes of the form
$$\eta(s) = \sum_{i=1}^{n-1} \xi_i \mathbf{1}_{[t_i, t_{i+1})}(s),$$
where $0 \le t_1 < t_2 < \ldots < t_n \le T$, $n \ge 1$ and $\xi_i \in Lip(\Omega_{t_i})$. The completion is taken under the norm
$$\|\eta\|^2_{M_G^2(0,T)} := \hat{\mathbb{E}}\Big[\int_0^T |\eta(s)|^2\,ds\Big].$$

Definition 6. Let $X \in Lip(\Omega)$ have the representation
$$X = \varphi\big(B_{t_1}, B_{t_2} - B_{t_1}, \ldots, B_{t_n} - B_{t_{n-1}}\big), \qquad \varphi \in C_{b,Lip}(\mathbb{R}^{d \times n}),\ 0 \le t_1 < \ldots < t_n < \infty.$$
We define the conditional G-expectation under $\mathcal{F}_{t_j}$ as
$$\hat{\mathbb{E}}[X \mid \mathcal{F}_{t_j}] := \psi\big(B_{t_1}, B_{t_2} - B_{t_1}, \ldots, B_{t_j} - B_{t_{j-1}}\big),$$
where
$$\psi(x) := \hat{\mathbb{E}}\big[\varphi\big(x, B_{t_{j+1}} - B_{t_j}, \ldots, B_{t_n} - B_{t_{n-1}}\big)\big].$$

As for the G-expectation, the conditional G-expectation can be extended to a sublinear operator $\hat{\mathbb{E}}[\,\cdot \mid \mathcal{F}_t]: L_G^p(\Omega) \to L_G^p(\Omega_t)$ by a continuity argument. For more properties of the conditional G-expectation, see (Peng 2010).

The (conditional) G-expectation plays a crucial role in the stochastic calculus for G-Brownian motion. In (Denis et al. 2011), it was shown that the analysis of the G-expectation can be embedded in the theory of upper expectations and capacities.

Theorem 1 ((Denis et al. 2011), Theorems 52 and 54). Let $(\tilde{\Omega}, \mathcal{G}, P_0)$ be a probability space carrying a standard $d$-dimensional Brownian motion $W$ with respect to its natural filtration $\mathbb{G}$. Let $\Gamma$ be a representation set defined as in (3) and denote by $\mathcal{A}_{0,\infty}$ the set of all $\Gamma$-valued $\mathbb{G}$-adapted processes on the interval $[0, \infty)$. For each $\theta \in \mathcal{A}_{0,\infty}$ define $P_\theta$ as the law of the stochastic integral $\int_0^{\cdot} \theta_s\,dW_s$ on the canonical space $\Omega = C_0(\mathbb{R}_+, \mathbb{R}^d)$. We introduce the sets
$$\mathcal{P}_1 := \{P_\theta : \theta \in \mathcal{A}_{0,\infty}\} \qquad \text{and} \qquad \mathcal{P} := \overline{\mathcal{P}_1}, \qquad (4)$$
where the closure is taken in the weak topology. $\mathcal{P}_1$ is tight, so $\mathcal{P}$ is weakly compact. Moreover, one has the representation
$$\hat{\mathbb{E}}[X] = \sup_{P \in \mathcal{P}_1} E_P[X] = \sup_{P \in \mathcal{P}} E_P[X] \qquad \text{for each } X \in L_G^1(\Omega). \qquad (5)$$


For convenience, we always consider the Brownian motion on the canonical space $\Omega$ with the Wiener measure $P_0$. An analogous representation holds for the conditional G-expectation.

Proposition 1 ((Soner et al. 2011a), Proposition 3.4). Let $\mathcal{Q}(t, P) := \{P' \in \mathcal{Q} : P' = P \text{ on } \mathcal{F}_t\}$, where $\mathcal{Q} = \mathcal{P}$ or $\mathcal{P}_1$. Then, for any $X \in L_G^1(\Omega)$ and $P \in \mathcal{Q}$, $\mathcal{Q} = \mathcal{P}$ or $\mathcal{P}_1$, one has
$$\hat{\mathbb{E}}[X \mid \mathcal{F}_t] = \operatorname*{ess\,sup}^{P}_{P' \in \mathcal{Q}(t,P)} E_{P'}[X \mid \mathcal{F}_t], \qquad P\text{-a.s.} \qquad (6)$$

We now introduce the Choquet capacity (see (Denis et al. 2011)) related to $\mathcal{P}$:
$$c(A) := \sup_{P \in \mathcal{P}} P(A), \qquad A \in \mathcal{B}(\Omega).$$

Definition 7.
1. A set $A$ is said to be polar if $c(A) = 0$. Let $\mathcal{N}$ be the collection of all polar sets. A property is said to hold quasi-surely (abbreviated q.s.) if it holds outside a polar set.
2. We say that a random variable $Y$ is a version of $X$ if $X = Y$ q.s.
3. A random variable $X$ is said to be quasi-continuous (q.c. for short) if for every $\varepsilon > 0$ there exists an open set $O$ such that $c(O) < \varepsilon$ and $X|_{O^c}$ is continuous.

We have the following characterization of the spaces $L_G^p(\Omega)$, which shows that $L_G^p(\Omega)$ is a rather small space.

Theorem 2 (Theorems 18 and 25 in (Denis et al. 2011)). For each $p \ge 1$ one has
$$L_G^p(\Omega) = \Big\{X \in L^0(\Omega) : X \text{ has a q.c. version and } \lim_{n \to \infty} \hat{\mathbb{E}}\big[|X|^p \mathbf{1}_{\{|X| > n\}}\big] = 0\Big\}.$$

The G-expectation turns out to be a good framework for developing stochastic calculus of the Itô type. We can also use G-SDEs and a version of backward SDEs. As backward equations are a key tool for the maximum principle, we now give a short introduction to G-BSDEs and their properties (for simplicity, in the one-dimensional case).

Fix two functions $f, g: \Omega \times [0, T] \times \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ and $\xi \in L_G^p(\Omega_T)$, $p > 2$. We say that the triple $(p^G, q^G, K)$ is a solution of the G-BSDE with drivers $f, g$ and terminal condition $\xi$ if
$$dp^G(t) = -f\big(t, p^G(t), q^G(t)\big)\,dt - g\big(t, p^G(t), q^G(t)\big)\,d\langle B\rangle(t) + q^G(t)\,dB(t) + dK(t), \qquad p^G(T) = \xi, \qquad (7)$$
where $K$ is a non-increasing G-martingale starting at 0. In (Hu et al. 2014c), the existence and uniqueness of such a G-BSDE are proved under some Lipschitz and regularity conditions on the drivers.

Furthermore, under any $P \in \mathcal{P}_1$, the process $p^G$ is a supersolution of a classical BSDE with drivers $f$ and $g$ and terminal condition $\xi$ on the probability space $(\Omega, \mathcal{F}, P)$ (we call such a BSDE a $P$-BSDE). Hence, by the comparison theorem for supersolutions and solutions, we get
$$p^G(t) \ge p^P(t) \qquad P\text{-a.s.},$$
where $p^P$ is a solution of the $P$-BSDE. It can also be checked that $p^G$ is minimal in the sense that
$$p^G(t) = \operatorname*{ess\,sup}^{P}_{Q \in \mathcal{P}_1(t,P)} p^Q(t) \qquad P\text{-a.s.};$$
see (Soner et al. 2011c) for this representation. From now on, we drop the superscript $G$ in the notation for G-BSDEs whenever this does not lead to confusion.

3 A sufficient maximum principle

Let $B(t)$ be a G-Brownian motion with associated sublinear expectation operator $\hat{\mathbb{E}}$. We consider controls $u(t)$ taking values in a closed convex set $U \subset \mathbb{R}$. Let $X(t) = X^u(t)$ be a controlled process of the form
$$dX(t) = b(t, X(t), u(t))\,dt + \mu(t, X(t), u(t))\,d\langle B\rangle_t + \sigma(t, X(t), u(t))\,dB(t), \quad 0 \le t \le T; \qquad X(0) = x \in \mathbb{R}. \qquad (8)$$
We assume that the coefficients $b, \mu, \sigma$ are Lipschitz continuous w.r.t. the space variable, uniformly in $(t, u)$. Moreover, if the coefficients are not deterministic, they must belong to the space $M_G^2(0, T)$ for each $(x, u) \in \mathbb{R} \times U$.

Let $f: [0, T] \times \mathbb{R} \times U \to \mathbb{R}$ and $g: \mathbb{R} \to \mathbb{R}$ be two measurable functions such that $f$ is $C^1$ w.r.t. the second variable and $g$ is a lower-bounded, differentiable function with quadratic growth such that there exist constants $C > 0$ and $\epsilon > 0$ with
$$|g'(x)| < C(1 + |x|)^{1 + \epsilon/2}.$$

We let $\mathcal{A}$ denote the set of all admissible controls. For $u$ to be in $\mathcal{A}$ we require that $u(t)$ is quasi-continuous for all $t \in [0, T]$ and adapted to $(\mathcal{F}_{(t-\delta)^+})_{t \ge \delta}$, where $\delta \ge 0$ is a given constant. This means that our control $u$ has access only to a delayed information flow. Moreover, we assume that for each $u \in \mathcal{A}$ the following integrability condition is satisfied:
$$\hat{\mathbb{E}}\Big[\int_0^T |f(t, X(t), u(t))|\,dt\Big] < \infty.$$

Then, for each $P \in \mathcal{P}$, the performance functional associated with $u \in \mathcal{A}$ is assumed to be of the form
$$J_P(u) = E_P\Big[\int_0^T f(t, X(t), u(t))\,dt + g(X(T))\Big]. \qquad (9)$$
We study the following strongly robust optimal control problem: find $\hat{u} \in \mathcal{A}$ such that
$$\sup_{u \in \mathcal{A}} J_P(u) = J_P(\hat{u}) \qquad \forall\,P \in \mathcal{P}, \qquad (10)$$


where the set $\mathcal{P}$ is introduced in (4). To this end, we define the Hamiltonian
$$H(t, x, u, p, q) = f(t, x, u) + \Big(b(t, x, u) + \mu(t, x, u)\frac{d\langle B\rangle_t}{dt}\Big)p + \sigma(t, x, u)\frac{d\langle B\rangle_t}{dt}\,q, \qquad (11)$$
and the associated G-BSDE for the adjoint processes $p(t), q(t), K(t)$:
$$dp(t) = -\frac{\partial H}{\partial x}(t, X(t), u(t), p(t), q(t))\,dt + q(t)\,dB(t) + dK(t), \quad 0 \le t \le T; \qquad p(T) = g'(X(T)). \qquad (12)$$
Note that the solution of such a G-BSDE exists thanks to the assumptions on the functions $f$ and $g$ and the definition of the admissible controls (see (Hu et al. 2014c) for details).
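To see the machinery of (11)-(12) in a toy case (our illustration, not from the paper; the lower-boundedness requirement on $g$ is ignored purely for the sake of a short example):

```latex
% Toy LQ-type instance of (8)-(12) (ours):
% dynamics dX(t) = u(t) dt + dB(t), i.e. b = u, mu = 0, sigma = 1,
% with f(t,x,u) = -u^2 and terminal reward g(x) = \lambda x.
\begin{align*}
H(t,x,u,p,q) &= -u^2 + u\,p + \frac{d\langle B\rangle_t}{dt}\,q
  &&\text{by (11)},\\
dp(t) &= -\frac{\partial H}{\partial x}\,dt + q(t)\,dB(t) + dK(t)
       = q(t)\,dB(t) + dK(t), && p(T) = g'(X(T)) = \lambda,\\
\hat p &\equiv \lambda,\quad \hat q \equiv 0,\quad \hat K \equiv 0
  &&\text{(constant solution of (12))},\\
0 &= \frac{\partial H}{\partial u}\Big|_{u=\hat u}
   = -2\hat u + \hat p
   \;\Longrightarrow\; \hat u \equiv \tfrac{\lambda}{2}.
\end{align*}
```

Since $(x, u) \mapsto H$ is concave, $g$ is linear, $\hat{K} \equiv 0$, and $\hat{u}$ is deterministic (so the delayed-information condition (14) below holds for any $\delta$), Theorem 3 below would yield that $\hat{u} \equiv \lambda/2$ is strongly robust in this sketch.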

Theorem 3. Let $\hat{u} \in \mathcal{A}$ with corresponding solution $\hat{X}(t), \hat{p}(t), \hat{q}(t), \hat{K}(t)$ of (8) and (12) such that $\hat{K} \equiv 0$. Assume that:
$$(x, u) \mapsto H(t, x, u, \hat{p}(t), \hat{q}(t)) \text{ and } x \mapsto g(x) \text{ are concave for all } t,\ \text{q.s.}, \qquad (13)$$
and
$$\hat{\mathbb{E}}\Big[\pm\frac{\partial}{\partial u}H(t, \hat{X}(t), u, \hat{p}(t), \hat{q}(t))\Big|_{u = \hat{u}(t)} \,\Big|\, \mathcal{F}_{(t-\delta)^+}\Big] = 0 \qquad (14)$$
for all $t$, q.s. Then $\hat{u}$ is a strongly robust optimal control for the problem (10).

Proof. For the sake of simplicity, in the following we adopt the concise notation $f(t) := f(t, X^u(t), u(t))$, $\hat{f}(t) := f(t, X^{\hat{u}}(t), \hat{u}(t))$, $X(T) := X^u(T)$, $\hat{X}(T) := X^{\hat{u}}(T)$. Let $u \in \mathcal{A}$ be arbitrary and consider
$$\sup_{P \in \mathcal{P}}\{J_P(u) - J_P(\hat{u})\} = \sup_{P \in \mathcal{P}} E_P\Big[\int_0^T \big(f(t) - \hat{f}(t)\big)\,dt + g(X(T)) - g\big(\hat{X}(T)\big)\Big] = \hat{\mathbb{E}}\Big[\int_0^T \big(f(t) - \hat{f}(t)\big)\,dt + g(X(T)) - g\big(\hat{X}(T)\big)\Big] = \hat{\mathbb{E}}[I_1 + I_2], \qquad (15)$$
where $J$ is introduced in (2) and
$$I_1 := \int_0^T \big(f(t) - \hat{f}(t)\big)\,dt, \qquad I_2 := g(X(T)) - g\big(\hat{X}(T)\big)$$

By the definition of $H$, we can write
$$I_1 = \int_0^T \Big[H(t) - \hat{H}(t) - \Big(b(t) - \hat{b}(t) + \big(\mu(t) - \hat{\mu}(t)\big)\frac{d\langle B\rangle_t}{dt}\Big)\hat{p}(t) - \big(\sigma(t) - \hat{\sigma}(t)\big)\frac{d\langle B\rangle_t}{dt}\,\hat{q}(t)\Big]\,dt. \qquad (16)$$

By the concavity of $g$, (12), and the Itô formula, we have
$$\begin{aligned}
I_2 &\le g'\big(\hat{X}(T)\big)\big(X(T) - \hat{X}(T)\big) = \hat{p}(T)\big(X(T) - \hat{X}(T)\big)\\
&= \int_0^T \hat{p}(t)\,d\big(X(t) - \hat{X}(t)\big) + \int_0^T \big(X(t) - \hat{X}(t)\big)\,d\hat{p}(t) + \int_0^T d\big\langle \hat{p},\, X - \hat{X}\big\rangle(t)\\
&= \int_0^T \hat{p}(t)\Big(b(t) - \hat{b}(t) + \big(\mu(t) - \hat{\mu}(t)\big)\frac{d\langle B\rangle_t}{dt}\Big)\,dt - \int_0^T \big(X(t) - \hat{X}(t)\big)\frac{\partial \hat{H}}{\partial x}(t)\,dt + \int_0^T \big(\sigma(t) - \hat{\sigma}(t)\big)\frac{d\langle B\rangle_t}{dt}\,\hat{q}(t)\,dt\\
&\quad + \int_0^T \hat{p}(t)\big(\sigma(t) - \hat{\sigma}(t)\big)\,dB(t) + \int_0^T \big(X(t) - \hat{X}(t)\big)\hat{q}(t)\,dB(t). \qquad (17)
\end{aligned}$$
Adding (16) and (17) and using the concavity of $H$, we get, by the sublinearity of the G-expectation and by (15), that

$$\begin{aligned}
\sup_{P \in \mathcal{P}}\{J_P(u) - J_P(\hat{u})\}
&\le \hat{\mathbb{E}}\Big[\int_0^T \Big(\hat{p}(t)\big(\sigma(t) - \hat{\sigma}(t)\big) + \big(X(t) - \hat{X}(t)\big)\hat{q}(t)\Big)\,dB(t)\Big]\\
&\quad + \hat{\mathbb{E}}\Big[\int_0^T \Big(H(t) - \hat{H}(t) - \frac{\partial \hat{H}}{\partial x}(t)\big(X(t) - \hat{X}(t)\big)\Big)\,dt\Big]\\
&\le \hat{\mathbb{E}}\Big[\int_0^T \frac{\partial \hat{H}}{\partial u}(t)\big(u(t) - \hat{u}(t)\big)\,dt\Big]
\le \int_0^T \hat{\mathbb{E}}\Big[\frac{\partial \hat{H}}{\partial u}(t)\big(u(t) - \hat{u}(t)\big)\Big]\,dt\\
&= \int_0^T \hat{\mathbb{E}}\Big[\hat{\mathbb{E}}\Big[\frac{\partial \hat{H}}{\partial u}(t)\big(u(t) - \hat{u}(t)\big)\,\Big|\,\mathcal{F}_{(t-\delta)^+}\Big]\Big]\,dt\\
&\le \int_0^T \hat{\mathbb{E}}\Big[\hat{\mathbb{E}}\Big[\frac{\partial \hat{H}}{\partial u}(t)\,\Big|\,\mathcal{F}_{(t-\delta)^+}\Big]\big(u(t) - \hat{u}(t)\big)^+ + \hat{\mathbb{E}}\Big[-\frac{\partial \hat{H}}{\partial u}(t)\,\Big|\,\mathcal{F}_{(t-\delta)^+}\Big]\big(u(t) - \hat{u}(t)\big)^-\Big]\,dt = 0,
\end{aligned}$$
since, by (14), $u = \hat{u}(t)$ is a critical point of the Hamiltonian conditionally on $\mathcal{F}_{(t-\delta)^+}$. This proves that $\hat{u}$ is a strongly robust optimal control. □

Remark 1. Note that if $\delta = 0$, we can slightly relax the assumption in Eq. (14) by only requiring that
$$\max_{v \in U} H(t, \hat{X}(t), v, \hat{p}(t), \hat{q}(t)) = H(t, \hat{X}(t), \hat{u}(t), \hat{p}(t), \hat{q}(t)).$$


4 A necessary maximum principle for the full-information case

A drawback of the previous result is that the concavity conditions are not satisfied in many applications. Therefore, it is of interest to have a maximum principle which does not need this condition. Moreover, the requirement that the non-increasing G-martingale $\hat{K}$ disappears from the adjoint equation for the optimal control $\hat{u}$ is a very strong assumption, which, however, is crucial in the proof. In this section, we prove a result which does not depend on the concavity of the Hamiltonian. Moreover, for the Merton problem we show that the necessary maximum principle can be obtained without the assumption on the process $\hat{K}$. We make the following assumptions.

A1. For all $u, \beta \in \mathcal{A}$ with $\beta$ bounded, there exists $\delta > 0$ such that $u + a\beta \in \mathcal{A}$ for all $a \in (-\delta, \delta)$.

A2. For all $t, h$ such that $0 \le t < t + h \le T$ and all bounded random variables $\alpha \in L_G^1(\Omega_t)$,³ the control
$$\beta(s) := \alpha\,\mathbf{1}_{[t, t+h]}(s)$$
belongs to $\mathcal{A}$.

Remark 2. Note that given $u, \beta \in \mathcal{A}$ with $\beta$ bounded, the derivative process
$$Y(t) := \frac{d}{da}X^{u + a\beta}(t)\Big|_{a=0}$$
exists, $Y(0) = 0$, and
$$dY(t) = \Big(\frac{\partial b}{\partial x}(t)Y(t) + \frac{\partial b}{\partial u}(t)\beta(t)\Big)\,dt + \Big(\frac{\partial \mu}{\partial x}(t)Y(t) + \frac{\partial \mu}{\partial u}(t)\beta(t)\Big)\,d\langle B\rangle_t + \Big(\frac{\partial \sigma}{\partial x}(t)Y(t) + \frac{\partial \sigma}{\partial u}(t)\beta(t)\Big)\,dB(t).$$
This follows from the smoothness ($C^1$) assumptions on the coefficients $b, \mu, \sigma$ and the Itô formula.

Before we give the necessary maximum principle for this problem, we state the following remark, which shows that it is sufficient to consider a control which is optimal for all $P \in \mathcal{P}_1$.

Remark 3. Note that if $\hat{u} \in \mathcal{A}$ is a strongly robust optimal control, it is of course an optimal control for the following problem:
$$\sup_{u \in \mathcal{A}} J_P(u) = J_P(\hat{u}) \qquad \forall\,P \in \mathcal{P}_1. \qquad (18)$$
However, the converse also holds, thanks to the conditions on the set of admissible controls $\mathcal{A}$. Namely, if $\hat{u}$ satisfies (18), then for any fixed $u \in \mathcal{A}$ and any $P \in \mathcal{P}_1$
$$0 \ge E_P\Big[\int_0^T f\big(t, X^u(t), u(t)\big)\,dt - \int_0^T f\big(t, X^{\hat{u}}(t), \hat{u}(t)\big)\,dt + g(X^u(T)) - g\big(X^{\hat{u}}(T)\big)\Big].$$

³It is easy to see that for a fixed $P \in \mathcal{P}$ the set of all bounded random variables from the space $L_G^1(\Omega_t)$ is dense in the space $L_P^p(\Omega_t)$ under the norm $(E_P[|\cdot|^p])^{1/p}$ for any $p \ge 1$.

We again use the shorthand notation $\hat{f}(t) := f\big(t, X^{\hat{u}}(t), \hat{u}(t)\big)$ and $f(t) := f(t, X^u(t), u(t))$ and conclude that
$$0 \ge \sup_{P \in \mathcal{P}_1} E_P\Big[\int_0^T f(t)\,dt - \int_0^T \hat{f}(t)\,dt + g(X^u(T)) - g\big(X^{\hat{u}}(T)\big)\Big].$$
Note that due to the conditions on the admissible controls, the random variables $\int_0^T f(t)\,dt$, $\int_0^T \hat{f}(t)\,dt$, $g(X^u(T))$, and $g\big(X^{\hat{u}}(T)\big)$ belong to $L_G^1(\Omega)$; hence, by the representation of the G-expectation, we have
$$0 \ge \sup_{P \in \mathcal{P}} E_P\Big[\int_0^T f(t)\,dt - \int_0^T \hat{f}(t)\,dt + g(X^u(T)) - g\big(X^{\hat{u}}(T)\big)\Big],$$
which implies by Proposition 1 that $\hat{u}$ is a strongly robust optimal control.

Lemma 1. Assume that A1, A2 hold and that $\hat{u}$ is an optimal control for the performance functional
$$u \mapsto J_P(u)$$
for some probability measure $P \in \mathcal{P}_1$. Consider the adjoint equation as a BSDE under the probability measure $P$:
$$d\hat{p}^P(t) = -\frac{\partial H}{\partial x}\big(t, \hat{X}(t), \hat{u}(t), \hat{p}^P(t), \hat{q}^P(t)\big)\,dt + \hat{q}^P(t)\,dB(t), \quad 0 \le t \le T; \qquad \hat{p}^P(T) = g'\big(\hat{X}(T)\big) \quad P\text{-a.s.} \qquad (19)$$
Then
$$\frac{\partial \hat{H}^P}{\partial u}(t) := \frac{\partial}{\partial u}H\big(t, \hat{X}(t), u, \hat{p}^P(t), \hat{q}^P(t)\big)\Big|_{u = \hat{u}(t)} = 0.$$

Proof. Consider
$$\begin{aligned}
\frac{d}{da}J_P(u + a\beta)\Big|_{a=0} &= \lim_{a \to 0}\frac{1}{a}\Big(E_P\Big[\int_0^T f\big(t, X^{u+a\beta}(t), u(t) + a\beta(t)\big)\,dt + g\big(X^{u+a\beta}(T)\big)\Big] - E_P\Big[\int_0^T f(t, X(t), u(t))\,dt + g(X(T))\Big]\Big)\\
&= E_P\Big[\int_0^T \Big(\frac{\partial f}{\partial x}(t, X(t), u(t))Y(t) + \frac{\partial f}{\partial u}(t, X(t), u(t))\beta(t)\Big)\,dt + g'(X(T))Y(T)\Big]. \qquad (20)
\end{aligned}$$

By the Itô formula,
$$\begin{aligned}
E_P\big[g'(X(T))Y(T)\big] &= E_P\big[p^P(T)Y(T)\big]\\
&= E_P\Big[\int_0^T p^P(t)\,dY(t) + \int_0^T Y(t)\,dp^P(t) + \int_0^T q^P(t)\Big(\frac{\partial \sigma}{\partial x}(t)Y(t) + \frac{\partial \sigma}{\partial u}(t)\beta(t)\Big)\,d\langle B\rangle_t\Big]\\
&= E_P\Big[\int_0^T p^P(t)\Big(\frac{\partial b}{\partial x}(t)Y(t) + \frac{\partial b}{\partial u}(t)\beta(t)\Big)\,dt + \int_0^T p^P(t)\Big(\frac{\partial \mu}{\partial x}(t)Y(t) + \frac{\partial \mu}{\partial u}(t)\beta(t)\Big)\,d\langle B\rangle_t\\
&\qquad\quad + \int_0^T Y(t)\Big(-\frac{\partial \hat{H}^P}{\partial x}(t)\Big)\,dt + \int_0^T q^P(t)\Big(\frac{\partial \sigma}{\partial x}(t)Y(t) + \frac{\partial \sigma}{\partial u}(t)\beta(t)\Big)\,d\langle B\rangle_t\Big]\\
&= E_P\Big[\int_0^T Y(t)\Big(p^P(t)\Big(\frac{\partial b}{\partial x}(t) + \frac{\partial \mu}{\partial x}(t)\frac{d\langle B\rangle_t}{dt}\Big) + q^P(t)\frac{\partial \sigma}{\partial x}(t)\frac{d\langle B\rangle_t}{dt} - \frac{\partial H^P}{\partial x}(t)\Big)\,dt\\
&\qquad\quad + \int_0^T \beta(t)\Big(p^P(t)\Big(\frac{\partial b}{\partial u}(t) + \frac{\partial \mu}{\partial u}(t)\frac{d\langle B\rangle_t}{dt}\Big) + q^P(t)\frac{\partial \sigma}{\partial u}(t)\frac{d\langle B\rangle_t}{dt}\Big)\,dt\Big]. \qquad (21)
\end{aligned}$$
Adding (20) and (21), we get
$$\frac{d}{da}J_P(u + a\beta)\Big|_{a=0} = E_P\Big[\int_0^T \beta(t)\frac{\partial H^P}{\partial u}(t)\,dt\Big].$$
If $\hat{u}$ is an optimal control, then the above gives
$$0 = \frac{d}{da}J_P(\hat{u} + a\beta)\Big|_{a=0} = E_P\Big[\int_0^T \beta(t)\frac{\partial \hat{H}^P}{\partial u}(t)\,dt\Big]$$
for all bounded $\beta \in \mathcal{A}$. Applying this to both $\beta$ and $-\beta$, we conclude that
$$E_P\Big[\int_0^T \beta(t)\frac{\partial \hat{H}^P}{\partial u}(t)\,dt\Big] = 0.$$
By A2, together with the footnote about denseness, we can then deduce that
$$\frac{\partial \hat{H}^P}{\partial u}(t) = 0 \qquad P\text{-a.s.} \qquad \square$$


Using the lemma, we can easily get the following necessary maximum principle.

Theorem 4. Assume that A1, A2 hold and that $\hat{u}$ is a strongly robust optimal control for the performance functional
$$u \mapsto J_P(u)$$
for every probability measure $P \in \mathcal{P}_1$. Consider the adjoint equation as a G-BSDE:
$$d\hat{p}^G(t) = -\frac{\partial H}{\partial x}\big(t, \hat{X}(t), \hat{u}(t), \hat{p}^G(t), \hat{q}^G(t)\big)\,dt + \hat{q}^G(t)\,dB(t) + d\hat{K}(t), \quad 0 \le t \le T; \qquad \hat{p}^G(T) = g'\big(\hat{X}(T)\big) \quad \text{q.s.} \qquad (22)$$
If $\hat{K} \equiv 0$ q.s., then
$$\frac{\partial \hat{H}^G}{\partial u}(t) := \frac{\partial}{\partial u}H\big(t, \hat{X}(t), u, \hat{p}^G(t), \hat{q}^G(t)\big)\Big|_{u = \hat{u}(t)} = 0 \quad \text{q.s.} \qquad (23)$$

Proof. We prove that the relation in (23) holds for every $P \in \mathcal{P}_1$. Fix $P \in \mathcal{P}_1$. If $\hat{K} \equiv 0$ q.s., then by the uniqueness of the solution of the $P$-BSDE, we get that $\hat{p}^G \equiv \hat{p}^P$ $P$-a.s. and $\hat{q}^G \equiv \hat{q}^P$ $P$-a.s. But by Lemma 1, we know that $\hat{u}$ is a $P$-a.s. critical point of $\hat{H}^P(t)$, hence also of $\hat{H}^G(t)$. By the arbitrariness of $P \in \mathcal{P}_1$, we get
$$\frac{\partial}{\partial u}H\big(t, \hat{X}(t), u, \hat{p}^G(t), \hat{q}^G(t)\big)\Big|_{u = \hat{u}(t)} = 0 \quad P\text{-a.s.} \quad \forall\,P \in \mathcal{P}_1.$$
We get the assertion of the theorem from the general fact that if $\xi, \eta \in L_G^1(\Omega)$ and $\xi = \eta$ $P$-a.s. for all $P \in \mathcal{P}_1$, then $\xi = \eta$ q.s. □

As we mentioned at the beginning of this section, the assumption on the process $\hat{K}$ is a big disadvantage. However, if we limit our considerations to Merton-type problems, we are able to show the necessary maximum principle without this assumption.

Theorem 5. Assume that:
1. A1, A2 hold;
2. $b \equiv 0$, $\mu(t, x, u) = \psi(x)l(u)m(t)$ and $\sigma(t, x, u) = \psi(x)h(u)s(t)$ for $\psi, l, h \in C^1(\mathbb{R})$ and some bounded processes $m$ and $s$ such that for each $t \in [0, T]$, $m(t)$ and $s(t)$ are quasi-continuous; moreover, let $c(s(t) = 0) = 0$ for all $t \in [0, T]$;
3. $f \equiv 0$;
4. $X(0) = x \ne 0$.

Let $\hat{u}$ be a strongly robust optimal control for the performance functional $u \mapsto J_P(u)$ for every probability measure $P \in \mathcal{P}_1$. If $c(l(\hat{u}(t)) = 0) = 0$, $c(h(\hat{u}(t)) = 0) = 0$, $c(h'(\hat{u}(t)) = 0) = 0$, $c(\psi(\hat{X}(t)) = 0) = 0$, $c(\psi'(\hat{X}(t)) = 0) = 0$, and
$$c\big(l'(\hat{u}(t))h(\hat{u}(t)) - l(\hat{u}(t))h'(\hat{u}(t)) \ne 0\big) = 0\,^4$$
for all $t \in [0, T]$, then
$$\frac{\partial \hat{H}^G}{\partial u}(t) := \frac{\partial}{\partial u}H\big(t, \hat{X}(t), u, \hat{p}^G(t), \hat{q}^G(t)\big)\Big|_{u = \hat{u}(t)} = 0 \quad \text{q.s.} \qquad (24)$$

Proof. Fix a probability measure $P \in \mathcal{P}_1$. By Lemma 1, we know that $\hat{u}$ is a critical point ($P$-a.s.) of the Hamiltonian:

$$\frac{\partial}{\partial u}H\big(t, \hat{X}(t), \hat{u}, \hat{p}^P(t), \hat{q}^P(t)\big) = 0, \qquad t \in [0, T].$$
Using this fact, we get
$$0 = \frac{\partial}{\partial u}H\big(t, \hat{X}(t), \hat{u}, \hat{p}^P(t), \hat{q}^P(t)\big) = \psi\big(\hat{X}(t)\big)\Big(l'(\hat{u}(t))m(t)\hat{p}^P(t) + h'(\hat{u}(t))s(t)\hat{q}^P(t)\Big)\frac{d\langle B\rangle(t)}{dt}.$$
By the assumptions on the process $s$ and the function $h$, we compute that
$$\hat{q}^P(t) = -\frac{m(t)}{s(t)}\,\frac{l'(\hat{u}(t))}{h'(\hat{u}(t))}\,\hat{p}^P(t).$$
We have that
$$\frac{\partial}{\partial x}H\big(t, \hat{X}(t), \hat{u}(t), \hat{p}^P(t), \hat{q}^P(t)\big) = \psi'\big(\hat{X}(t)\big)\Big(l(\hat{u}(t))m(t)\hat{p}^P(t) + h(\hat{u}(t))s(t)\hat{q}^P(t)\Big)\frac{d\langle B\rangle(t)}{dt} = \psi'\big(\hat{X}(t)\big)\hat{p}^P(t)m(t)\Big(l(\hat{u}(t)) - \frac{h(\hat{u}(t))\,l'(\hat{u}(t))}{h'(\hat{u}(t))}\Big)\frac{d\langle B\rangle(t)}{dt} = 0,
$$

since $c\big(l'(\hat{u})h(\hat{u}) - l(\hat{u})h'(\hat{u}) \ne 0\big) = 0$ by hypothesis. But then we see that $\hat{p}^P$ has dynamics
$$d\hat{p}^P(t) = -\frac{\partial}{\partial x}H\big(t, \hat{X}(t), \hat{u}(t), \hat{p}^P(t), \hat{q}^P(t)\big)\,dt + \hat{q}^P(t)\,dB(t) = -\frac{m(t)}{s(t)}\,\frac{l'(\hat{u}(t))}{h'(\hat{u}(t))}\,\hat{p}^P(t)\,dB(t).$$
Hence,
$$\hat{p}^P(t) = E_P\big[g'\big(\hat{X}(T)\big) \,\big|\, \mathcal{F}_t\big] \qquad P\text{-a.s.}$$
We also recall that
$$\hat{p}^G(t) = \operatorname*{ess\,sup}^{P}_{Q \in \mathcal{P}_1(t,P)} \hat{p}^Q(t) = \operatorname*{ess\,sup}^{P}_{Q \in \mathcal{P}_1(t,P)} E_Q\big[g'\big(\hat{X}(T)\big) \,\big|\, \mathcal{F}_t\big] \qquad P\text{-a.s.}$$

⁴Note that this condition is satisfied if $l(x) = a\,h(x)$ for some constant $a \in \mathbb{R}$. However, it may also include other cases, since the range of $\hat{u}(t)$, $t \in [0, T]$, is not necessarily all of $\mathbb{R}$.


Thus, by the characterization of the conditional G-expectation in (6), we obtain that $\hat{p}^G(t)$ is a G-martingale with representation
$$\hat{p}^G(t) = \hat{\mathbb{E}}\big[g'\big(\hat{X}(T)\big) \,\big|\, \mathcal{F}_t\big] = \hat{\mathbb{E}}\big[g'\big(\hat{X}(T)\big)\big] + \int_0^t \hat{q}^G(s)\,dB(s) + \hat{K}(t) \quad \text{q.s.},$$
and consequently it has dynamics
$$d\hat{p}^G(t) = \hat{q}^G(t)\,dB(t) + d\hat{K}(t).$$
But in that case we know that for almost all $t \in [0, T]$ we must have
$$0 = \frac{\partial}{\partial x}H\big(t, \hat{X}(t), \hat{u}(t), \hat{p}^G(t), \hat{q}^G(t)\big) = \psi'\big(\hat{X}(t)\big)\Big(l(\hat{u})m(t)\hat{p}^G(t) + h(\hat{u})s(t)\hat{q}^G(t)\Big)\frac{d\langle B\rangle(t)}{dt} \qquad (25)$$
q.s. By the assumption on $\psi'(\hat{X})$, we conclude that
$$l(\hat{u})m(t)\hat{p}^G(t) + h(\hat{u})s(t)\hat{q}^G(t) = 0 \quad \text{q.s.}$$
Hence,
$$\hat{q}^G(t) = -\frac{m(t)}{s(t)}\,\frac{l(\hat{u}(t))}{h(\hat{u}(t))}\,\hat{p}^G(t),$$
since $c(h(\hat{u}(t)) = 0) = 0$ for all $t \in [0, T]$, and one then easily checks that
$$\frac{\partial}{\partial u}H\big(t, \hat{X}(t), \hat{u}, \hat{p}^G(t), \hat{q}^G(t)\big) = 0. \qquad \square$$

5 Examples

We now consider some examples to illustrate the previous results. In the following, we work with a one-dimensional G-Brownian motion with operator $G$ of the form
$$G(a) := \tfrac{1}{2}\big(a^+ - \underline{\sigma}^2 a^-\big), \qquad \underline{\sigma}^2 > 0, \qquad (26)$$
i.e., with quadratic variation $\langle B\rangle(t)$ lying within the bounds $\underline{\sigma}^2 t$ and $t$.

5.1 Example I

Consider
$$dX(t) = dB(t) - c(t)\,dt, \qquad (27)$$
where $c(t)$, $t \in [0, T]$, is a stochastic process such that $c(t) \in L_G^1(\Omega_t)$ for all $t \in [0, T]$. We wish to solve the optimal control problem for every $P \in \mathcal{P}$ under the performance criterion
$$J_P(c) = E_P\Big[\int_0^T \ln c(t)\,dt + X(T)\Big]. \qquad (28)$$
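The extract breaks off at (28). As a hedged sketch (ours, not the authors' worked solution), the sufficient maximum principle of Section 3 applies formally to (27)-(28) with $b = -c$, $\mu = 0$, $\sigma = 1$, $f = \ln c$, $g(x) = x$:

```latex
% Hedged sketch (ours) of how Theorem 3 handles Example I.
\begin{align*}
H(t,x,c,p,q) &= \ln c - c\,p + \frac{d\langle B\rangle_t}{dt}\,q
  &&\text{by (11)},\\
d\hat p(t) &= -\frac{\partial H}{\partial x}\,dt + \hat q(t)\,dB(t) + d\hat K(t)
            = \hat q(t)\,dB(t) + d\hat K(t),
  && \hat p(T) = g'(\hat X(T)) = 1,\\
\hat p &\equiv 1,\quad \hat q \equiv 0,\quad \hat K \equiv 0
  &&\text{(constant solution of (12))},\\
0 &= \frac{\partial H}{\partial c}\Big|_{c=\hat c(t)}
   = \frac{1}{\hat c(t)} - \hat p(t)
   \;\Longrightarrow\; \hat c(t) \equiv 1.
\end{align*}
```

Since $(x, c) \mapsto H$ is concave, $g$ is linear and $\hat{K} \equiv 0$, Theorem 3 would then give that the deterministic consumption rate $\hat{c} \equiv 1$ is strongly robust; being deterministic, it also satisfies the delayed-information condition (14) for any $\delta \ge 0$.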
