Science Journal of Applied Mathematics and Statistics
Volume 4, Issue 4, August 2016, Pages: 147-158

Quadratic Optimal Control of Fractional Stochastic Differential Equation with Application

Sameer Qasim Hasan, Gaeth Ali Salum

College of Education, Almustansryah University, Baghdad, Iraq


Sameer Qasim Hasan, Gaeth Ali Salum. Quadratic Optimal Control of Fractional Stochastic Differential Equation with Application. Science Journal of Applied Mathematics and Statistics. Vol. 4, No. 4, 2016, pp. 147-158. doi: 10.11648/j.sjams.20160404.15

Received: May 4, 2016; Accepted: June 3, 2016; Published: July 23, 2016

Abstract: The paper is devoted to the quadratic optimal control of fractional stochastic differential equations, with an application to an economic model, for different types of fractional stochastic formulas (Itô, Stratonovich). By using the Dynkin formula, the Hamilton-Jacobi-Bellman (HJB) equation and the converse HJB equation are derived. An application is given to a stochastic model in economics.

Keywords: Fractional Stochastic Differential Equations, Dynkin Formula, Hamilton-Jacobi-Bellman Equation

1. Introduction

In this paper the following controlled fractional stochastic differential equations are introduced:

1. $dx(t) = [H(t)x(t) + M(t)u(t)]\,dt + b(t)\,dB_H(t)$.

2. $dx(t) = [H(t)x(t) + M(t)u(t)]\,dt + b(t)\circ dB(t)$.

3. $dx(t) = [H(t)x(t) + M(t)u(t)]\,dt + b(t)\circ dB_H(t)$.

where $x(t)$, $t \ge 0$, is a given continuous process, $u(t)$ is a control process, $H(t)$ are $n \times n$ matrices, $M(t)$ are $n \times k$ matrices, $b(t)$ are $n \times m$ matrices, the control $u(t)$ is a $k \times 1$ vector, and $B_H(t)$ and $B(t)$ are fractional Brownian motion and Brownian motion respectively.
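As an illustration of how the first of these controlled equations can be simulated, the sketch below discretises a scalar version of $dx = [Hx + Mu(x)]\,dt + b\,dB_H$ with an Euler scheme, sampling exact fBm increments via a Cholesky factorisation of the fBm covariance. This is a minimal illustrative sketch under our own naming (it is not code from the paper); with $b = 0$ it reduces to the deterministic Euler method.

```python
import numpy as np

def fbm_increments(n, dt, hurst, rng):
    """Sample n increments of fractional Brownian motion with Hurst index H
    by Cholesky factorisation of the exact path covariance
    R_H(s, t) = 0.5 * (s^{2H} + t^{2H} - |t - s|^{2H})."""
    t = dt * np.arange(1, n + 1)
    s, tt = np.meshgrid(t, t)
    cov = 0.5 * (s**(2 * hurst) + tt**(2 * hurst) - np.abs(s - tt)**(2 * hurst))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))  # jitter for stability
    path = L @ rng.standard_normal(n)                 # fBm values at grid times
    return np.diff(np.concatenate(([0.0], path)))

def euler_linear_fsde(x0, H, M, u, b, T, n, hurst, seed=0):
    """Euler scheme for the scalar controlled equation
    dx = [H x + M u(x)] dt + b dB_H(t)  (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    dt = T / n
    dB = fbm_increments(n, dt, hurst, rng)
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        x[k + 1] = x[k] + (H * x[k] + M * u(x[k])) * dt + b * dB[k]
    return x
```

With zero noise and zero control the recursion is the classical Euler method for $dx = Hx\,dt$, which gives a quick correctness check.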

We present the Dynkin formula, which can be obtained from the Taylor formula for the above fractional stochastic differential equations and their generators. By using the Dynkin formula and the properties of expectation, the Hamilton-Jacobi-Bellman (HJB) equation and the converse HJB equation are stated. The stochastic optimal control for a stochastic differential delay equation was found in [1]. We give proofs of the Dynkin formula, the Hamilton-Jacobi-Bellman (HJB) equation, the converse HJB equation, and the optimal control for each of the above equations. For definitions related to optimal control see [2]; we also consider a Ramsey model [4,6] that takes into account randomness in the production cycle.

The models are described by the equations

1. $dk(t) = [H(t)k(t) + M(t)u(k(t))]\,dt + b(k(t))\,dB_H(t)$

2. $dk(t) = [H(t)k(t) + M(t)u(k(t))]\,dt + b(k(t)) \circ dB(t)$

3. $dk(t) = [H(t)k(t) + M(t)u(k(t))]\,dt + b(k(t)) \circ dB_H(t)$

where $k$ is the capital, $M$ is the production, $u$ is the control process, and $H(t)$ are $n \times n$ positive matrices. For these stochastic economic models the optimal control is obtained from the first-order condition $2G(t)u(t) + 2M(t)^{\top}Rx(t) = 0$, giving the feedback $u^*(t) = -G(t)^{-1}M(t)^{\top}Rx(t)$ for each of the three equations, and the optimal performance is the quadratic cost evaluated along this optimal feedback.
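The feedback law quoted above can be written as a one-line routine. The sketch below assumes $G(t)$ is invertible and the matrices are given numerically at a fixed time; the names follow the text, but the code is our illustration, not the paper's:

```python
import numpy as np

def optimal_feedback(G, M, R, x):
    """Feedback from the first-order condition 2 G u + 2 M^T R x = 0,
    i.e. u* = -G^{-1} M^T R x (G, M, R as in the cost and model above)."""
    return -np.linalg.solve(G, M.T @ (R @ x))
```

For scalar data (1x1 matrices) this reduces to $u^* = -MRx/G$.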

2. Definitions and Basic Concepts

Definition (1), [3]

A random experiment is a process that has random outcomes.

Definition (2), [3]

A sample space is the set of all possible outcomes of a random experiment and is denoted by Ω.

Definition (3), [3]

A $\sigma$-algebra F of subsets of a sample space Ω (which is the set of all possible outcomes) satisfies the following:

i. $\Omega \in F$.

ii. If $A \in F$, then $A^c \in F$, where $A^c$ is the complement of the set A.

iii. For any sequence $A_1, A_2, \ldots \in F$, both $\bigcup_{i=1}^{\infty} A_i \in F$ and $\bigcap_{i=1}^{\infty} A_i \in F$.

The elements of F are called measurable sets and the pair (Ω, F) is called a measurable space.

Definition (4), [3]

The probability p is a set function p: F → [0, 1], and p is called a probability measure if the following conditions hold:

i. $p(\Omega) = 1$.

ii. $p(A^c) = 1 - p(A)$.

iii. $p\big(\bigcup_{i=1}^{\infty} A_i\big) = \sum_{i=1}^{\infty} p(A_i)$ if $A_i \cap A_j = \emptyset$ for $i \neq j$.

Definition (5), [3]

The triplet (Ω, F, p), consisting of the sample space Ω, the $\sigma$-algebra F of subsets of Ω and a probability measure p defined on F, is called a probability space.

Definition (6), [3]

A random variable x on the probability space (Ω, F, p) is a function x: Ω → R such that the inverse image $x^{-1}(A) = \{\omega \in \Omega : x(\omega) \in A\} \in F$ for all open subsets A of R.

Definition (7), [3]

A stochastic process x: [0, T] × Ω → R on a probability space (Ω, F, p) is a function such that x(t, ·) is a random variable in (Ω, F, p) for all t ∈ (0, T). We will often write x(t) ≡ x(t, ·).

Definition (8), [3]

A stochastic process x = {x(t), t ∈ [0, T]} is said to be Gaussian if for all n ≥ 1 and all $t_1, t_2, \ldots, t_n \in [0, T]$, the vector $(x(t_1), x(t_2), \ldots, x(t_n))$ is a Gaussian random vector. If the mean of x equals zero, then x is said to be centered.

Definition (9), [3]

A stochastic process x(t), t ≥ 0, on a probability space (Ω, F, P) is adapted to the filtration $(F_t)_{t \ge 0}$ if for each t ≥ 0, x(t) is $F_t$-measurable.

Definition (10), [8]

The ordinary Brownian motion (or Wiener process) is a Gaussian process B = {B(t), t ≥ 0} with zero mean and covariance E(B(s)B(t)) = min{s, t}.

Definition (11), [5]

Let H be a constant belonging to (0, 1). A one-dimensional fractional Brownian motion $B_H = \{B_H(t), t \ge 0\}$ of Hurst index H is a continuous and centered Gaussian process with zero mean and covariance function

$R_H(s, t) = E\big(B_H(s)B_H(t)\big) = \tfrac{1}{2}\big(s^{2H} + t^{2H} - |t - s|^{2H}\big).$
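The covariance function above is easy to encode and sanity-check. A small sketch (our naming, for illustration only):

```python
def fbm_cov(s, t, hurst):
    """Covariance of fractional Brownian motion:
    R_H(s, t) = 0.5 * (s^{2H} + t^{2H} - |t - s|^{2H})."""
    return 0.5 * (s**(2 * hurst) + t**(2 * hurst) - abs(t - s)**(2 * hurst))
```

For H = 1/2 this reduces to the Brownian covariance min{s, t} of Definition (10), and on the diagonal it gives the variance $R_H(t, t) = t^{2H}$.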

Definition (12), [9]

Let S be the linear space of smooth cylindrical V-valued random variables on (Ω, F, P), such that if F ∈ S then it has the form

$F = f\big(W(h_1), \ldots, W(h_n)\big),$ (1)

where $h_1, \ldots, h_n \in V$, $f \in C_p^{\infty}(\mathbb{R}^n)$ = {smooth functions such that $f$ and all of its derivatives have polynomial growth}, and

$W(h) = \int_0^T h(t)\,dB_H(t).$ (2)

Definition (13), [9]

The derivative operator $D: S \to L^2(\Omega; V)$ is a linear operator which is given for $F \in S$ of the form (1) by

$D_t F = \sum_{i=1}^{n} \frac{\partial f}{\partial x_i}\big(W(h_1), \ldots, W(h_n)\big)\,h_i(t),$ (3)

where $t \in [0, T]$.

Definition (14), [2]

A measurable function $f: \mathbb{R}^n \to [0, \infty]$ is called supermeanvalued with respect to $x(t)$ if $f(x) \ge E^x[f(x(\tau))]$ for all stopping times $\tau$ and all $x \in \mathbb{R}^n$.

Remark (1), [2]

Let $g_1, \ldots, g_k$ be bounded Borel functions on $\mathbb{R}^n$, let $\tau$ be a stopping time and $F_\tau$ its associated $\sigma$-algebra; then

$E^x\big[g_1(x(t_1)) \cdots g_k(x(t_k)) \mid F_\tau\big] = E^{x(\tau)}\big[g_1(x(t_1)) \cdots g_k(x(t_k))\big]$ (4)

for all $0 \le t_1 \le \cdots \le t_k$. Let $g$ be the set of all real $F$-measurable functions. For $t \ge 0$ we define the shift operator $\theta_t: g \to g$ as follows: if $\eta = g_1(x(t_1))\,g_2(x(t_2)) \cdots g_k(x(t_k))$, where each $g_i$ is Borel measurable, then

$\theta_t \eta = g_1(x(t_1 + t))\,g_2(x(t_2 + t)) \cdots g_k(x(t_k + t)),$

and it follows from (4) that

$E^x[\theta_\tau \eta \mid F_\tau] = E^{x(\tau)}[\eta]$ (5)

for any stopping time $\tau$. Moreover, for stopping times $\tau_1 \le \tau_2$ the following property is satisfied:

$E^x[f(x(\tau_2))] = E^x\big[E^{x(\tau_1)}[f(x(\tau_2))]\big] \le E^x[f(x(\tau_1))],$ (6)

where $\tau_2 = \inf\{t > \tau_1 : x(t) \notin U\}$ for an open set U, so that

$E^x[f(x(\tau))] \le f(x),$ (7)

i.e. f is supermeanvalued.

3. Fractional Stochastic Differential Equation

Let a, b be continuous functionals defined on the metric space K, and let the fractional stochastic process x(t) satisfy the fractional stochastic differential equation

$dx(t) = a(x(t))\,dt + b(x(t))\,dB_H(t),$ (8)

where $B_H(t)$ is fractional Brownian motion.

Let H(t) be $n \times n$ matrices, M(t) $n \times k$ matrices, b(t) $n \times m$ matrices, and the control u(t) a $k \times 1$ vector. Let

$a(x(t)) = H(t)x(t) + M(t)u(t)$ (9)

and

$b(x(t)) = b(t);$ (10)

then from (9) and (10) the stochastic process x(t) in (8) satisfies the linear fractional stochastic differential equation

$dx(t) = [H(t)x(t) + M(t)u(t)]\,dt + b(t)\,dB_H(t).$ (11)

Remark (2), "The Itô Fractional Taylor Formula", [9]

Let x(t) be the stochastic process given as

$x(t) = x(0) + \int_0^t a(x(s))\,ds + \int_0^t b(x(s))\,dB_H(s),$ (12)

where a, b are continuous functionals defined on the metric space K, the Hurst parameter $H \in (\tfrac{1}{2}, 1)$, and V is a separable Hilbert space. Let f: V → V be a twice continuously differentiable function such that $f': V \to L(V, V)$ and $f'': V \to L(V, V)$, where $f'$ and $f''$ are the first and second derivatives respectively. For $p, q \in [0, t]$, the process f(x(t)) satisfies the Itô fractional Taylor formula

$f(x(t)) = f(x(0)) + \int_0^t f'(x(s))\,a(x(s))\,ds + \int_0^t f'(x(s))\,b(x(s))\,dB_H(s) + \int_0^t \int_0^p f''(x(p))\,b(q)b(p)\,\phi_H(p - q)\,dq\,dp,$ (13)

where $\phi_H(p - q) = H(2H - 1)|p - q|^{2H - 2}$. Taking the differential of both sides, one gets

$df(x(t)) = f'(x(t))\,a(x(t))\,dt + f'(x(t))\,b(x(t))\,dB_H(t) + \Big(\int_0^t f''(x(t))\,b(q)b(t)\,\phi_H(t - q)\,dq\Big)dt.$ (14)

Applying (9) and (10) to (14) gives the Itô formula

$df(x(t)) = f'(x(t))\big[H(t)x(t) + M(t)u(t)\big]dt + f'(x(t))\,b(t)\,dB_H(t) + \Big(\int_0^t f''(x(t))\,b(q)b(t)\,\phi_H(t - q)\,dq\Big)dt.$ (15)
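The double-integral term above generalises the classical Itô correction. As a quick numerical sanity check for the classical case H = 1/2, where the formula reduces to the ordinary Itô formula, the sketch below verifies $E[B(T)^2] = T$ for $f(x) = x^2$ by Monte Carlo (our illustration, standard Brownian motion only):

```python
import numpy as np

rng = np.random.default_rng(1)
paths, steps, T = 20000, 400, 1.0
dt = T / steps
# terminal values of standard Brownian motion over [0, T]
BT = np.sqrt(dt) * rng.standard_normal((paths, steps)).sum(axis=1)
# Ito formula with f(x) = x^2: d(B^2) = 2B dB + dt, hence E[B(T)^2] = T
est = (BT**2).mean()
```

The stochastic-integral term $2B\,dB$ has zero expectation, so only the correction term $dt$ survives in the mean.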

Definition (15), [5]

The generator $A^u$ of the fractional stochastic differential equation (8) is defined by

$A^u f(x) = \lim_{t \downarrow 0} \frac{E^x[f(x(t))] - f(x)}{t}.$ (16)

Remark (3)

Substituting equation (15) into equation (16) yields

$A^u f(x) = f'(x)\big[H(t)x(t) + M(t)u(t)\big] + \int_0^t f''(x)\,b(q)b(t)\,\phi_H(t - q)\,dq.$ (17)

3.1. Fractional Martingale Problem

If (8) is an Itô fractional stochastic differential equation with generator $A^u$ and $f \in C_0^2(\mathbb{R}^n)$, then the fractional martingale formula is

$M_t = f(x(t)) - f(x(0)) - \int_0^t A^u f(x(s))\,ds.$ (18)

3.2. Dynkin Formula for the Linear Quadratic Regulator Problem

Let $h \in C_0^2(\mathbb{R}^n)$, let C(t) be $n \times n$ matrices and G(t) $k \times k$ matrices. From equation (11) and equation (17) we obtain the following fractional Taylor formula for the function h(x(t)), where h(x(t)) is defined as

$h(x(t)) = x(t)^{\top}C(t)x(t) + u(t)^{\top}G(t)u(t),$ (19)

$h(x(t)) = h(x(0)) + \int_0^t A^u h(x(s))\,ds + \int_0^t h'(x(s))\,b(s)\,dB_H(s).$ (20)

Let T be a stopping time for the stochastic process x(t) defined in equation (12) such that $E\big(\int_0^T |A^u h(x(t))|\,dt\big) < \infty$. Taking the expectation of both sides, one gets the following Dynkin formula:

$E(h(x(T))) = h(x(0)) + E\Big[\int_0^T A^u h(x(t))\,dt\Big].$ (21)

3.3. The Quadratic Regulator Optimal Problem

Assume that the cost function of the fractional linear quadratic regulator problem is

$h(x, u) = E\Big(x(T)^{\top}R\,x(T) + \int_0^T \big(x(t)^{\top}C(t)x(t) + u(t)^{\top}G(t)u(t)\big)\,dt\Big),$ (22)

where C(t) are $n \times n$ matrices, G(t) are $k \times k$ matrices, and the control u(t) is a $k \times 1$ vector. We assume that C(t) and R are symmetric non-negative definite, G(t) is symmetric positive definite, and T is the final time of the solution, where x(t) is defined in (11), such that $|T| < \infty$. The problem is to find the optimal control $u^*(t)$ such that

$h(x, u^*(t)) = \min_u\{h(x, u)\}.$
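The cost (22) can be estimated path-by-path once the state and control are discretised in time. A scalar sketch of the discretised functional (our naming, not code from the paper):

```python
import numpy as np

def quadratic_cost(xs, us, dt, C, G, R):
    """Discretised scalar version of the cost (22):
    R x(T)^2 + sum over steps of (C x^2 + G u^2) dt.
    xs has one more entry than us (state at step ends)."""
    running = float(np.sum(C * xs[:-1]**2 + G * us**2) * dt)
    return R * xs[-1]**2 + running
```

Averaging this quantity over many simulated paths gives a Monte Carlo estimate of the expectation in (22).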

4. Hamilton-Jacobi-Bellman Equation for the Quadratic Regulator Problem

Consider the Markov control u(t) = u(x(t)); applied to h, the generator (17) becomes

$A^u h(x) = h'(x)\big[H(t)x(t) + M(t)u(t)\big] + \int_0^t \int_0^p h''(x)\,b(q)b(p)\,\phi_H(p - q)\,dq\,dp.$ (23)

Theorem (1) "HJB equation"

Define

$\phi(x) = \min_u\{h(x, u) : u = u(x)\ \text{Markov control}\}.$ (24)

Suppose that $h \in C_0^2(\mathbb{R}^n)$ and an optimal control $u^*$ exists. Then

$\min_u\big\{x(t)^{\top}C(t)x(t) + u(t)^{\top}G(t)u(t) + A^u\phi(x)\big\} = 0,$ (25)

where G(t) are $k \times k$ matrices, the control u(t) is a $k \times 1$ vector, the generator is given by equation (23), and

$\phi(x) = x^{\top}R\,x.$ (26)

The minimum is achieved when $u = u^*$ is optimal. In other words,

$x(t)^{\top}C(t)x(t) + u^*(t)^{\top}G(t)u^*(t) + A^{u^*}\phi(x) = 0.$ (27)

Proof

We proceed to prove (25). Let $\tau = T \wedge v$ be the first exit time of the solution x(t). Using (4) and (5),

$E^x[h(x(\tau), u)] = E^x\Big[\int_0^{\tau}\big(x(t)^{\top}C(t)x(t) + u(t)^{\top}G(t)u(t)\big)dt\Big] + E^x\big[E^{x(\tau)}[x(T)^{\top}R\,x(T)]\big],$

so that

$h(x, u) = E^x\Big[\int_0^{\tau}\big(x^{\top}Cx + u^{\top}Gu\big)dt\Big] + E^x[h(x(\tau), u)].$ (28)

By the Dynkin formula (21),

$E^x[h(x(\tau), u)] = h(x) + E^x\Big[\int_0^{\tau} A^u h(x(t))\,dt\Big],$

hence

$h(x, u) = E^x\Big[\int_0^{\tau}\big(x^{\top}Cx + u^{\top}Gu\big)dt\Big] + h(x) + E^x\Big[\int_0^{\tau} A^u h(x(t))\,dt\Big],$

or

$0 \le E^x\Big[\int_0^{\tau}\big(x^{\top}Cx + u^{\top}Gu\big)dt\Big] + E^x\Big[\int_0^{\tau} A^u\phi(x(t))\,dt\Big].$

Dividing by $E^x[\tau]$ and letting $\tau \downarrow 0$, we get

$0 \le x(t)^{\top}C(t)x(t) + u(t)^{\top}G(t)u(t) + A^u\phi(x),$

with equality when $u = u^*$ by (27), which proves (25).
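The minimisation in (25) is pointwise in u. A brute-force grid search over the scalar HJB integrand, checked against the first-order condition $u^* = -MRx/G$, illustrates this (our sketch with hypothetical scalar parameters; the fractional correction term is independent of u and is therefore omitted from the search):

```python
import numpy as np

def hjb_argmin_u(C, G, R, Hc, Mc, x):
    """Grid search of  u -> C x^2 + G u^2 + 2 R x (Hc x + Mc u),
    the scalar HJB integrand with phi(x) = R x^2.  Terms not involving u
    shift the value but not the minimiser."""
    grid = np.linspace(-10.0, 10.0, 200001)
    vals = C * x * x + G * grid**2 + 2.0 * R * x * (Hc * x + Mc * grid)
    return float(grid[np.argmin(vals)])
```

Differentiating the integrand in u gives $2Gu + 2M R x = 0$, so the grid minimiser should land on $u^* = -MRx/G$ up to the grid spacing.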

Theorem (2) (converse of the HJB equation)

Let $\phi(x)$ be a bounded function in $C^2(G) \cap C(\bar{G})$. Suppose that for all $u \in Y$, where Y is the set of controls, the inequality

$x(t)^{\top}C(t)x(t) + u(t)^{\top}G(t)u(t) + A^u\phi(x) \ge 0$

holds. Then $\phi(x) \le h(x, u)$ for all $u \in Y$. Moreover, if

$x(t)^{\top}C(t)x(t) + u^*(t)^{\top}G(t)u^*(t) + A^{u^*}\phi(x) = 0,$

then $u^*$ is an optimal control.

Proof

Let u be a Markov control. Then

$A^u\phi(x) \ge -\big[x(t)^{\top}C(t)x(t) + u(t)^{\top}G(t)u(t)\big]$ for $u \in Y$.

By equation (21),

$E^x[\phi(x(\tau))] = \phi(x) + E^x\Big[\int_0^{\tau} A^u\phi(x(t))\,dt\Big] \ge \phi(x) - E^x\Big[\int_0^{\tau}\big(x^{\top}Cx + u^{\top}Gu\big)dt\Big].$

Thus

$\phi(x) \le E^x\Big[\int_0^{\tau}\big(x^{\top}Cx + u^{\top}Gu\big)dt + \phi(x(\tau))\Big] = h(x, u),$

with equality when the hypothesis holds with $u = u^*$; hence $u^*$ is an optimal control.

5. Application 1: Economic Model and Its Optimization (Fractional Stochastic Differential Equation)

In 1928 F. P. Ramsey introduced an economic model describing the rate of change of capital K and labor L in a market by a system of ordinary differential equations, with P and C being the production and consumption rates respectively. The model is given by

$\frac{dK}{dt} = p(t) - C(t), \qquad \frac{dL}{dt} = a(t)L(t),$ (29)

where a(t) is the rate of growth of labor.

The production, capital and labor are related by the Cobb-Douglas formula

$p(t) = A\,K^{\alpha}(t)\,L^{\beta}(t),$

where A, α and β are some positive constants.

In certain markets the dependence of P on K and L is linear; this means $\alpha = \beta = 1$, which will be our assumption throughout this section. We shall also assume that the labor is constant, $L(t) = L_0$, which is true for certain markets or for relatively short time intervals of several years.

Therefore the production rate and the capital are related by $p(t) = H(t)k(t)$ [1].

Another important assumption we make is that the production rate is subject to small random disturbances, i.e. $p(t)\,dt = H(t)k(t)\,dt + b(k(t))\,dB_H(t)$. Therefore

$dk(t) = H(t)k(t)\,dt + b(k(t))\,dB_H(t) - C(t)\,dt.$

Setting $M(t) = -C(t)$, this can be rewritten in the differential form

$dk(t) = [H(t)k(t) + M(t)]\,dt + b(k(t))\,dB_H(t),$

where $B_H(t)$ is fractional Brownian motion and b(k(t)) is a real function, characteristic of the noise.

Assume that M(t) can be controlled:

$dk(t) = [H(t)k(t) + M(t)u(k(t))]\,dt + b(k(t))\,dB_H(t).$ (30)

Usually one wants to minimize a cost function; let us choose the cost function

$h(x, u) = E\Big(x(T)^{\top}R\,x(T) + \int_0^T\big(x(t)^{\top}C(t)x(t) + u(t)^{\top}G(t)u(t)\big)dt\Big).$

Let $\phi(x) = x^{\top}R\,x$. The generator acting on φ is

$A^u\phi(x) = 2Rx(t)\big[H(t)x(t) + M(t)u(t)\big] + \int_0^t 2R\,b(q)b(t)\,\phi_H(t - q)\,dq,$

since

$\phi'(x)\,a(x) = 2Rx(t)\big[H(t)x(t) + M(t)u(t)\big],$

$\phi'''(x) = 0,$

$\phi''(x)\,b(q)b(t)\,\phi_H(t - q) = 2R\,b(q)b(t)\,\phi_H(t - q).$

By Theorem (1),

$\min_u\Big\{x^{\top}C(t)x + u^{\top}G(t)u + 2Rx(t)H(t)x(t) + 2Rx(t)M(t)u(t) + \int_0^t 2R\,b(q)b(t)\,\phi_H(t - q)\,dq\Big\} = 0.$

Taking the derivative of the bracketed expression with respect to u and setting it to zero, one gets

$2G(t)u(t) + 2Rx(t)M(t) = 0, \qquad u^*(t) = -G(t)^{-1}M(t)^{\top}R\,x(t),$

which is the optimal control for the linear-quadratic fractional Brownian motion differential equation, and the optimal cost function is

$h(x, u^*) = E\Big(x(T)^{\top}Rx(T) + \int_0^T x(t)^{\top}\big(C(t) + RM(t)G(t)^{-1}M(t)^{\top}R\big)x(t)\,dt\Big).$

6. Stratonovich Stochastic Differential Equation

Let a, b be continuous functionals defined on the metric space K, and let the stochastic process x(t) satisfy the Stratonovich stochastic differential equation

$dx(t) = \bar{a}(x(t))\,dt + b(x(t)) \circ dB(t),$ (31)

where

$\bar{a}(x(t)) = a(x(t)) - \tfrac{1}{2}\,b(x(t))\,b'(x(t)),$ (32)

$b(x(t)) \circ dB(t) = b(x(t))\,dB(t) + \tfrac{1}{2}\,b(x(t))\,b'(x(t))\,dt,$ (33)

and B(t) is Brownian motion.

Let H(t) be $n \times n$ matrices, M(t) $n \times k$ matrices, b(t) $n \times m$ matrices, and the control u(t) a $k \times 1$ vector. Let $a(x(t)) = H(t)x(t) + M(t)u(t)$ and $b(x(t)) = b(t)$; then (32) becomes

$\bar{a}(x(t)) = H(t)x(t) + M(t)u(t) - \tfrac{1}{2}\,b(t)\,b'(t),$ (34)

and equation (33) becomes

$b(t) \circ dB(t) = b(t)\,dB(t) + \tfrac{1}{2}\,b(t)\,b'(t)\,dt.$ (35)

Then from (34) and (35) the stochastic process x(t) in (31) satisfies the linear Stratonovich stochastic differential equation

$dx(t) = \bar{a}(x(t))\,dt + b(t) \circ dB(t).$ (36)
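Numerically, Stratonovich equations such as (36) are naturally approximated by predictor-corrector (Heun) schemes, which converge to the Stratonovich rather than the Itô solution without adding the drift correction by hand. A minimal scalar sketch (our naming, for illustration):

```python
import numpy as np

def heun_stratonovich(x0, a, b, T, n, seed=0):
    """Stochastic Heun (predictor-corrector) scheme for the Stratonovich
    equation dx = a(x) dt + b(x) o dB, scalar version."""
    rng = np.random.default_rng(seed)
    dt = T / n
    x = x0
    for _ in range(n):
        dB = np.sqrt(dt) * rng.standard_normal()
        xp = x + a(x) * dt + b(x) * dB                        # predictor step
        x = x + 0.5 * (a(x) + a(xp)) * dt + 0.5 * (b(x) + b(xp)) * dB
    return x
```

With $b \equiv 0$ the scheme reduces to the deterministic Heun method, which gives a quick correctness check against the exact solution of $dx = -x\,dt$.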

Remark (4), "The Itô-Stratonovich Taylor Formula"

Let the stochastic process x(t) be defined as

$x(t) = x(0) + \int_0^t \bar{a}(x(s))\,ds + \int_0^t b(x(s)) \circ dB(s),$ (37)

where $\bar{a}(x(t))$ and $b(t) \circ dB(t)$ are defined in equation (32) and equation (33) respectively, and a, b are continuous functionals defined on the metric space K. Then x(t) satisfies the Itô-Stratonovich Taylor formula: for $f: \mathbb{R} \to \mathbb{R}$,

$f(x(t)) = f(x(0)) + \int_0^t f'(x(s))\,\bar{a}(x(s))\,ds + \int_0^t f'(x(s))\,b(x(s)) \circ dB(s).$ (38)

Applying (34) and (35) to (38) gives the Itô formula

$f(x(t)) = f(x(0)) + \int_0^t f'(x(s))\big[H(s)x(s) + M(s)u(s) - \tfrac{1}{2}b(s)b'(s)\big]ds + \int_0^t f'(x(s))\,b(s)\,dB(s) + \int_0^t \tfrac{1}{2}\,b^2(s)\,f''(x(s))\,ds.$ (39)

Taking the differential of both sides, one gets

$df(x(t)) = f'(x(t))\big[H(t)x(t) + M(t)u(t) - \tfrac{1}{2}b(t)b'(t)\big]dt + f'(x(t))\,b(t)\,dB(t) + \tfrac{1}{2}\,b^2(t)\,f''(x(t))\,dt.$ (40)

Remark (5)

By Definition (15), the generator of the Stratonovich stochastic differential equation is

$A^u f(x) = f'(x)\big[H(t)x(t) + M(t)u(t) - \tfrac{1}{2}b(t)b'(t)\big] + \tfrac{1}{2}\,b^2(t)\,f''(x).$ (41)

6.1. The Martingale Problem

If (31) is a Stratonovich stochastic differential equation with generator $A^u$ and $f \in C_0^2(\mathbb{R}^n)$, then

$f(x(t)) = f(x(0)) + \int_0^t A^u f(x(s))\,ds + \int_0^t f'(x(s))\,b(s)\,dB(s).$ (42)

6.2. Dynkin Formula for the Linear Stratonovich Quadratic Regulator Problem

Let $h \in C_0^2(\mathbb{R}^n)$, let C(t) be $n \times n$ matrices and G(t) $k \times k$ matrices. From (36) and (41) we obtain the following Stratonovich formula for the function h(x(t)), where h(x(t)) is defined as

$h(x(t)) = x(t)^{\top}C(t)x(t) + u(t)^{\top}G(t)u(t),$ (43)

$h(x(t)) = h(x(0)) + \int_0^t A^u h(x(s))\,ds + \int_0^t h'(x(s))\,b(s)\,dB(s).$ (44)

Let T be a stopping time for the stochastic process x(t) such that $E\big(\int_0^T |A^u h(x(t))|\,dt\big) < \infty$. Taking the expectation of both sides, one gets the following Dynkin formula:

$E(h(x(T))) = h(x(0)) + E\Big[\int_0^T A^u h(x(t))\,dt\Big].$ (45)

6.3. The Stratonovich Stochastic Quadratic Regulator Optimal Problem

Assume that the cost of the linear quadratic regulator problem is

$h(x, u) = E\Big(x(T)^{\top}R\,x(T) + \int_0^T\big(x(t)^{\top}C(t)x(t) + u(t)^{\top}G(t)u(t)\big)dt\Big),$ (46)

where C(t) are $n \times n$ matrices, G(t) are $k \times k$ matrices, and the control u(t) is a $k \times 1$ vector. We assume that C(t) and R are symmetric non-negative definite, G(t) is symmetric positive definite, and T is the final time of the solution, where x(t) is defined in (36), such that $|T| < \infty$. The problem is to find the optimal control $u^*(t)$ such that

$h(x, u^*(t)) = \min_u\{h(x, u)\}.$ (47)

6.4. Hamilton-Jacobi-Bellman Equation for the Quadratic Regulator Problem

Let the optimal control $u^*(t) \in Y$, where Y is the set of controls; then the generator in equation (41) becomes

$A^{u^*} f(x) = f'(x)\big[H(t)x(t) + M(t)u^*(t) - \tfrac{1}{2}b(t)b'(t)\big] + \tfrac{1}{2}\,b^2(t)\,f''(x).$ (48)

Theorem (3) "HJB equation"

Define

$\phi(x) = \min_u\{h(x, u) : u = u(x)\ \text{Markov control}\}.$ (49)

Suppose that $h \in C_0^2(\mathbb{R}^n)$ and an optimal control $u^*$ exists. Then

$\min_u\big\{x(t)^{\top}C(t)x(t) + u(t)^{\top}G(t)u(t) + A^u\phi(x)\big\} = 0,$ (50)

where G(t) are $k \times k$ matrices, the control u(t) is a $k \times 1$ vector, the generator is given in equation (48), and

$\phi(x) = x^{\top}R\,x.$ (51)

The minimum is achieved when $u = u^*$ is optimal. In other words,

$x(t)^{\top}C(t)x(t) + u^*(t)^{\top}G(t)u^*(t) + A^{u^*}\phi(x) = 0.$ (52)

Proof

We proceed to prove (50). Let $\tau = T \wedge v$ be the first exit time of the solution x(t). Using (4) and (5),

$E^x[h(x(\tau), u)] = E^x\Big[\int_0^{\tau}\big(x(t)^{\top}C(t)x(t) + u(t)^{\top}G(t)u(t)\big)dt\Big] + E^x\big[E^{x(\tau)}[x(T)^{\top}R\,x(T)]\big],$

so that

$h(x, u) = E^x\Big[\int_0^{\tau}\big(x^{\top}Cx + u^{\top}Gu\big)dt\Big] + E^x[h(x(\tau), u)].$ (53)

By the Dynkin formula (45),

$E^x[h(x(\tau), u)] = h(x) + E^x\Big[\int_0^{\tau} A^u h(x(t))\,dt\Big],$

hence

$0 \le E^x\Big[\int_0^{\tau}\big(x^{\top}Cx + u^{\top}Gu\big)dt\Big] + E^x\Big[\int_0^{\tau} A^u\phi(x(t))\,dt\Big].$

Dividing by $E^x[\tau]$ and letting $\tau \downarrow 0$ gives

$0 \le x(t)^{\top}C(t)x(t) + u(t)^{\top}G(t)u(t) + A^u\phi(x),$

with equality when $u = u^*$ by (52), which proves (50).

Theorem (4) (converse of the HJB equation)

Let $\phi(x)$ be a bounded function in $C^2(G) \cap C(\bar{G})$. Suppose that for all $u \in Y$, where Y is the set of controls, the inequality

$x(t)^{\top}C(t)x(t) + u(t)^{\top}G(t)u(t) + A^u\phi(x) \ge 0$

holds. Then $\phi(x) \le h(x, u)$ for all $u \in Y$. Moreover, if

$x(t)^{\top}C(t)x(t) + u^*(t)^{\top}G(t)u^*(t) + A^{u^*}\phi(x) = 0,$

then $u^*$ is an optimal control.

Proof

Let u be a Markov control. Then

$A^u\phi(x) \ge -\big[x(t)^{\top}C(t)x(t) + u(t)^{\top}G(t)u(t)\big]$ for $u \in Y$.

By equation (45),

$E^x[\phi(x(\tau))] = \phi(x) + E^x\Big[\int_0^{\tau} A^u\phi(x(t))\,dt\Big] \ge \phi(x) - E^x\Big[\int_0^{\tau}\big(x^{\top}Cx + u^{\top}Gu\big)dt\Big].$

Thus

$\phi(x) \le E^x\Big[\int_0^{\tau}\big(x^{\top}Cx + u^{\top}Gu\big)dt + \phi(x(\tau))\Big] = h(x, u),$

with equality when the hypothesis holds with $u = u^*$; hence $u^*$ is an optimal control.

7. Application 2: Economic Model with a Brownian Stratonovich Differential Equation

In 1928 F. P. Ramsey introduced an economic model describing the rate of change of capital K and labor L in a market by a system of ordinary differential equations, with P and C being the production and consumption rates respectively. The model is given by

$\frac{dK}{dt} = p(t) - C(t), \qquad \frac{dL}{dt} = a(t)L(t),$ (54)

where a(t) is the rate of growth of labor.

The production, capital and labor are related by the Cobb-Douglas formula

$p(t) = A\,K^{\alpha}(t)\,L^{\beta}(t),$

where A, α and β are some positive constants. In certain markets the dependence of P on K and L is linear; this means $\alpha = \beta = 1$, which will be our assumption throughout this section. We shall also assume that the labor is constant, $L(t) = L_0$, which is true for certain markets or for relatively short time intervals of several years.

Therefore the production rate and the capital are related by

$p(t) = H(t)k(t)$ [1].

Another important assumption we make is that the production rate is subject to small random disturbances, i.e. $p(t)\,dt = H(t)k(t)\,dt + b(k(t)) \circ dB(t)$. Therefore

$dk(t) = H(t)k(t)\,dt + b(k(t)) \circ dB(t) - C(t)\,dt.$

Setting $M(t) = -C(t)$, this can be rewritten in the differential form

$dk(t) = [H(t)k(t) + M(t)]\,dt + b(k(t)) \circ dB(t),$ (55)

where B(t) is Brownian motion and b(k(t)) is a real function, characteristic of the noise. Assuming that M(t) can be controlled, equation (55) becomes

$dk(t) = [H(t)k(t) + M(t)u(k(t))]\,dt + b(k(t)) \circ dB(t).$ (56)

Usually one wants to minimize the cost function (46). Let $\phi(x) = x^{\top}R\,x$, and let $\phi(x) \in D(A^u)$; then $A^u\phi$ is

$A^u\phi(x) = \phi'(x)\big[H(t)x(t) + M(t)u(t) - \tfrac{1}{2}b(k(t))b'(k(t))\big] + \tfrac{1}{2}\,b^2(k(t))\,\phi''(x),$

$A^u\phi(x) = 2Rx(t)H(t)x(t) + 2Rx(t)M(t)u(t) - b(k(t))b'(k(t))\,R\,x(t) + b^2(k(t))\,R.$

Then equation (50) becomes

$x(t)^{\top}C(t)x(t) + u(t)^{\top}G(t)u(t) + 2Rx(t)H(t)x(t) + 2Rx(t)M(t)u(t) - b(k(t))b'(k(t))Rx(t) + b^2(k(t))R = 0.$

Taking the derivative of both sides with respect to u and setting it to zero, one gets

$2G(t)u(t) + 2Rx(t)M(t) = 0, \qquad u^*(t) = -G(t)^{-1}M(t)^{\top}R\,x(t),$

which is an optimal control for the Stratonovich stochastic linear quadratic differential equation, and the optimal cost function is

$h(x, u^*) = E\Big(x(T)^{\top}Rx(T) + \int_0^T x(t)^{\top}\big(C(t) + RM(t)G(t)^{-1}M(t)^{\top}R\big)x(t)\,dt\Big).$
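The effect of the feedback $u^*(t) = -G^{-1}M^{\top}Rx(t)$ can be checked by Monte Carlo: simulate the scalar controlled model with and without control and compare the estimated costs. This is an illustrative sketch with constant noise intensity and hypothetical parameter values, not the paper's computation:

```python
import numpy as np

def mc_cost(feedback, Hc, Mc, bc, C, G, R, x0=1.0, T=1.0, n=200, m=2000, seed=3):
    """Monte Carlo estimate of E[R x(T)^2 + int_0^T (C x^2 + G u^2) dt] for the
    scalar controlled model dk = (Hc k + Mc u(k)) dt + bc dB, using an
    Euler-Maruyama discretisation with constant noise bc."""
    rng = np.random.default_rng(seed)
    dt = T / n
    total = 0.0
    for _ in range(m):
        x, run = x0, 0.0
        for _ in range(n):
            u = feedback(x)
            run += (C * x * x + G * u * u) * dt          # running cost
            x += (Hc * x + Mc * u) * dt + bc * np.sqrt(dt) * rng.standard_normal()
        total += R * x * x + run                          # add terminal cost
    return total / m
```

With an unstable drift (Hc > 0) the feedback $u = -M R x / G$ stabilises the state, so its estimated cost should be clearly below that of the uncontrolled model.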

8. Fractional Stratonovich Stochastic Differential Equation

Let a, b be continuous functionals defined on the metric space K, and let the fractional stochastic process x(t) satisfy the fractional Stratonovich stochastic differential equation

$dx(t) = \bar{a}(x(t))\,dt + b(x(t)) \circ dB_H(t),$ (57)

where

$\bar{a}(x(t)) = a(x(t)) - \tfrac{1}{2}\,b(x(t))\,b'(x(t)),$ (58)

$b(x(t)) \circ dB_H(t) = b(x(t))\,dB_H(t) + \tfrac{1}{2}\,b(x(t))\,b'(x(t))\,dt,$ (59)

and $B_H(t)$ is fractional Brownian motion.

Let H(t) be $n \times n$ matrices, M(t) $n \times k$ matrices, b(t) $n \times m$ matrices, and the control u(t) a $k \times 1$ vector. Let $a(x(t)) = H(t)x(t) + M(t)u(t)$ and $b(x(t)) = b(t)$; then (58) becomes

$\bar{a}(x(t)) = H(t)x(t) + M(t)u(t) - \tfrac{1}{2}\,b(t)\,b'(t),$ (60)

and equation (59) becomes

$b(t) \circ dB_H(t) = b(t)\,dB_H(t) + \tfrac{1}{2}\,b(t)\,b'(t)\,dt.$ (61)

Then from equation (60) and equation (61) the fractional stochastic process x(t) in equation (57) satisfies the linear fractional Stratonovich stochastic differential equation

$dx(t) = \bar{a}(x(t))\,dt + b(t) \circ dB_H(t).$ (62)

Remark (6), "The Itô Fractional Stratonovich Taylor Formula"

Let the stochastic process x(t) be defined as

$x(t) = x(0) + \int_0^t \bar{a}(x(s))\,ds + \int_0^t b(x(s)) \circ dB_H(s),$ (63)

where $\bar{a}(x(t))$ and $b(t) \circ dB_H(t)$ are defined in equation (58) and equation (59) respectively, and a, b are continuous functionals defined on the metric space K. Then x(t) satisfies the Itô fractional Stratonovich Taylor formula: for $f: \mathbb{R} \to \mathbb{R}$,

$f(x(t)) = f(x(0)) + \int_0^t f'(x(s))\,\bar{a}(x(s))\,ds + \int_0^t f'(x(s))\,b(x(s)) \circ dB_H(s) + \int_0^t \int_0^p f''(x(p))\,b(q)b(p)\,\phi_H(p - q)\,dq\,dp.$ (64)

Applying equation (58) and equation (59) to (64) gives the Itô formula

$f(x(t)) = f(x(0)) + \int_0^t f'(x(s))\big[H(s)x(s) + M(s)u(s) - \tfrac{1}{2}b(s)b'(s)\big]ds + \int_0^t f'(x(s))\,b(s)\,dB_H(s) + \int_0^t \tfrac{1}{2}\,f'(x(s))\,b(s)\,b'(s)\,ds + \int_0^t \int_0^p f''(x(p))\,b(q)b(p)\,\phi_H(p - q)\,dq\,dp.$ (65)

Taking the differential of both sides, one gets

$df(x(t)) = f'(x(t))\big[H(t)x(t) + M(t)u(t)\big]dt + f'(x(t))\,b(t)\,dB_H(t) + \Big(\int_0^t f''(x(t))\,b(q)b(t)\,\phi_H(t - q)\,dq\Big)dt,$ (66)

since the two Stratonovich correction terms $\mp\tfrac{1}{2}f'(x)\,b(t)\,b'(t)$ cancel.

Remark (7)

Substituting equation (66) into equation (16), one gets

$A^u f(x) = f'(x)\big[H(t)x(t) + M(t)u(t)\big] + \int_0^t f''(x)\,b(q)b(t)\,\phi_H(t - q)\,dq.$ (67)

Let $h \in C_0^2(\mathbb{R}^n)$, let C(t) be $n \times n$ matrices and G(t) $k \times k$ matrices. From equation (62) and equation (67) we obtain the following Stratonovich formula for the function h(x(t)), where h(x(t)) is defined as

$h(x(t)) = x(t)^{\top}C(t)x(t) + u(t)^{\top}G(t)u(t),$ (68)

$h(x(t)) = h(x(0)) + \int_0^t \Big(2C(t)x(t)H(t)x(t) + 2C(t)x(t)M(t)u(t) + \int_0^t 2C(t)\,b(q)b(t)\,\phi_H(t - q)\,dq\Big)dt + \int_0^t 2C(t)x(t)\,b(t)\,dB_H(t).$ (69)

Then equation (67) applied to h becomes

$A^u h(x) = 2C(t)x(t)H(t)x(t) + 2C(t)x(t)M(t)u(t) + \int_0^t 2C(t)\,b(q)b(t)\,\phi_H(t - q)\,dq.$ (70)

8.1. The Fractional Stratonovich Martingale Problem

If (62) is an Itô fractional Stratonovich stochastic differential equation with generator $A^u$ and $f \in C_0^2(\mathbb{R}^n)$, then

$f(x(t)) = f(x(0)) + \int_0^t A^u f(x(s))\,ds + \int_0^t f'(x(s))\,b(s)\,dB_H(s).$ (71)

8.2. Dynkin Formula for the Fractional Linear Stratonovich Quadratic Regulator Problem

Let $h \in C_0^2(\mathbb{R}^n)$, let C(t) be $n \times n$ matrices and G(t) $k \times k$ matrices. From equation (62) and equation (67) we obtain the following Stratonovich formula for the function h(x(t)), where h(x(t)) is defined as

$h(x(t)) = x(t)^{\top}C(t)x(t) + u(t)^{\top}G(t)u(t),$ (72)

$h(x(t)) = h(x(0)) + \int_0^t \Big(2C(t)x(t)H(t)x(t) + 2C(t)x(t)M(t)u(t) + \int_0^t 2C(t)\,b(q)b(t)\,\phi_H(t - q)\,dq\Big)dt + \int_0^t 2C(t)x(t)\,b(t)\,dB_H(t).$ (73)

Let T be a stopping time for the stochastic process x(t) such that $E\big(\int_0^T |A^u h(x(t))|\,dt\big) < \infty$. Taking the expectation of both sides, one gets the following Dynkin formula:

$E(h(x(T))) = h(x(0)) + E\Big[\int_0^T A^u h(x(t))\,dt\Big].$ (74)

8.3. The Fractional Stochastic Quadratic Regulator Optimal Problem with the Stratonovich Formula

We assume that the cost of the linear quadratic regulator problem is

$h(x, u) = E\Big(x(T)^{\top}R\,x(T) + \int_0^T\big(x(t)^{\top}C(t)x(t) + u(t)^{\top}G(t)u(t)\big)dt\Big),$ (75)

where C(t) are $n \times n$ matrices, G(t) are $k \times k$ matrices, and the control u(t) is a $k \times 1$ vector. We assume that C(t) and R are symmetric non-negative definite, G(t) is symmetric positive definite, and T is the final time of the solution, where x(t) is defined in (62), such that $|T| < \infty$. The problem is to find the optimal control $u^*(t)$ such that

$h(x, u^*(t)) = \min_u\{h(x, u)\}.$ (76)

9. Hamilton-Jacobi-Bellman Equation for the Fractional Stochastic Quadratic Regulator Problem

Let the optimal control $u^*(t) \in Y$, where Y is the set of controls; then the generator in equation (70) becomes

$A^{u^*} h(x) = 2C(t)x(t)H(t)x(t) + 2C(t)x(t)M(t)u^*(t) + \int_0^t 2C(t)\,b(q)b(t)\,\phi_H(t - q)\,dq.$ (77)

Theorem (5) "HJB equation"

Define

$\phi(x) = \min_u\{h(x, u) : u = u(x)\ \text{Markov control}\}.$ (78)

Suppose that $h \in C_0^2(\mathbb{R}^n)$ and an optimal control $u^*$ exists. Then

$\min_u\big\{x(t)^{\top}C(t)x(t) + u(t)^{\top}G(t)u(t) + A^u\phi(x)\big\} = 0,$ (79)

where G(t) are $k \times k$ matrices, the control u(t) is a $k \times 1$ vector, the generator is given by equation (77), and

$\phi(x) = x^{\top}R\,x.$ (80)

The minimum is achieved when $u = u^*$ is optimal. In other words,

$x(t)^{\top}C(t)x(t) + u^*(t)^{\top}G(t)u^*(t) + A^{u^*}\phi(x) = 0.$ (81)

Proof

We proceed to prove equation (79). Let $\tau = T \wedge v$ be the first exit time of the solution x(t). Using (4) and (5),

$E^x[h(x(\tau), u)] = E^x\Big[\int_0^{\tau}\big(x(t)^{\top}C(t)x(t) + u(t)^{\top}G(t)u(t)\big)dt\Big] + E^x\big[E^{x(\tau)}[x(T)^{\top}R\,x(T)]\big],$

so that

$h(x, u) = E^x\Big[\int_0^{\tau}\big(x^{\top}Cx + u^{\top}Gu\big)dt\Big] + E^x[h(x(\tau), u)].$ (82)

By the Dynkin formula (74),

$E^x[h(x(\tau), u)] = h(x) + E^x\Big[\int_0^{\tau} A^u h(x(t))\,dt\Big],$

hence

$0 \le E^x\Big[\int_0^{\tau}\big(x^{\top}Cx + u^{\top}Gu\big)dt\Big] + E^x\Big[\int_0^{\tau} A^u\phi(x(t))\,dt\Big].$

Dividing by $E^x[\tau]$ and letting $\tau \downarrow 0$, we get

$0 \le x(t)^{\top}C(t)x(t) + u(t)^{\top}G(t)u(t) + A^u\phi(x),$

with equality when $u = u^*$ by (81), which proves (79).

Theorem (6) (converse of the HJB equation)

Let $\phi(x)$ be a bounded function in $C^2(G) \cap C(\bar{G})$. Suppose that for all $u \in Y$, where Y is the set of controls, the inequality

$x(t)^{\top}C(t)x(t) + u(t)^{\top}G(t)u(t) + A^u\phi(x) \ge 0$

holds. Then $\phi(x) \le h(x, u)$ for all $u \in Y$. Moreover, if

$x(t)^{\top}C(t)x(t) + u^*(t)^{\top}G(t)u^*(t) + A^{u^*}\phi(x) = 0,$

then $u^*$ is an optimal control.

Proof

Let u be a Markov control. Then

$A^u\phi(x) \ge -\big[x(t)^{\top}C(t)x(t) + u(t)^{\top}G(t)u(t)\big]$ for $u \in Y$.

By equation (74),

$E^x[\phi(x(\tau))] = \phi(x) + E^x\Big[\int_0^{\tau} A^u\phi(x(t))\,dt\Big] \ge \phi(x) - E^x\Big[\int_0^{\tau}\big(x^{\top}Cx + u^{\top}Gu\big)dt\Big].$

Thus

$\phi(x) \le E^x\Big[\int_0^{\tau}\big(x^{\top}Cx + u^{\top}Gu\big)dt + \phi(x(\tau))\Big] = h(x, u),$

with equality when the hypothesis holds with $u = u^*$; hence $u^*$ is an optimal control.

10. Application 3: Economic Model with a Fractional Stratonovich Differential Equation

In 1928 F. P. Ramsey introduced an economic model describing the rate of change of capital K and labor L in a market by a system of ordinary differential equations, with P and C being the production and consumption rates respectively. The model is given by

$\frac{dK}{dt} = p(t) - C(t), \qquad \frac{dL}{dt} = a(t)L(t),$ (83)

where a(t) is the rate of growth of labor.

The production, capital and labor are related by the Cobb-Douglas formula

$p(t) = A\,K^{\alpha}(t)\,L^{\beta}(t),$

where A, α and β are some positive constants.

In certain markets the dependence of P on K and L is linear; this means $\alpha = \beta = 1$, which will be our assumption throughout this section. We shall also assume that the labor is constant, $L(t) = L_0$, which is true for certain markets or for relatively short time intervals of several years.

Therefore the production rate and the capital are related by

$p(t) = H(t)k(t),$ [1]

Another important assumption we make is that the production rate is subject to small random disturbances, i.e. $p(t)\,dt = H(t)k(t)\,dt + b(k(t)) \circ dB_H(t)$. Therefore

$dk(t) = H(t)k(t)\,dt + b(k(t)) \circ dB_H(t) - C(t)\,dt.$

Setting $M(t) = -C(t)$, this can be rewritten in the differential form

$dk(t) = [H(t)k(t) + M(t)]\,dt + b(k(t)) \circ dB_H(t),$ (84)

where $B_H(t)$ is fractional Brownian motion and b(k(t)) is a real function, characteristic of the noise. Assuming that M(t) can be controlled, equation (84) becomes

$dk(t) = [H(t)k(t) + M(t)u(k(t))]\,dt + b(k(t)) \circ dB_H(t).$ (85)

Usually one wants to minimize the cost function (75). Let $g(x(T)) = x(T)^{\top}R\,x(T)$, let $\phi(x) \in D(A^u)$, and from Definition (15) the generator (67) becomes

$A^u\phi(x) = 2Rx(t)H(t)x(t) + 2Rx(t)M(t)u(t) + \int_0^t 2R\,b(q)b(t)\,\phi_H(t - q)\,dq.$ (86)

Then (81) becomes

$x(t)^{\top}C(t)x(t) + u(t)^{\top}G(t)u(t) + A^u\phi(x) = 0,$ (87)

that is,

$x(t)^{\top}C(t)x(t) + u(t)^{\top}G(t)u(t) + 2Rx(t)H(t)x(t) + 2Rx(t)M(t)u(t) + \int_0^t 2R\,b(q)b(t)\,\phi_H(t - q)\,dq = 0.$

Taking the derivative of both sides with respect to u, one gets

$2G(t)u(t) + 2Rx(t)M(t) = 0, \qquad u^*(t) = -G(t)^{-1}M(t)^{\top}R\,x(t),$

which is the optimal control for the linear-quadratic fractional Brownian motion differential equation, and the optimal cost function is

$h(x, u^*) = E\Big(x(T)^{\top}Rx(T) + \int_0^T x(t)^{\top}\big(C(t) + RM(t)G(t)^{-1}M(t)^{\top}R\big)x(t)\,dt\Big).$

References

1. A. F. Ivanov and A. V. Swishchuk, "Optimal control of stochastic differential delay equations," preprint, December 2003, 6 pp. (Applied Mathematics Letters, submitted).
2. B. Øksendal, Stochastic Differential Equations: An Introduction with Applications, Springer-Verlag, Berlin Heidelberg New York, 2000.
3. E. Allen, Modeling with Itô Stochastic Differential Equations, Springer, 2007.
4. F. P. Ramsey, "A mathematical theory of saving," Economic Journal 38 (1928), 543-559.
5. J. Gani, C. C. Heyde, P. Jagers and T. G. Kurtz (eds.), Probability and Its Applications, Springer-Verlag London Limited, 2008.
6. G. Gandolfo, Economic Dynamics, Springer-Verlag, 1996.
7. J. R. Movellan, "Tutorial on Stochastic Differential Equations," 2011.
8. P. E. Kloeden and E. Platen, Numerical Solution of Stochastic Differential Equations, Springer-Verlag, Berlin, 1992.
9. D. Nualart, "Fractional Brownian motion: stochastic calculus and applications," Proceedings of the International Congress of Mathematicians, Madrid, 2006.
10. T. E. Duncan and B. Pasik-Duncan, "An approach to stochastic integration for fractional Brownian motion in a Hilbert space."
