
2017 IEEE 3rd International Conference on Control Science and Systems Engineering

Initial-data-parameterized Linear Quadratic Stochastic Optimal Control Problems with Random Jumps

Xueqin Li, School of Electrical Engineering, Southwest Jiaotong University, Chengdu, Sichuan, 610000, e-mail: [email protected]

Tianmin Huang, School of Mathematics, Southwest Jiaotong University, Chengdu, Sichuan, 610000, e-mail: [email protected]

Chao Tang, College of Mathematics, Sichuan University, Chengdu, Sichuan, 610000, e-mail: [email protected]

Abstract—A stochastic control problem is formulated, and we obtain the explicit form of the optimal control for initial-data-parameterized linear quadratic stochastic optimal control problems with random jumps. The optimal control is shown to be unique. A stochastic Riccati equation is rigorously derived from the stochastic Hamilton system, and it provides an optimal feedback control. This establishes the interrelationship between the stochastic Riccati equation and the stochastic Hamilton system as two different but equivalent tools for the stochastic linear quadratic problem.

Keywords—LQ stochastic optimal control problems; random jumps; Hamilton system; Riccati equation.

I. INTRODUCTION

A stochastic linear-quadratic (LQ for short) control problem is the optimal control of a linear stochastic dynamic equation subject to an expected quadratic cost functional of the system state and control. It is one of the most important classes of stochastic optimal control problems, especially because of its nice structure in engineering design. To solve the stochastic LQ problem, the forward-backward stochastic differential equation (FBSDE for short) and the Riccati equation are two different but equivalent tools. Nonlinear backward stochastic differential equations (BSDEs for short) were first introduced by Pardoux and Peng [1]. Stochastic LQ optimal control problems have been studied in [2]-[7]; Tang [8] obtained an existence and uniqueness result for the Riccati equation with random coefficients, and for more details we refer to Tang [9] and the references therein. The optimal control problem with random jumps was first considered by Boel and Varaiya [10] and by Rishel [11]; in this case the system is disturbed by Brownian motion and random jumps, and the solution is a discontinuous stochastic process. Backward stochastic differential equations with Poisson process (BSDEP for short) were first discussed by Tang and Li [12]. Shi and Wu [13], [14] obtained an explicit form of the optimal control for the linear quadratic stochastic optimal control problem with random jumps via the existence and uniqueness of the solution to forward-backward stochastic differential equations with Brownian motion and Poisson process (FBSDEP for short) on an arbitrary fixed time duration. These optimal control problems have been partially applied to engineering and to financial markets. In this paper, we demonstrate how to use the optimal control process of the stochastic LQ problem and the solutions of the stochastic Hamilton system to construct a solution of the Riccati equation, which in general is a highly nonlinear BSDE.

The rest of our paper is organized as follows. In Section 2, we give the explicit form of the optimal control for the initial-data-parameterized linear quadratic stochastic optimal control problem with random jumps and prove that the optimal control is unique using a classical method. We also describe the stochastic Hamilton system associated with the above LQ problem and prove existence and uniqueness results for this Hamilton system. In Section 3, we introduce the connection between the Riccati equation and the associated stochastic Hamilton system; then, by using the solution of this Riccati equation, we give the linear feedback regulator for the initial-data-parameterized LQ optimal control problem with random jumps.


II. LQ STOCHASTIC OPTIMAL CONTROL PROBLEMS AND STOCHASTIC HAMILTON SYSTEM

Let \tau be an \{\mathcal{F}_t, 0 \le t \le T\}-stopping time such that 0 \le \tau \le T. Consider the initial-data-parameterized stochastic LQ problem: minimize over u \in L^2_{\mathcal{F}}(0, T; \mathbb{R}^m) the quadratic cost functional

J(u; \tau, h) := E\Big[\int_\tau^T \big(\langle R_s x_s^{\tau,h;u}, x_s^{\tau,h;u}\rangle + \langle N_s u_s, u_s\rangle\big)\,ds + \langle Q x_T^{\tau,h;u}, x_T^{\tau,h;u}\rangle\Big],   (1)

where x^{\tau,h;u} is the solution of the following linear stochastic control system with random jumps:

dx_t = (A_t x_t + B_t u_t)\,dt + \sum_{i=1}^d (C_t^i x_t + D_t^i u_t)\,dW_t^i + \int_Z (E_t x_{t-} + F_t u_t)\,\tilde N(dz\,dt), \quad \tau \le t \le T,
x_\tau = h \in L^2(\Omega, \mathcal{F}_\tau, P; \mathbb{R}^n).   (2)
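The problem data (1)-(2) can be explored numerically. Below is a minimal sketch, not taken from the paper, that simulates the controlled jump-diffusion (2) with an Euler-type scheme and estimates the cost (1) by Monte Carlo; the scalar dimensions, the constant coefficients, the unit jump marks replacing the general random measure, and the feedback gain K_fb are all illustrative assumptions.

```python
# Minimal sketch (not from the paper): Euler-type simulation of the controlled
# jump-diffusion (2) and a Monte Carlo estimate of the quadratic cost (1).
# Illustrative assumptions: scalar state/control (n = m = d = 1), constant
# coefficients A, B, C, D, E, F, a Poisson process with rate `lam` and unit marks
# in place of the general random measure N(dz dt), and a fixed feedback u_t = K_fb*x_t.
import numpy as np

def simulate_cost(tau=0.0, h=1.0, T=1.0, n_steps=200, n_paths=5000,
                  A=-0.5, B=1.0, C=0.2, D=0.1, E=0.1, F=0.0,
                  R=1.0, N_w=1.0, Q=1.0, lam=2.0, K_fb=-0.5, seed=0):
    rng = np.random.default_rng(seed)
    dt = (T - tau) / n_steps
    x = np.full(n_paths, h)
    cost = np.zeros(n_paths)
    for _ in range(n_steps):
        u = K_fb * x                                   # feedback control u_t
        cost += (R * x**2 + N_w * u**2) * dt           # running cost <Rx,x> + <Nu,u>
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)     # Brownian increment
        dNp = rng.poisson(lam * dt, n_paths)           # Poisson jump count
        # compensated jump increment: jumps minus their mean lam*dt
        x = x + (A * x + B * u) * dt + (C * x + D * u) * dW \
              + (E * x + F * u) * (dNp - lam * dt)
    cost += Q * x**2                                   # terminal cost <Q x_T, x_T>
    return cost.mean()                                 # Monte Carlo estimate of J

print(simulate_cost())
```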

Here, \{W_t = (W_t^1, \ldots, W_t^d)', 0 \le t \le T\} is a d-dimensional standard Brownian motion defined on some probability space (\Omega, \mathcal{F}, P), and \cdot' stands for the transposition of a matrix. \tilde N(dz\,dt) = N(dz\,dt) - \pi(dz)\,dt, where N is a Poisson random measure on Z with bounded characteristic measure \pi(dz), and \tilde N is the compensated Poisson random measure of N. For more details about the diffusion terms we refer to [15], [16] and the references therein. Denote by \{\mathcal{F}_t, 0 \le t \le T\} the augmented natural filtration of the standard Brownian motion W and the Poisson random measure N. The control u_t belongs to the Banach space L^2_{\mathcal{F}}(0, T; \mathbb{R}^m), which consists of all \mathbb{R}^m-valued square-integrable \{\mathcal{F}_t, 0 \le t \le T\}-adapted processes. Throughout this paper, we make the following assumptions on the coefficients of the above problem.

(A1) Assume that the matrix processes A_t, C_t^i and E_t are n x n bounded matrices, and B_t, D_t^i and F_t are n x m bounded matrices.

(A2) Assume that the state weighting matrix processes R_t, Q are n x n nonnegative symmetric bounded matrices, and the control weighting matrix process N_t is an m x m positive symmetric bounded matrix whose inverse N_t^{-1} is also bounded.

Next, we will prove that the above stochastic LQ problem has a unique optimal control and provide its explicit expression.

Theorem 1. Let assumptions (A1), (A2) be satisfied. Then u_t is the unique optimal control for the initial-data-parameterized stochastic LQ problem with random jumps if u satisfies the following condition:

N_t u_t + B_t' p_t + \sum_{i=1}^d (D_t^i)' q_t^i + \int_Z F_t' r(t, z)\,\pi(dz) = 0,   (3)

here, \tau \le t \le T and (p_t, q_t, r(t, z)) is the unique solution (see Wu [13]) of the BSDEP

dp_t = -\Big[A_t' p_t + \sum_{i=1}^d (C_t^i)' q_t^i + \int_Z E_t' r(t, z)\,\pi(dz) + R_t x_t^{\tau,h;u}\Big]dt + \sum_{i=1}^d q_t^i\,dW_t^i + \int_Z r(t, z)\,\tilde N(dz\,dt),
p_T = Q x_T^{\tau,h;u}, \quad \tau \le t \le T.   (4)

Proof: For any admissible control v_t \in L^2_{\mathcal{F}}(0, T; \mathbb{R}^m), we denote by x^{\tau,h;v} the corresponding trajectory of system (2). Then, from the cost functional, by rearranging terms, we have

J(v(\cdot)) - J(u(\cdot))
= E\Big[\int_\tau^T \big(\langle R_t x_t^{\tau,h;v}, x_t^{\tau,h;v}\rangle - \langle R_t x_t^{\tau,h;u}, x_t^{\tau,h;u}\rangle + \langle N_t v_t, v_t\rangle - \langle N_t u_t, u_t\rangle\big)\,dt\Big] + E\big[\langle Q x_T^{\tau,h;v}, x_T^{\tau,h;v}\rangle - \langle Q x_T^{\tau,h;u}, x_T^{\tau,h;u}\rangle\big]
= E\Big[\int_\tau^T \big(\langle R_t (x_t^{\tau,h;v} - x_t^{\tau,h;u}), x_t^{\tau,h;v} - x_t^{\tau,h;u}\rangle + \langle N_t (v_t - u_t), v_t - u_t\rangle + 2\langle R_t x_t^{\tau,h;u}, x_t^{\tau,h;v} - x_t^{\tau,h;u}\rangle + 2\langle N_t u_t, v_t - u_t\rangle\big)\,dt\Big]
  + E\big[2\langle Q x_T^{\tau,h;u}, x_T^{\tau,h;v} - x_T^{\tau,h;u}\rangle + \langle Q (x_T^{\tau,h;v} - x_T^{\tau,h;u}), x_T^{\tau,h;v} - x_T^{\tau,h;u}\rangle\big].

On the other hand, applying Itô's formula to \langle x_t^{\tau,h;v} - x_t^{\tau,h;u}, p_t\rangle and noting that p_T = Q x_T^{\tau,h;u}, we have

E\big[\langle Q x_T^{\tau,h;u}, x_T^{\tau,h;v} - x_T^{\tau,h;u}\rangle\big]
= E\Big[\int_\tau^T \Big(\langle B_t' p_t, v_t - u_t\rangle + \big\langle \sum_{i=1}^d (D_t^i)' q_t^i, v_t - u_t\big\rangle + \big\langle \int_Z F_t' r(t, z)\,\pi(dz), v_t - u_t\big\rangle - \langle R_t x_t^{\tau,h;u}, x_t^{\tau,h;v} - x_t^{\tau,h;u}\rangle\Big)\,dt\Big].

From the conditions (A1), (A2), R_t and Q are nonnegative and N_t is positive, so we obtain

J(v(\cdot)) - J(u(\cdot)) \ge 2 E\Big[\int_\tau^T \big\langle N_t u_t + B_t' p_t + \sum_{i=1}^d (D_t^i)' q_t^i + \int_Z F_t' r(t, z)\,\pi(dz), v_t - u_t\big\rangle\,dt\Big].

Thus, if u_t satisfies

N_t u_t + B_t' p_t + \sum_{i=1}^d (D_t^i)' q_t^i + \int_Z F_t' r(t, z)\,\pi(dz) = 0,

then J(v(\cdot)) - J(u(\cdot)) \ge 0, i.e. u_t is an optimal control. The uniqueness of the optimal control can be shown as in [14]; it is a classical parallelogram-rule argument, and we omit it. This completes the proof.

From (3), we get the optimal control

u_t = -N_t^{-1}\Big[B_t' p_t + \sum_{i=1}^d (D_t^i)' q_t^i + \int_Z F_t' r(t, z)\,\pi(dz)\Big],   (5)

here, \tau \le t \le T. The so-called stochastic Hamilton system is given by

dx_t = (A_t x_t + B_t u_t)\,dt + \sum_{i=1}^d (C_t^i x_t + D_t^i u_t)\,dW_t^i + \int_Z (E_t x_{t-} + F_t u_t)\,\tilde N(dz\,dt),
dp_t = -\Big[A_t' p_t + \sum_{i=1}^d (C_t^i)' q_t^i + \int_Z E_t' r(t, z)\,\pi(dz) + R_t x_t^{\tau,h;u}\Big]dt + \sum_{i=1}^d q_t^i\,dW_t^i + \int_Z r(t, z)\,\tilde N(dz\,dt),
u_t = -N_t^{-1}\Big[B_t' p_t + \sum_{i=1}^d (D_t^i)' q_t^i + \int_Z F_t' r(t, z)\,\pi(dz)\Big],
x_\tau = h \in L^2(\Omega, \mathcal{F}_\tau, P; \mathbb{R}^n), \quad p_T = Q x_T^{\tau,h;u}, \quad \tau \le t \le T.   (6)

It is a system of FBSDEP, whose solution consists of a tetrad (x_t, p_t, q_t, r_t). Next, we will state and prove that the stochastic Hamilton system (6) has a unique adapted solution.
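For illustration, the candidate optimal control (5) is a pointwise algebraic map of the adjoint quantities. The following sketch, with placeholder matrices that are not from the paper, evaluates (5) at a single time once p_t, the q_t^i, and the jump integral \int_Z F_t' r(t,z)\,\pi(dz) are available.

```python
# Minimal sketch (not from the paper): evaluating the optimal control formula (5)
# at one time point, given the adjoint quantities p_t, q_t = (q_t^1,...,q_t^d) and
# an already-computed jump integral  int_Z F_t' r(t,z) pi(dz).
# All matrices below are illustrative placeholders, not data from the paper.
import numpy as np

def optimal_control(N_t, B_t, D_list, p_t, q_list, jump_integral):
    """u_t = -N_t^{-1} [ B_t' p_t + sum_i (D_t^i)' q_t^i + int_Z F_t' r(t,z) pi(dz) ]."""
    rhs = B_t.T @ p_t + sum(D.T @ q for D, q in zip(D_list, q_list)) + jump_integral
    return -np.linalg.solve(N_t, rhs)        # N_t is positive definite by (A2)

# toy dimensions: n = 2 states, m = 1 control, d = 2 Brownian motions
n, m, d = 2, 1, 2
rng = np.random.default_rng(1)
N_t = np.array([[2.0]])
B_t = rng.normal(size=(n, m))
D_list = [rng.normal(size=(n, m)) for _ in range(d)]
p_t = rng.normal(size=n)
q_list = [rng.normal(size=n) for _ in range(d)]
jump_integral = np.zeros(m)                  # e.g. F_t = 0, so the pi(dz)-integral vanishes
print(optimal_control(N_t, B_t, D_list, p_t, q_list, jump_integral))
```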

Theorem 2. Let assumptions (A1), (A2) be satisfied. Then, for each fixed pair (\tau, h) with \tau \in [0, T] and h \in L^2(\Omega, \mathcal{F}_\tau, P; \mathbb{R}^n), the stochastic Hamilton system (6) has a unique adapted solution, which is a tetrad of stochastic processes parameterized by the initial data (\tau, h), denoted by

\{(\phi_{\tau,t}(h), \psi_{\tau,t}(h), \xi_{\tau,t}(h), \eta_{\tau,t}(h)); \ \tau \le t \le T, \ h \in L^2(\Omega, \mathcal{F}_\tau, P; \mathbb{R}^n)\}.

Moreover, for some deterministic positive constant K we have

E \max_{\tau \le t \le T} |\phi_{\tau,t}(h)|^2 + E \max_{\tau \le t \le T} |\psi_{\tau,t}(h)|^2 + E \int_\tau^T |\xi_{\tau,t}(h)|^2\,dt + E \int_\tau^T \int_Z |\eta_{\tau,t}(h, z)|^2\,\pi(dz)\,dt \le K E|h|^2.   (7)

Before we prove this theorem, we first introduce a useful lemma.

Lemma 1. Let assumptions (A1), (A2) be satisfied. If \{(\phi_{\tau,t}(h), \psi_{\tau,t}(h), \xi_{\tau,t}(h), \eta_{\tau,t}(h)); \tau \le t \le T\} is a solution to (6), then for any \{\mathcal{F}_t, 0 \le t \le T\}-stopping time \rho with \tau \le \rho \le T,

\langle \psi_{\tau,\rho}(h), \phi_{\tau,\rho}(h)\rangle = E^{\mathcal{F}_\rho}\Big\{\int_\rho^T \big(\langle R_s \phi_{\tau,s}(h), \phi_{\tau,s}(h)\rangle + \langle N_s u_s, u_s\rangle\big)\,ds + \langle Q \phi_{\tau,T}(h), \phi_{\tau,T}(h)\rangle\Big\},   (8)

where u is the optimal control given by (5).

Proof: Applying Itô's formula to \langle \psi_{\tau,t}(h), \phi_{\tau,t}(h)\rangle on [\rho, T] and noting that \psi_{\tau,T}(h) = Q \phi_{\tau,T}(h), we have

E^{\mathcal{F}_\rho}\{\langle Q \phi_{\tau,T}(h), \phi_{\tau,T}(h)\rangle\} - \langle \psi_{\tau,\rho}(h), \phi_{\tau,\rho}(h)\rangle
= E^{\mathcal{F}_\rho}\Big\{\int_\rho^T \Big(-\langle R_s \phi_{\tau,s}(h), \phi_{\tau,s}(h)\rangle + \langle B_s' p_s, u_s\rangle + \big\langle \sum_{i=1}^d (D_s^i)' q_s^i, u_s\big\rangle + \big\langle \int_Z F_s' r(s, z)\,\pi(dz), u_s\big\rangle\Big)\,ds\Big\}.   (9)

Adding the R-term on the RHS to the LHS, transposing \langle \psi_{\tau,\rho}(h), \phi_{\tau,\rho}(h)\rangle to the other side, and adding and subtracting E^{\mathcal{F}_\rho}\int_\rho^T \langle N_s u_s, u_s\rangle\,ds provides the following equality:

\langle \psi_{\tau,\rho}(h), \phi_{\tau,\rho}(h)\rangle = E^{\mathcal{F}_\rho}\Big\{\int_\rho^T \big(\langle R_s \phi_{\tau,s}(h), \phi_{\tau,s}(h)\rangle + \langle N_s u_s, u_s\rangle\big)\,ds + \langle Q \phi_{\tau,T}(h), \phi_{\tau,T}(h)\rangle\Big\}
  - E^{\mathcal{F}_\rho}\Big\{\int_\rho^T \big\langle N_s u_s + B_s' p_s + \sum_{i=1}^d (D_s^i)' q_s^i + \int_Z F_s' r(s, z)\,\pi(dz), u_s\big\rangle\,ds\Big\}.

Since u_s satisfies (3), we get the desired result.

Proof of Theorem 2: Assumptions (A1) and (A2) imply the existence of an optimal control u_t. From Theorem 1, u_t satisfies (3), so (x_t^{\tau,h;u}, p_t, q_t^i, r_t) is a solution of (6), and the existence part is proved. The uniqueness assertion is obvious once (7) is true. Therefore, it remains to prove that (7) holds. From Lemma 1, we have

E\int_\tau^T \langle R_s \phi_{\tau,s}(h), \phi_{\tau,s}(h)\rangle\,ds + E\int_\tau^T \langle N_s u_s, u_s\rangle\,ds + E\langle Q \phi_{\tau,T}(h), \phi_{\tau,T}(h)\rangle = E\langle \psi_{\tau,\tau}(h), h\rangle \le E\big(|\psi_{\tau,\tau}(h)| \cdot |h|\big).

For the forward component of (6), standard estimates give E \max_{\tau \le t \le T} |\phi_{\tau,t}(h)|^2 \le K E|h|^2 for some deterministic positive constant K, and for the backward component, standard BSDEP estimates give

E \max_{\tau \le t \le T} |\psi_{\tau,t}(h)|^2 + E\int_\tau^T |\xi_{\tau,t}(h)|^2\,dt + E\int_\tau^T \int_Z |\eta_{\tau,t}(h, z)|^2\,\pi(dz)\,dt
\le K\Big(E\int_\tau^T |R_s \phi_{\tau,s}(h)|^2\,ds + E|Q \phi_{\tau,T}(h)|^2\Big) \le K E|h|^2.

Combining these estimates yields (7), which completes the proof.

III. THE RICCATI EQUATION: CONNECTIONS BETWEEN THE RICCATI EQUATION AND THE HAMILTON SYSTEM

Now we connect the stochastic LQ problem with the Riccati equation. The passage from the stochastic Hamilton system to the Riccati equation will be established by a formal approach. In fact, the Riccati equation results from decoupling the stochastic Hamilton system; to derive the associated Riccati equation from the stochastic Hamilton system, we assume a priori that there is a semimartingale Y of the form

Y_t = Y_0 + \int_0^t Y_1(s)\,ds + \cdots   (11)
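As a sanity check of the decoupling idea in the simplest setting, the sketch below treats the deterministic special case with constant coefficients and no multiplicative noise or jumps (C = D = E = F = 0), where the stochastic Riccati equation reduces to the classical matrix Riccati ODE; the coefficient values and the use of scipy's solve_ivp are illustrative assumptions, and the diffusion and jump correction terms of the paper's stochastic Riccati equation are omitted here.

```python
# Minimal sketch (not from the paper): deterministic special case of the Riccati
# connection.  With constant coefficients and C = D = E = F = 0, decoupling the
# Hamilton system reduces to the classical matrix Riccati ODE
#   -dP/dt = A'P + PA + R - P B N^{-1} B' P,   P(T) = Q,
# and the feedback regulator is  u_t = -N^{-1} B' P(t) x_t.
import numpy as np
from scipy.integrate import solve_ivp

n, m, T = 2, 1, 1.0
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
R = np.eye(n)
N = np.array([[1.0]])
Q = np.eye(n)

def riccati_rhs(t, p_flat):
    P = p_flat.reshape(n, n)
    dP = -(A.T @ P + P @ A + R - P @ B @ np.linalg.solve(N, B.T @ P))
    return dP.ravel()

# integrate backward in time from the terminal condition P(T) = Q to t = 0
sol = solve_ivp(riccati_rhs, (T, 0.0), Q.ravel(), dense_output=True)

def feedback(t, x):
    P = sol.sol(t).reshape(n, n)
    return -np.linalg.solve(N, B.T @ P @ x)   # u_t = -N^{-1} B' P(t) x_t

print(feedback(0.0, np.array([1.0, 0.0])))
```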