Gradient Algorithm in Subspace Predictive Control

Abstract: In this paper, a subspace predictive control strategy is applied to design a predictive controller. Given the state space model, the output estimate corresponding to the predicted output is derived as an explicit function of the measured input-output data. Using these output estimates, the design of the predictive controller is formulated as an optimization problem with equality and inequality constraints. To solve this constrained optimization problem, we use the dual decomposition idea to convert the original constrained problem into an unconstrained one, and the classical gradient algorithm is then applied to solve the resulting primal-dual optimization problem. The design of the dual decomposition controller is also studied for the subspace predictive control strategy under fault conditions. For the state space equation with faults, we establish a functional relation between fault and residual using only the measured input-output data sequence, and construct a least squares optimization problem to obtain the fault estimate. The statistical properties of the residual are analyzed based on the derived output prediction, and the Kronecker product is used to derive the detailed structure of the residual vector at every time instant. After substituting the output prediction into the objective function of the predictive controller, a quadratic programming problem with equality and inequality constraints is obtained. Because a regularization term is added to the objective function, the fast gradient method is not suited to this more complex optimization problem. To solve this complex quadratic optimization problem, we therefore apply the dual decomposition idea, which converts the constrained optimization into an unconstrained one, and a nearest neighbor gradient algorithm is given to compute its optimal value.


Introduction
Subspace predictive control stems from the idea of subspace identification, whose goal is to construct a mathematical model of the considered plant using only measured input-output sequences. During the subspace identification process, each matrix in the state space equation is identified using the singular value decomposition, which is applied to decompose a matrix built from past and future measured input-output sequences. Subspace predictive control, by contrast, constructs the future output prediction directly from the measured input-output sequence and avoids constructing the state space equation, i.e. the state space equation does not need to be identified from an estimated state sequence.
Subspace predictive control is a special data-driven control method, due to its combination of system identification and predictive control and its ability to obtain the output prediction directly from measured data, which is very important in predictive control theory. The basic output prediction in subspace predictive control is proposed in [1], and [2] compares this output prediction with a value coming from iterative correlation tuning control; the equivalence between these two output predictions can be guaranteed by introducing a pre-filter. In [3], a novel subspace predictive control algorithm based on subspace identification was studied to handle actuator saturation limits in a range of active vibration and noise control problems, and this subspace predictive controller permitted limits on the allowable actuator saturation. An upper bound on the maximum number of possible iteration steps is derived in subspace predictive control, and the proposed subspace predictive control is realized on a helicopter example [4]. When lower and upper bound constraints are imposed on the faults, a fast gradient method is applied to solve the resulting problem; based on the fault estimates, subspace predictive control is posed as an optimization problem with linear matrix inequality constraints.

Problem Description
Consider the following stochastic discrete time state space model.
x(k+1) = A x(k) + B u(k) + w(k)
y(k) = C x(k) + D u(k) + v(k)                      (1)

where the innovation e(k) is defined as e(k) = y(k) - C x_hat(k) - D u(k), so that model (1) admits the innovation form

x_hat(k+1) = A x_hat(k) + B u(k) + K e(k)
y(k) = C x_hat(k) + D u(k) + e(k)                  (2)
where K is the Kalman gain matrix. A property of the innovation form (2) is that a strongly consistent estimate of it can be identified under closed loop conditions based on subspace identification theory [6]. In classical Kalman filter theory [7], u(k) and y(k) are two deterministic variables; after substituting the definition of the innovation e(k) into the state space equation in innovation form, one closed loop input-output relation is obtained.
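As a minimal sketch of the innovation-form model above, the following simulation propagates the state with the Kalman gain feeding the innovation back into the state update. The particular matrices A, B, C, D, K, the dimensions, and the noise level are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Sketch of the innovation-form state space model; all matrix values
# below are illustrative assumptions.
rng = np.random.default_rng(0)

A = np.array([[0.9, 0.1], [0.0, 0.8]])   # state transition matrix
B = np.array([[0.0], [1.0]])             # input matrix
C = np.array([[1.0, 0.0]])               # output matrix
D = np.array([[0.0]])                    # direct feedthrough
K = np.array([[0.2], [0.1]])             # Kalman gain

def simulate(u_seq, sigma_e=0.01):
    """Simulate x(k+1) = A x(k) + B u(k) + K e(k),
                y(k)   = C x(k) + D u(k) + e(k)."""
    x = np.zeros((2, 1))
    ys = []
    for u in u_seq:
        e = sigma_e * rng.standard_normal((1, 1))  # innovation e(k)
        y = C @ x + D * u + e
        x = A @ x + B * u + K @ e
        ys.append(float(y))
    return ys

ys = simulate([1.0] * 50)   # step response of the innovation-form model
```

Since both eigenvalues of A lie inside the unit circle, the output settles near its steady-state value for a constant input.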
where u(k) and f(k) can be regarded as two external deterministic input signals, and in equation (4), f(k) denotes the given fault. When the output trajectory is given, the expected output trajectory is used to evaluate the output data at future time instants. An appropriate choice of the control input is obtained by minimizing the tracking error, i.e. this leads to the problem of designing a predictive controller.
Define the state, input and fault as follows, respectively, and assume that x(k), u(k) and y(k) are all bounded at any time instant k.

Output Estimation in Subspace Predictive Control
To study subspace predictive control, the first step is to give the output estimate at future time instants, and this output estimate can be computed using equation (6). As the residual is generated over more than one sample of the sliding horizon, the sliding horizon is [k - L + 1, k], where L is the output horizon level. Similar to the derivation of equation (9), the time index k in equation (9) is replaced by the time indices k - L + 1, k - L + 2, ..., k, and we obtain one column vector.
Similarly, define the block vectors and let the block Hankel matrices be defined respectively. Collect the measured input-output data within the past sliding window as follows.
Using the above notation, a simplified form of equation (13) is obtained.
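The block Hankel matrices above stack shifted windows of the measured sequence column by column. The following sketch shows one common way to build such a matrix; the function name `block_hankel` and its arguments are illustrative, not notation from the paper.

```python
import numpy as np

# Illustrative construction of a block Hankel matrix from a measured
# sequence w (n_samples x dim): each column holds `rows` consecutive
# samples, and consecutive columns are shifted by one sample.
def block_hankel(w, rows):
    n, dim = w.shape
    cols = n - rows + 1
    H = np.zeros((rows * dim, cols))
    for i in range(rows):
        H[i * dim:(i + 1) * dim, :] = w[i:i + cols, :].T
    return H

u = np.arange(10, dtype=float).reshape(-1, 1)  # scalar input sequence
U = block_hankel(u, rows=3)  # each column: [u(k); u(k+1); u(k+2)]
```

For a scalar sequence of length 10 and 3 block rows, this yields a 3 x 8 matrix whose columns are the successive sliding windows.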
Then the equation can be rewritten in the following compact form.

Dual Decomposition for Subspace Predictive Control
As subspace predictive control belongs to the predictive control field [8], the future control input is obtained as the optimal solution of an optimization problem. Assume the expected output trajectory is known a priori.
The commonly used quadratic objective function in the predictive control field is given below, where the two matrices Q1 and R are positive definite weight matrices, and the decision variables are collected as follows.
But in the objective function, only the second term is an explicit function of the future control input. The first term depends on the output prediction: the vector z_{k-L,p} above includes the past measured input-output data sequence, but the prediction also depends on the future control input u_{k,L}, so we rewrite the above output predictions as follows.
Rearranging and reformulating terms gives the explicit relation between the future output predictions and the future control input u_{k,L}. Three matrices Lambda_1, Lambda_2, Lambda_3 are introduced to simplify the above equation; they are constructed by multiplying certain matrices on both sides. Reformulating the above equation, the output prediction becomes an affine function of the future control input.
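To illustrate how such prediction matrices can be obtained directly from data, the following sketch runs the least squares step typical of subspace predictive control: the future-output Hankel block is regressed on the stacked past data and future inputs, after which the prediction is affine in the future input. All names (`Wp`, `Uf`, `Yf`, `Lw`, `Lu`) and the noiseless synthetic data are assumptions for illustration, not the paper's notation.

```python
import numpy as np

# Illustrative least squares estimation of the prediction matrices:
# Yf ~= [Lw  Lu] [Wp; Uf], so the output prediction is affine in the
# future input. Synthetic, noiseless data is used so that the least
# squares solution recovers the generating matrices exactly.
rng = np.random.default_rng(1)

p, f, N = 4, 3, 200                       # past window, future window, columns
Wp = rng.standard_normal((2 * p, N))      # stacked past inputs and outputs
Uf = rng.standard_normal((f, N))          # future inputs
L_true = rng.standard_normal((f, 2 * p + f))
Yf = L_true @ np.vstack([Wp, Uf])         # future outputs (noiseless)

Z = np.vstack([Wp, Uf])
L_hat, *_ = np.linalg.lstsq(Z.T, Yf.T, rcond=None)
L_hat = L_hat.T
Lw, Lu = L_hat[:, :2 * p], L_hat[:, 2 * p:]

# Prediction for one column of past data and a candidate future input:
y_pred = Lw @ Wp[:, :1] + Lu @ Uf[:, :1]
```

The split into `Lw` (acting on past data) and `Lu` (acting on the future input) mirrors the role of the free response and the forced response in the predictor.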
For solving optimization problem (18), we proposed a fast gradient algorithm for a special case with limited input amplitude in our published paper [8]. Here we extend that special case to a more general one, which includes equality and inequality constraint conditions. To solve the predictive controller optimization problem with equality and inequality constraints, dual decomposition is used to convert the constrained optimization into an unconstrained one.
To rewrite optimization problem (18) in its more general form, a regularization term is added to equation (18); its advantage is to keep the decision variables from changing abruptly during the optimization process, i.e. one constrained optimization problem is given as follows.
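The regularized quadratic objective can be assembled explicitly: substituting the affine output prediction into the tracking cost yields a positive definite Hessian once the input weight and the regularization term are added. The matrix names and values below are illustrative assumptions.

```python
import numpy as np

# Sketch of the regularized quadratic objective: tracking term with
# weight Q, input weight R, plus a regularization term lam * ||u||^2
# that keeps the decision variables from changing abruptly.
# All matrix values are illustrative assumptions.
rng = np.random.default_rng(2)
f = 3                                     # future horizon length
Lu = rng.standard_normal((f, f))          # prediction-from-input matrix
Q = np.eye(f)                             # output weight Q1
R = 0.1 * np.eye(f)                       # input weight R
lam = 0.5                                 # regularization weight
r = np.ones((f, 1))                       # expected output trajectory
y0 = rng.standard_normal((f, 1))          # free response from past data

# J(u) = (y0 + Lu u - r)' Q (y0 + Lu u - r) + u' R u + lam * u' u
H = Lu.T @ Q @ Lu + R + lam * np.eye(f)   # positive definite Hessian
g = Lu.T @ Q @ (y0 - r)                   # linear term

u_star = np.linalg.solve(H, -g)           # unconstrained minimizer
```

The regularization term shifts every eigenvalue of the Hessian up by lam, which strictly improves conditioning; the constrained version of this problem is what the dual decomposition below addresses.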
Introduce the Lagrange multiplier vectors mu_1, mu_2, mu_3 to obtain the Lagrange function of the above constrained problem.
The dual problem is then obtained, and the negative dual function is defined accordingly. The nearest neighbor gradient algorithm is used to solve dual problem (41), and its iterative process is given as follows.
where t denotes the iteration step, and P_mu is the Euclidean projection operator for mu. The new iterate mu_{t+1} is a projected negative gradient step taken from the previous iterate. The control input of the original optimization problem is then recovered from the dual solution.