Global Asymptotic Stability for a New Class of Neutral Neural Networks

Abstract: Due to the complicated dynamic properties of neural cells in the real world, many dynamic neural networks are described by neutral functional differential equations, including neutral delay differential equations. These networks are called neutral neural networks, or neural networks of neutral type. The differential expression involves not only the derivative term of the current state but also the derivative term of past states. In this paper, the global asymptotic stability of a class of neutral-type neural networks with time-varying delays is presented and analyzed. The neural network model comprises linear terms, non-linear terms, non-linearly delayed terms, and time delays in the time derivatives of states.


Introduction
Since Hopfield proposed in 1984 the neural network model that now bears his name, the Hopfield neural network has been applied to various fields, such as combinatorial optimization [1][2][3][4], image processing [5,6], pattern recognition [7], signal processing [8], and communication [9]. As one of the recurrent neural networks, the Hopfield neural network has been continuously investigated over the past decades [10][11][12][13][14]. In fact, due to the finite speed of the switching and transmission of signals in a network, time delays exist in a working network and thus should be incorporated into the model equation of the network. Neural networks in the presence of time delays have received a great deal of attention in the recent literature [15][16][17][18][19][20][21][22].
Due to the complexity of nerve cells in the real world, many existing neural network models cannot easily or accurately describe the characteristics of the neural response process. Information about past states must therefore be incorporated to describe such complex neural dynamics. Since the time derivative of the state is a function of time, delay parameters need to be introduced into the time derivatives of the states of the system. A neural network model containing time delays in the time derivatives of states is called a delayed neutral-type neural network. Neutral-type neural networks are a special class of time-delayed neural networks in which information about the derivatives of past states is introduced to describe the system dynamics [23][24][25][26][27]. Further neural networks and applications can be found in other studies [28][29][30]. Neutral-type neural networks are usually described by ordinary differential equations of the form

$$\dot{y}_i(t) = -a_i y_i(t) + \sum_{j=1}^{n} b_{ij} g_j(y_j(t)) + \sum_{j=1}^{n} c_{ij} g_j(y_j(t-\tau)) + \sum_{j=1}^{n} e_{ij} \dot{y}_j(t-h) + I_i, \quad i = 1, \dots, n. \qquad (1)$$

In 2008, C. Bai investigated system (1) and gave sufficient conditions ensuring the existence and global exponential stability of a continuously differentiable almost periodic solution, using a fixed point theorem and a differential inequality technique [25]. In 2012, Orman obtained sufficient conditions for the existence, uniqueness, and global asymptotic stability of the equilibrium point for the class of neutral-type systems (1) [26]. In 2013, S. Lakshmanan et al. studied a neutral delay Hopfield neural network model with both discrete and distributed delays. They developed several LMI-based criteria to guarantee the asymptotic stability of neural networks: by applying delay partitioning to both the discrete and distributed delays, including a triple integral term for the neutral delay, and employing a new type of Lyapunov-Krasovskii functional, new delay-dependent stability criteria were derived [27].
Motivated by the idea and work of S. Lakshmanan et al., this paper studies the following neutral delay neural network model with time-varying delays:

$$\dot{y}_i(t) = -a_i y_i(t) + \sum_{j=1}^{n} b_{ij} g_j(y_j(t)) + \sum_{j=1}^{n} c_{ij} g_j(y_j(t-\tau(t))) + \sum_{j=1}^{n} e_{ij} \dot{y}_j(t-h(t)) + I_i. \qquad (3)$$

Let the activation functions $g_i(\cdot)$ satisfy the following condition:

(H) Each activation function $g_i$ is bounded and satisfies $0 \le \dfrac{g_i(u) - g_i(v)}{u - v} \le k_i$ for all $u \neq v$ and some constant $k_i > 0$.

Because the activation functions satisfy (H), the Brouwer fixed-point theorem implies that the neural network model (3) has a unique equilibrium point $y^*$ for each $I$; the proof is similar to that provided in the literature [25].
The equilibrium point $y^*$ of (3) is shifted to the origin by letting $x(t) = y(t) - y^*$; system (3) is then transformed into the following form:

$$\dot{x}_i(t) = -a_i x_i(t) + \sum_{j=1}^{n} b_{ij} f_j(x_j(t)) + \sum_{j=1}^{n} c_{ij} f_j(x_j(t-\tau(t))) + \sum_{j=1}^{n} e_{ij} \dot{x}_j(t-h(t)), \qquad (4)$$

where $x(t) = (x_1(t), \dots, x_n(t))^T$ is the state vector of the transformed system and $f_j(x_j(t)) = g_j(x_j(t) + y_j^*) - g_j(y_j^*)$. From assumption (H), the transformed neuron activation functions satisfy $f_j(0) = 0$ and $0 \le \dfrac{f_j(x)}{x} \le k_j$ for all $x \neq 0$.
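To illustrate how a system of this neutral form behaves near the origin, the following Python sketch simulates a two-neuron instance with a fixed-step forward-Euler scheme. All matrices, delay values, and the tanh activation are hypothetical choices for illustration only, not parameters taken from the paper.

```python
import numpy as np

# Forward-Euler simulation of a two-neuron neutral-type network
#   x'(t) = -A x(t) + B f(x(t)) + C f(x(t - tau)) + E x'(t - h).
# All parameter values below are hypothetical choices for illustration.
A = np.diag([2.0, 2.0])                    # self-feedback rates a_i
B = np.array([[0.2, -0.1], [0.1, 0.2]])    # connection weights b_ij
C = np.array([[0.1, 0.1], [-0.1, 0.1]])    # delayed connection weights c_ij
E = np.array([[0.1, 0.0], [0.0, 0.1]])     # neutral-term weights e_ij
f = np.tanh                                # activation satisfying (H) with k_i = 1

dt, tau, h, T = 0.01, 0.5, 0.3, 20.0
n_tau, n_h = int(tau / dt), int(h / dt)
steps = int(T / dt)

x = np.zeros((steps + 1, 2))
dx = np.zeros((steps + 1, 2))
x[0] = [1.0, -0.8]                         # constant initial history x(s) = x(0)

for k in range(steps):
    x_tau = x[k - n_tau] if k >= n_tau else x[0]      # x(t - tau)
    dx_h = dx[k - n_h] if k >= n_h else np.zeros(2)   # x'(t - h), zero pre-history
    dx[k] = -A @ x[k] + B @ f(x[k]) + C @ f(x_tau) + E @ dx_h
    x[k + 1] = x[k] + dt * dx[k]

# The trajectory should decay toward the origin (the shifted equilibrium).
print(np.linalg.norm(x[0]), np.linalg.norm(x[-1]))
```

With these strongly damped parameters the state norm shrinks essentially to zero, which is the qualitative behavior the stability analysis certifies.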

Main Results
In this section, sufficient conditions for the global asymptotic stability of system (4) are established. Throughout this paper, the following notations (I) are used:

i. The superscript $T$ denotes the transpose of a matrix, and for symmetric matrices $X$ and $Y$, the notation $X \ge Y$ (respectively, $X > Y$) means that $X - Y$ is positive semi-definite (respectively, positive definite).
ii. $\lambda_m(P)$ and $\lambda_M(P)$ denote the minimum eigenvalue of $P$ and the maximum eigenvalue of $P$, respectively.
iii. $\mathrm{diag}\{\cdots\}$ denotes the block diagonal matrix, and $I$ denotes the identity matrix.

Using the notations in (I), with $A = \mathrm{diag}\{a_i\}$, $B = (b_{ij})_{n \times n}$, $C = (c_{ij})_{n \times n}$, $E = (e_{ij})_{n \times n}$, and $f(x(t)) = (f_1(x_1(t)), \dots, f_n(x_n(t)))^T$, system (4) can be rewritten in vector form as

$$\dot{x}(t) = -A x(t) + B f(x(t)) + C f(x(t-\tau(t))) + E \dot{x}(t-h(t)).$$

Next, a theorem providing sufficient conditions for the global asymptotic stability of system (4) is established.
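The eigenvalue notation above is easy to check numerically: for a symmetric matrix, $\lambda_m(P)$ and $\lambda_M(P)$ are the extreme values returned by a symmetric eigensolver, and $P > 0$ holds exactly when $\lambda_m(P) > 0$. The matrix $P$ below is a hypothetical example.

```python
import numpy as np

# lambda_m(P) and lambda_M(P) for a symmetric matrix P, via NumPy.
P = np.array([[4.0, 1.0],
              [1.0, 3.0]])
eigs = np.linalg.eigvalsh(P)      # eigenvalues of a symmetric matrix, ascending
lam_min, lam_max = eigs[0], eigs[-1]
pos_def = lam_min > 0             # P > 0 iff its smallest eigenvalue is positive
```

For this $P$ the eigenvalues are $(7 \pm \sqrt{5})/2$, so $\lambda_m(P) \approx 2.382 > 0$ and $P$ is positive definite.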

Theorem 1. Suppose assumption (H) holds. If there exist symmetric positive definite matrices $P$, $Q$, and $R$ such that the matrix $\Omega$, built from $P$, $Q$, $R$ and the system matrices $A$, $B$, $C$, $E$, is negative definite, then the origin of system (4) is globally asymptotically stable.

Proof. Consider the Lyapunov-Krasovskii functional

$$V(x(t)) = x^T(t) P x(t) + \int_{t-\tau(t)}^{t} x^T(s) Q x(s)\, ds + \int_{t-h(t)}^{t} \dot{x}^T(s) R \dot{x}(s)\, ds.$$

By assumption (H), we derive

$$x_i(t) f_i(x_i(t)) \ge 0, \quad i = 1, \dots, n, \qquad (6)$$

since it is easy to see that the signs of $f_i(x_i(t))$ and $x_i(t)$ are the same (both positive, or both negative). Differentiating $V$ along the trajectories of system (4) gives the expression (7) for $\dot{V}(x(t))$; using the notations in (I) and the factors $(1-\dot{\tau}(t))$ and $(1-\dot{h}(t))$ arising from the time-varying delays, (7) can be rewritten as $\dot{V}(x(t)) \le \xi^T(t)\, \Omega\, \xi(t)$, where $\xi(t)$ collects the current state, the activation terms, and the delayed state and state-derivative terms. According to the given conditions, $\Omega$ is negative definite, so $\dot{V}(x(t))$ is negative definite, and thus system (4) is globally asymptotically stable. This completes the proof.
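The final step of the proof, verifying that $\Omega$ is negative definite, can be carried out numerically by checking that the largest eigenvalue of the symmetric matrix is negative. The matrix below is a hypothetical stand-in for the paper's $\Omega$, chosen only to illustrate the check.

```python
import numpy as np

# Numerical negative-definiteness check, as used in the proof's final step.
# Omega here is a hypothetical symmetric matrix standing in for the paper's Omega.
Omega = np.array([[-3.0,  0.5,  0.2],
                  [ 0.5, -2.0,  0.1],
                  [ 0.2,  0.1, -1.5]])
Omega = 0.5 * (Omega + Omega.T)                 # enforce exact symmetry
neg_def = np.linalg.eigvalsh(Omega).max() < 0   # Omega < 0 iff all eigenvalues < 0
```

This matrix is strictly diagonally dominant with negative diagonal entries, so the check succeeds.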

A Numerical Example
In this section, an example is presented to test the validity of the results. Consider an instance of system (4) with two neurons, denoted (8). For the chosen parameter matrices, the matrix $\Omega$ is negative definite. According to the result of Theorem 1, system (8) is globally asymptotically stable at its equilibrium point.

Conclusion
This paper investigated the problem of global asymptotic stability of delayed neutral neural networks. These neutral delayed neural networks are formulated as neutral delay differential equations, which include not only the derivative term of the current state but also the derivative term of past states.
A new Lyapunov-Krasovskii-type technique is employed to develop sufficient conditions for the global asymptotic stability of the neural networks. All the stability criteria are expressed in the form of linear matrix inequalities, which can be solved using MATLAB.
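Where MATLAB's LMI tools are not available, the flavor of such a feasibility check can be sketched in open-source form: the simplest Lyapunov LMI, $A_{cl}^T P + P A_{cl} < 0$ with $P > 0$, is certified by solving the corresponding Lyapunov equation with SciPy. The matrix $A_{cl}$ below is a hypothetical stable matrix, not one from the paper, and this is a simplified analogue of the paper's full LMI conditions, not their implementation.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Certify A_cl^T P + P A_cl < 0, P > 0 by solving the Lyapunov equation
# A_cl^T P + P A_cl = -Q for a chosen Q > 0.  A_cl is a hypothetical
# stable matrix used only for illustration.
A_cl = np.array([[-2.0,  0.3],
                 [ 0.1, -1.5]])
Q = np.eye(2)
P = solve_continuous_lyapunov(A_cl.T, -Q)     # solves A_cl^T P + P A_cl = -Q
P_pos_def = np.linalg.eigvalsh(P).min() > 0   # feasibility certificate: P > 0
```

Since $A_{cl}$ is Hurwitz, the solution $P$ comes out positive definite, certifying feasibility.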
Finally, a numerical example is presented to illustrate the effectiveness and usefulness of the obtained results. The results are well suited to investigating neural networks whose models contain both state derivatives and multiple delays.