Solving Definite Quadratic Bi-Objective Programming Problems by KKT Conditions

Bi-objective programming has been proposed for dealing with decision processes involving two decision makers. In this paper, a bi-objective programming problem in which both objective functions are definite quadratic is considered. The feasible region is assumed to be a convex polyhedron. A solution method using KKT conditions is developed. Illustrative examples for the method are presented, and theorems and facts supporting the method are also discussed. The solutions of the examples are obtained using the LINGO (15.0) mathematical software.


Background of the Study
A general optimization problem is to select $n$ decision variables $x_1, x_2, \ldots, x_n$ from a feasible region in such a way as to optimize (minimize or maximize) the objective function $f(x_1, x_2, \ldots, x_n)$ of the decision variables. The problem is called a non-linear programming problem (NLP) if the objective function is non-linear and/or the feasible region is determined by non-linear constraints.
Interest in non-linear programming has grown simultaneously with the growth of linear programming. Kuhn and Tucker developed a necessary and sufficient condition for the existence of an optimal solution to a non-linear programming problem, which is a basis for further development in the field.
A simple subclass of non-linear programming problems is one in which the objective function is non-linear but the constraints are all linear. This gives rise to a variety of problems depending upon the nature of the objective function.
When the objective function is given by $f(x) = a + c^T x + x^T Q x$, the problem is called a quadratic programming problem.
Given linear constraints, an optimal solution of a general nonlinear programming problem may not always exist at an extreme point. In fact, a study of the nature of the objective function is necessary to predict this.
A numerical function $f$ defined on a convex set $S \subset \mathbb{R}^n$ is said to be convex if for each $x_1, x_2 \in S$ and $\lambda \in [0,1]$, $f(\lambda x_1 + (1-\lambda)x_2) \le \lambda f(x_1) + (1-\lambda)f(x_2)$. A numerical function $f$ defined on a set $S \subset \mathbb{R}^n$ is concave on $S$ if and only if the function $-f$ is convex on $S$.
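As a numerical illustration of this definition (the matrix and sample points below are invented for illustration, not taken from the paper), the quadratic form $f(x) = x^T Q x$ with a positive definite $Q$ is convex, and the defining inequality can be spot-checked at random points:

```python
import numpy as np

# Sanity check (not a proof) of the convexity definition
#   f(t*x1 + (1-t)*x2) <= t*f(x1) + (1-t)*f(x2)
# for f(x) = x^T Q x with an illustrative positive definite Q.
Q = np.array([[2.0, 1.0],
              [1.0, 3.0]])   # symmetric, positive definite

def f(x):
    return x @ Q @ x

rng = np.random.default_rng(0)
convex_holds = True
for _ in range(1000):
    x1, x2 = rng.normal(size=2), rng.normal(size=2)
    t = rng.uniform()
    lhs = f(t * x1 + (1 - t) * x2)
    rhs = t * f(x1) + (1 - t) * f(x2)
    if lhs > rhs + 1e-9:
        convex_holds = False
        break
print(convex_holds)  # True for this Q
```

Replacing `Q` by an indefinite matrix would make the check fail, mirroring the fact that such a quadratic form is neither convex nor concave.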
A numerical function $f$ is said to have a local or relative maximum at $\bar{x} \in S$ if there exists $\varepsilon > 0$ such that $f$ is defined for all points $x$ with $\|x - \bar{x}\| < \varepsilon$ and $f(x) \le f(\bar{x})$ for all such points. $f$ is said to have a global maximum at $x^* \in S$ if $f(x) \le f(x^*)$ for all $x \in S$. Algorithms for solving a general mathematical programming problem systematically approach a local optimal solution. Under appropriate assumptions, a local optimum can be shown to be a global optimum. For instance, when the constraint set is a convex polyhedron and the objective function is convex (concave), a local minimum (maximum) is also a global minimum (maximum) [5].
Single-objective decision making methods reflect an earlier and simpler era. The world has become more complex as we enter the information age, and almost every important real-world problem involves more than one objective. Since the goals or objectives may conflict with each other, no single optimal solution can be found, and the optimization problem becomes one of finding the best compromise solutions [5]. The general multi-objective programming problem [9] is defined as optimizing $F(x) = (f_1(x), f_2(x), \ldots, f_k(x))$ subject to $x \in S$. A bi-objective problem is formulated with $k = 2$: $f_1$ and $f_2$ are the first and the second objective functions respectively, and $S$ is the constraint set.
Quadratic Programming Problem
A quadratic program (QP) is an optimization problem wherein one either minimizes or maximizes a quadratic objective function of a finite number of decision variables subject to a finite number of linear inequality and/or equality constraints. A quadratic function of a finite number of variables $x = (x_1, x_2, \ldots, x_n)^T$ is any function of the form
$f(x) = a + \sum_{j=1}^{n} c_j x_j + \sum_{i=1}^{n} \sum_{j=1}^{n} q_{ij} x_i x_j$ (3)
Using matrix notation, this expression simplifies to $f(x) = a + c^T x + x^T Q x$, where $c = (c_1, c_2, \ldots, c_n)^T$ and $Q = [q_{ij}]$ is the $n \times n$ matrix of quadratic coefficients. Without loss of generality, assume that $Q$ is a symmetric matrix: since $x^T Q x = x^T Q^T x$, one is free to replace the matrix $Q$ by the symmetric matrix $\frac{Q + Q^T}{2}$. Henceforth, the matrix $Q$ is symmetric. The general quadratic programming problem can be written as optimizing $f(x) = c^T x + x^T Q x$ subject to $Ax \le b$, $x \ge 0$.
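The symmetrization step can be verified numerically; the matrix and point below are illustrative, not from the paper:

```python
import numpy as np

# The quadratic form x^T Q x is unchanged when Q is replaced by its
# symmetric part (Q + Q^T)/2, so assuming Q symmetric loses no generality.
Q = np.array([[1.0, 4.0],
              [0.0, 2.0]])          # not symmetric
Q_sym = (Q + Q.T) / 2               # symmetric part

x = np.array([3.0, -1.0])
same_value = bool(np.isclose(x @ Q @ x, x @ Q_sym @ x))
print(same_value)  # True
```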

Statement of the Problem
Different researchers have used different methods for solving BOQP problems, but these methods are lengthy, and most of them rely on a linearization technique, which introduces an approximation error. So, the researcher tried to answer the question: can a definite quadratic bi-objective programming problem be solved easily by the KKT conditions?

Significance of the Study
This study gives a direction, toward definite quadratic bi-objective programming problems, to any person or organization wishing to solve real-life problems that are modeled as definite quadratic bi-objective programming problems with a constraint region determined by linear constraints.

Objective of the Study
The general objective of this study is to solve, by KKT conditions, the definite BOQP problem with a constraint region determined by linear constraints.
The study is intended to explore the following specific objectives:
a) To discuss the definite quadratic bi-objective programming problem;
b) To discuss the KKT conditions, and to solve the definite BOQP.

Research Methodology
This research work involved collecting information about solving the definite quadratic bi-objective programming problem, with a focus on the KKT conditions, from optimization books, optimization journals, and other materials and references from the internet. The collected material and the techniques or methods used by other authors in relation to the KKT conditions were examined.
a) Important theorems and facts to support the method are discussed.
b) The definite quadratic bi-objective programming problem is changed into a single programming problem by using the KKT conditions. After changing it into a single programming problem, LINGO (15.0) is applied to solve the problem.
A symmetric matrix $Q$ with leading principal minors $D_1, D_2, \ldots, D_n$ is:
i. positive definite if and only if $D_1 > 0, D_2 > 0, \ldots, D_n > 0$, that is, all leading principal minors are strictly greater than zero;
ii. negative definite if and only if $D_1 < 0, D_2 > 0, D_3 < 0, \ldots$, that is, the leading principal minors alternate in sign starting with a negative one (the value of the $k$-th leading principal minor has the sign of $(-1)^k$);
iii. positive semi-definite if and only if all principal minors are greater than or equal to zero;
iv. negative semi-definite if and only if all principal minors of odd order are less than or equal to zero, and all principal minors of even order are greater than or equal to zero.
A bi-objective programming problem in which both objective functions are definite quadratic is called a definite quadratic bi-objective program. Consider the following definite quadratic bi-objective programming problem. The second objective $f_2$ can therefore be written as a quadratic function of $Y$ alone. If a constant term exists, it is dropped from the model since it plays no role in the optimization step. Thus $f_2$ is fixed prior to the maximization of $f_1$, because the second objective function controls the decision variable $Y$ only.
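The strict definiteness tests in (i)-(ii) (Sylvester's criterion) can be sketched as follows; the matrices are illustrative, and note that the semi-definite cases (iii)-(iv) would require all principal minors, not only the leading ones:

```python
import numpy as np

def leading_minors(Q):
    # Determinants of the k x k upper-left submatrices, k = 1..n
    return [np.linalg.det(Q[:k, :k]) for k in range(1, Q.shape[0] + 1)]

def is_positive_definite(Q):
    # All leading principal minors strictly positive
    return all(d > 0 for d in leading_minors(Q))

def is_negative_definite(Q):
    # The k-th leading principal minor carries the sign of (-1)^k
    return all((-1) ** (i + 1) * d > 0
               for i, d in enumerate(leading_minors(Q)))

P = np.array([[2.0, 1.0],
              [1.0, 3.0]])
pd_flag = is_positive_definite(P)      # minors 2, 5 -> True
nd_flag = is_negative_definite(-P)     # minors -2, 5 -> True
print(pd_flag, nd_flag)
```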
The second objective problem is equivalent to maximizing $f_2(Y)$ subject to $BY \le b - AX$ and $Y \ge 0$. Therefore, problem BOQP becomes equivalent to a problem whose lower-level constraints depend on $X$. In this study, the researcher assumed that, for minimization-type problems, the objective functions of the bi-objective programming problem are convex with positive semi-definite matrices $P$ and $Q$, and the second objective function controls the decision variable $Y$ only. For maximization-type problems, on the other hand, the objective functions were assumed to be concave with negative semi-definite matrices $P$ and $Q$, the second objective function again controlling the decision variable $Y$ only, over a convex polyhedral region determined by linear constraints.
The aim of this research was to solve definite quadratic bi-objective programming problems by using the KKT conditions and the LINGO (15.0) mathematical software. It is therefore necessary to know the concept of the Karush-Kuhn-Tucker conditions.
Kuhn and Tucker developed the necessary and sufficient conditions for the NLP problem by assuming $f$, $g_i$, and $h_j$ are differentiable. The general NLP problem is given by: maximize $f(x)$ subject to $g_i(x) \ge 0$, $i = 1, 2, \ldots, m$, and $h_j(x) = 0$, $j = 1, 2, \ldots, k$. Observe that for the minimization problem, all one needs to do is change the minus sign in the Lagrangian to a plus, because finding a minimum for $f$ is the same as finding a maximum for $-f$. Note that the above KKT conditions are necessary conditions in general, but they are also sufficient for convex problems.
Theorem 3.1 (Kuhn-Tucker Necessary Theorem): Consider the NLP problem, let $f$, $g_i$, and $h_j$ be differentiable functions, and let $x^*$ be a feasible solution to the NLP problem. Let $I = \{ i : g_i(x^*) = 0 \}$ be the index set of the active inequality constraints.
Proof: The proof of Theorem 3.1 is found in [7]. The condition that $\nabla g_i(x^*)$, $i \in I$, and $\nabla h_j(x^*)$, $j = 1, 2, \ldots, k$, are linearly independent at the optimum is known as the constraint qualification, and this constraint qualification holds in the following cases:
i. when all the constraints are linear;
ii. when all the inequality constraints are concave functions, the equality constraints are linear, and there exists at least one feasible point strictly inside the feasible region of the inequality constraints; in other words, there exists an $\bar{x}$ such that $g_i(\bar{x}) > 0$ for all $i$.
Note that when the constraint qualification is not met at the optimum, there may not exist a solution to the Kuhn-Tucker problem. Therefore, do not apply the Kuhn-Tucker optimality conditions when the constraint qualification is not met.
We know that a quadratic programming problem is a special type of NLP problem in which one either minimizes or maximizes a quadratic objective function subject to linear constraints. Therefore, we can apply the Kuhn-Tucker optimality conditions, because the constraint qualification is fulfilled since the constraints are linear. For this reason, let us consider the following optimality conditions for the quadratic programming problem.
Consider the quadratic programming problem (QP): maximize $f(x) = c^T x + x^T Q x$ subject to $Ax \le b$ and $x \ge 0$. A pair $(\bar{x}, \bar{u}) \in \mathbb{R}^n \times \mathbb{R}^m$ is said to be a Karush-Kuhn-Tucker pair (or KKT pair) for the quadratic program if and only if the following conditions are satisfied:
$c + 2Q\bar{x} - A^T \bar{u} \le 0$, $\bar{x}^T (c + 2Q\bar{x} - A^T \bar{u}) = 0$, $A\bar{x} \le b$, $\bar{u}^T (b - A\bar{x}) = 0$, $\bar{x} \ge 0$, $\bar{u} \ge 0$.
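To make the KKT-pair conditions concrete, here is a small numerical check. The problem data are invented (max $4x - x^2$ subject to $x \le 3$, $x \ge 0$, whose optimum is $x = 2$ with multiplier $u = 0$), and the signs follow the maximization form stated above:

```python
import numpy as np

# Check whether (x, u) is a KKT pair for  max c^T x + x^T Q x  s.t. Ax <= b, x >= 0
Q = np.array([[-1.0]])
c = np.array([4.0])
A = np.array([[1.0]])
b = np.array([3.0])

def is_kkt_pair(x, u, tol=1e-8):
    grad_L = c + 2 * Q @ x - A.T @ u                         # gradient in x
    return bool(np.all(A @ x <= b + tol) and np.all(x >= -tol)  # primal feasibility
                and np.all(u >= -tol)                           # dual feasibility
                and np.all(grad_L <= tol)                       # stationarity (max form)
                and abs(x @ grad_L) <= tol                      # complementarity in x
                and abs(u @ (b - A @ x)) <= tol)                # complementarity in u

ok = is_kkt_pair(np.array([2.0]), np.array([0.0]))
print(ok)  # True: at x = 2, grad_L = 4 - 4 - 0 = 0
```

A non-optimal point such as $x = 1$ with $u = 0$ fails the stationarity test, since the gradient there is $4 - 2 = 2 > 0$.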

Theorem 3.2: (Necessary Conditions for Optimality in Quadratic Programming)
Consider the quadratic programming problem (QP). If $\bar{x} \in S$ solves (QP), then there exists a vector $u^*$ such that $(\bar{x}, u^*)$ is a KKT pair for (QP).
Before solving for the optimal solution of the definite quadratic bi-objective programming problem, we have to check whether the solutions of a given program exist and are unique. For this reason, consider the following existence and uniqueness theorem. If $f : S \to \mathbb{R}$ is a continuous function and $S$ is a non-empty, closed and bounded subset of $\mathbb{R}^n$, then there exist $x_{\min}$ and $x_{\max}$ in $S$ such that $f(x_{\min}) \le f(x) \le f(x_{\max})$ for all $x \in S$. Proof: The proof of Theorem 3.3 is found in [1]. Apply the idea of the Weierstrass theorem to the lower objective function $f_2(Y)$ above. Let $S$ be the feasible set. Suppose $S$ is non-empty, closed and bounded, and $f$ is a convex objective function; then there exists a global optimum point for $f$.
To assure that KKT optimality conditions are both necessary and sufficient for obtaining the global optimum of the inner problem, consider the following theorem.
where $u$ is the Lagrange multiplier. By considering (14) as a constraint for the first objective programming problem, we get the single-level problem
(P): maximize $f_1(X, Y)$
subject to
$AX + BY \le b$,
$u^T (b - AX - BY) = 0$,
$2QY + d = B^T u$,
$X, Y, u \ge 0$.
Problem (P) is a single-level maximization problem with one quadratic objective function and nonlinear constraints. We can now solve problem (P) with the LINGO (15.0) software, because this software is appropriate for finding the optimal solution of such problems.
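The paper hands the single-level KKT reformulation to LINGO (15.0). As a purely illustrative open-source sketch of the same idea (all problem data below are invented, not the paper's example), one can solve the lower-level QP exactly through its KKT conditions for each fixed $X$ and then maximize the upper concave objective over $X$ by a grid search:

```python
import numpy as np

# Lower level:  max_y 5y - y^2   s.t.  x + y <= 4,  y >= 0   (for fixed x)
# KKT gives y = 2.5 when feasible (stationarity 5 - 2y = 0), else the
# active-constraint value 4 - x, clipped at 0.
def lower_level_y(x):
    return max(0.0, min(2.5, 4.0 - x))

# Upper level: concave (negative definite) illustrative objective
def f1(x, y):
    return -(x - 3.0) ** 2 - (y - 3.0) ** 2

xs = np.linspace(0.0, 4.0, 4001)            # grid over x
vals = [f1(x, lower_level_y(x)) for x in xs]
i = int(np.argmax(vals))
best_x, best_y = float(xs[i]), lower_level_y(xs[i])
print(best_x, best_y, vals[i])  # about x = 2, y = 2, f1 = -2
```

The grid search stands in for the nonlinear solver only because the example is one-dimensional; in practice the KKT-reformulated problem is given to a constrained optimizer, as the paper does with LINGO.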
Therefore, the following necessary and sufficient optimality result for BOQP (Theorem 3.6) clarifies that the point that solves problem (P) is an optimal solution to the quadratic bi-objective programming problem BOQP. Note: An optimal solution is a feasible solution that has a maximum objective function value for a maximization-type problem, or a minimum objective function value for a minimization-type problem. Let $S = \{ (X, Y) : AX + BY \le b;\ X, Y \ge 0 \}$, and let $S$ be assumed to be closed and bounded.
For problem BOQP, the first objective function solution space is given above, which implies that $(X^*, Y^*, u^*)$ is a feasible solution to problem (P). We will show that it is an optimal solution to (P). If $(X^*, Y^*, u^*)$ is not an optimal solution to problem (P), then there exists $(\bar{X}, \bar{Y}, \bar{u})$ satisfying (16)-(19) with $f(\bar{X}, \bar{Y}) \ge f(X^*, Y^*)$. As $Q$ is negative semi-definite, $f_2(Y)$ is a concave maximization problem. Therefore (16)-(19) are sufficient conditions for $\bar{Y}$ to be an optimal solution. Hence $\bar{Y} \in Y(\bar{X})$ and $f(\bar{X}, \bar{Y}) \ge f(X^*, Y^*)$. This contradicts the optimality of $(X^*, Y^*)$ for problem BOQP. Therefore, $(X^*, Y^*, u^*)$ is optimal for problem (P). Conversely, assume that there exists a $u^* \in \mathbb{R}^m$ such that $(X^*, Y^*, u^*)$ solves (P). We will show that $(X^*, Y^*)$ solves BOQP. As shown above, $Y^* \in Y(X^*)$. For any $(X, Y) \in \bar{S}$, $Y \in Y(X)$ implies that $Y$ solves the lower problem $f_2$ for the given $X$. So, by the Kuhn-Tucker necessary conditions, there exists a $u \ge 0$ such that (16)-(19) hold. Thus $(X, Y, u)$ is feasible for (P), and hence $f(X^*, Y^*) \ge f(X, Y)$, which proves that $(X^*, Y^*)$ solves BOQP.
The stepwise description of the method for solving the optimal solution of a definite quadratic bi-objective programming problem using the KKT conditions is as follows. Given a definite BOQP with first objective $f_1(X, Y)$, the Lagrangian function for the second objective function is
$L(X, Y, u_1, u_2, u_3) = X + 5Y - Y^2 - u_1(X + Y - 5) - u_2(3X + 2Y - 9) - u_3(2X + Y - 6)$,
with $X, Y, u_1, u_2, u_3 \ge 0$, where $u_1, u_2, u_3$ are the Lagrange multipliers. First, apply the KKT conditions to the second objective function by considering $X$ as fixed.
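Assuming the second objective in this example is $f_2(X, Y) = X + 5Y - Y^2$ (the squared term is our reconstruction of the garbled text) with constraints $X + Y \le 5$, $3X + 2Y \le 9$, $2X + Y \le 6$, $X, Y \ge 0$, the lower-level KKT step for a fixed $X$ can be sketched as:

```python
import numpy as np

# For fixed x, solve  max_y x + 5y - y^2  subject to the three linear
# constraints, and recover multipliers u1, u2, u3 satisfying the KKT
# conditions of the Lagrangian
#   L = x + 5y - y^2 - u1(x + y - 5) - u2(3x + 2y - 9) - u3(2x + y - 6).
def solve_lower(x):
    bounds = [5.0 - x, (9.0 - 3.0 * x) / 2.0, 6.0 - 2.0 * x]  # caps on y
    y = float(np.clip(2.5, 0.0, min(bounds)))   # stationary point of 5y - y^2 is y = 2.5
    u = [0.0, 0.0, 0.0]
    if y < 2.5:                                 # a constraint is active: dL/dy = 0 gives u
        j = int(np.argmin(bounds))
        coeff = [1.0, 2.0, 1.0][j]              # coefficient of y in constraint j
        u[j] = (5.0 - 2.0 * y) / coeff
    return y, u

y, u = solve_lower(2.0)                         # tightest cap: (9 - 6)/2 = 1.5
stationarity = 5.0 - 2.0 * y - (u[0] + 2.0 * u[1] + u[2])   # dL/dy, should be 0
print(y, u, stationarity)
```

With the single active constraint handled through complementary slackness, the stationarity residual `dL/dy` vanishes at the recovered pair, which is exactly what the KKT step requires before the problem is folded into the single-level formulation.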