American Journal of Applied Mathematics
Volume 3, Issue 3, June 2015, Pages: 81-89

Weighted Method Based Trust Region-Particle Swarm Optimization for Multi-Objective Optimization

M. A. El-Shorbagy

Department of Basic Engineering Science, Faculty of Engineering, Menoufiya University, Shebin El-Kom, Egypt

To cite this article:

M. A. El-Shorbagy. Weighted Method Based Trust Region-Particle Swarm Optimization for Multi-Objective Optimization. American Journal of Applied Mathematics. Vol. 3, No. 3, 2015, pp. 81-89. doi: 10.11648/j.ajam.20150303.11

Abstract: In this study, a hybrid approach combining the trust region (TR) algorithm and particle swarm optimization (PSO) is proposed to solve multi-objective optimization problems (MOOPs). The proposed approach integrates the merits of both TR and PSO. Firstly, the MOOP is converted by the weighted method to a single objective optimization problem (SOOP) and a number of points in the search space are generated. Secondly, the TR algorithm is applied to solve the SOOP to obtain a point on the Pareto frontier. Finally, all the points obtained by TR are used as particle positions for PSO, where a homogeneous PSO is applied to get all nondominated solutions on the Pareto frontier. In addition, a dynamic constriction factor is presented to restrict and control the velocity of the particles. Various kinds of multiobjective (MO) benchmark problems are solved to show the importance of the hybrid algorithm in generating the Pareto optimal set. The results demonstrate the superiority of the proposed algorithm in solving MOOPs.

Keywords: Multi-Objective Optimization, Trust Region algorithm, Particle Swarm Optimization, Pareto Optimal Set, Weighted Method

1. Introduction

TR methods generate steps with the help of a quadratic model of the objective function: they define a region around the current iterate within which they trust the model to be an adequate representation of the objective function, and then choose the step to be an approximate minimizer of the model in this region. If a step is not acceptable, they reduce the size of the region and find a new minimizer. In general, the direction of the step changes whenever the size of the TR is altered [1,2].

Because of the boundedness of the TR, TR algorithms can use non-convex approximate models. This is one of the advantages of TR algorithms compared with line search algorithms. TR algorithms are reliable and robust, they can be applied to ill-conditioned problems, they have very strong convergence properties, and they have been proven to be theoretically and practically effective and efficient for unconstrained and equality-constrained optimization problems [3,4]. Also, the TR algorithm has proven to be a very successful globalization technique for nonlinear programming problems with equality and inequality constraints [5].

For MOOPs, Kim and Ryu [6] developed an iterative algorithm for bi-objective stochastic optimization problems based on the TR method and investigated different sampling schemes. Their algorithm does not require any strong modeling assumptions and has great potential to work well in various real-world settings. In addition, in [7] El-Sawy et al. proposed a new algorithm to solve MOOPs by applying TR-algorithm-based local search (LS) techniques.

On the other hand, PSO is an evolutionary computation (EC) model based on swarm intelligence. PSO was inspired by research on artificial life and was developed by Kennedy et al. [8]. Similar to EC techniques, PSO is also a population-based optimizer. The system is first initialized with a set of randomly generated potential solutions and then iteratively searches for the optimum. Although PSO does not possess the crossover and mutation operators used in EC, it finds the optimum solution by having the swarm follow the best particle. Compared to EC, PSO has a more profound intelligent background and can be implemented more easily. Owing to these advantages, PSO is suitable not only for scientific research but also for engineering applications, in the fields of evolutionary computing, optimization, and many others.

Multi-objective optimization (MOO) has been one of the most studied application areas of PSO algorithms. A number of approaches have been utilized and/or designed to tackle MOOPs using PSO. A straightforward approach is to convert the MOO problem to a SOOP. Parsopoulos and Vrahatis [9] presented a first study of the performance of PSO on MOOPs. In recent years, many particle swarm algorithms have been proposed for solving MOOPs. For instance, Mousa et al. [10] proposed an LS-based hybrid PSO algorithm for MOOPs. A comprehensive survey of the state-of-the-art in multi-objective (MO) particle swarm optimizers can be found in [11], where the different techniques reported in multi-objective particle swarm optimization (MOPSO) development are categorized and discussed.

This study presents a hybrid algorithm combining TR and PSO for solving MOOPs, which can overcome the disadvantages of the TR method (such as restrictions on the TR radius) and solve a class of MOOPs efficiently. It is a new algorithm that combines random searching and deterministic searching for solving MOOPs. In the proposed algorithm, the MOOP is converted to a SOOP, TR is used to obtain a point on the Pareto frontier, and a homogeneous PSO is applied to get all the points on the Pareto frontier.

2. Multi-Objective Optimization

MOO is a very important research area in engineering studies because real-world design problems require the optimization of a group of objectives. Thanks to the effort of scientists and engineers during the last two decades, particularly the last decade, a wealth of MO optimizers have been developed and some MOOPs that could not be solved hitherto were successfully solved by using these optimizers [12]. The general minimization problem of q objectives can be mathematically stated as:

\min_{x} \; F(x) = \left( f_1(x), f_2(x), \ldots, f_q(x) \right) \quad \text{subject to: } g_i(x) \le 0, \; i = 1, \ldots, m, \quad h_e(x) = 0, \; e = 1, \ldots, p \qquad (1)

where f_j(x) is the j-th objective function (j = 1, \ldots, q), g_i(x) is the i-th inequality constraint, h_e(x) is the e-th equality constraint, and x = (x_1, \ldots, x_n)^T is the vector of optimization or decision variables, where n is the dimension of the decision variable space. The MOO problem then reduces to finding an x such that F(x) is optimized. Since the notion of an optimum solution in a MOOP is different from that in a SOOP, the concept of Pareto dominance is used for the evaluation of the solutions. This concept, formulated by Vilfredo Pareto, is defined as follows [13]:

Definition 1. (Dominance Criteria). For a problem having more than one objective function (say, f_j, j = 1, \ldots, q), any two solutions x^{(1)} and x^{(2)} can have one of two possibilities: one dominates the other, or neither dominates the other. A solution x^{(1)} is said to dominate the other solution x^{(2)} if both of the following conditions are true.

1.  The solution x^{(1)} is no worse than x^{(2)} in all objectives, i.e., f_j(x^{(1)}) \le f_j(x^{(2)}) for all j = 1, 2, \ldots, q objectives (for minimization).

2.  The solution x^{(1)} is strictly better than x^{(2)} in at least one objective, i.e., f_j(x^{(1)}) < f_j(x^{(2)}) for at least one j \in \{1, 2, \ldots, q\}.

If either of the above conditions is violated, the solution x^{(1)} does not dominate the solution x^{(2)}.

Definition 2. (Pareto optimal solution). x^* is said to be a Pareto optimal solution of the MOOP if there exists no other feasible x such that f_j(x) \le f_j(x^*) for all j = 1, 2, \ldots, q and f_j(x) < f_j(x^*) for at least one objective function f_j.
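Definitions 1 and 2 translate directly into a small predicate. The sketch below is in Python (the paper's own implementation was in MATLAB, and the function name here is illustrative); it checks the two dominance conditions for a minimization problem:

```python
import numpy as np

def dominates(f1, f2):
    """True if objective vector f1 dominates f2 (minimization):
    no worse in every objective and strictly better in at least one."""
    f1, f2 = np.asarray(f1, dtype=float), np.asarray(f2, dtype=float)
    return bool(np.all(f1 <= f2) and np.any(f1 < f2))
```

For example, dominates([1, 2], [2, 2]) holds, while two identical objective vectors never dominate each other.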

3. Weighted Method

The weighted method is an intuitive way to approach MOO. In this approach, the different objectives are weighted and summed up into one single objective. By using the weighted method [14], we convert the constrained MOOP (1) to a SOOP. This method creates a single-objective model by assigning a weight to each of the q objective functions. Through the weighted method, the MOOP (1) is formulated as:

\min_{x} \; \sum_{j=1}^{q} w_j f_j(x) \quad \text{subject to: } g_i(x) \le 0, \; i = 1, \ldots, m, \quad h_e(x) = 0, \; e = 1, \ldots, p \qquad (2)

where w_1, \ldots, w_q are non-negative weights with \sum_{j=1}^{q} w_j = 1. The weights are determined as follows:

w_j = \frac{random_j}{\sum_{i=1}^{q} random_i}, \quad j = 1, \ldots, q \qquad (3)

where random_1, random_2, \ldots, random_q are non-negative random integers. The following conclusions can be drawn for the weighted method:

•   It is computationally very efficient.

•   It is conceptually very easy to understand.

•   Only one solution is obtained per run, assuming that the Pareto front is convex.

•   Solutions located in the concave region of the Pareto front cannot be obtained.
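The conversion above can be sketched as follows in Python (the paper used MATLAB; the helper names are illustrative, and uniform random draws stand in for the non-negative random values that are normalized to sum to one):

```python
import random

def random_weights(q, rng=None):
    """Draw q non-negative random values and normalize them to sum to 1,
    as in the weight-generation rule of the weighted method."""
    rng = rng or random.Random(0)
    r = [rng.random() for _ in range(q)]
    s = sum(r)
    return [ri / s for ri in r]

def weighted_objective(objectives, weights):
    """Collapse a list of objective functions f_j into the single
    scalarized objective F(x) = sum_j w_j * f_j(x)."""
    return lambda x: sum(w * f(x) for w, f in zip(weights, objectives))
```

Each run of the scalarized problem yields a single Pareto point; varying the weights across runs traces out the convex part of the front.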

4. Particle Swarm Optimization

PSO is an evolutionary computation technique motivated by the simulation of social behavior [8]. The PSO algorithm is a population-based metaheuristic that combines global exploration and local exploitation to find the optimum solution.

The algorithm is initialized by creating a swarm, that is, a population of particles, with random positions. Every particle is represented as a vector in an n-dimensional search space, where x_i and v_i are the position and velocity, respectively, and p_i is the personal best position (pbest) found by the i-th particle. In addition, the best position obtained by the entire population, p_g (gbest), is computed to update the particle velocity. Based on p_i and p_g, the next velocity and position of the i-th particle are computed using (4) and (5) as follows (the superscript t denotes the iteration number):

v_i^{t+1} = w \, v_i^{t} + c_1 r_1 \left( p_i^{t} - x_i^{t} \right) + c_2 r_2 \left( p_g^{t} - x_i^{t} \right) \qquad (4)

x_i^{t+1} = x_i^{t} + v_i^{t+1} \qquad (5)

where i = 1, 2, \ldots, N and N is the size of the population; w is the inertia weight; c_1 and c_2 are two positive constants, called the cognitive and social parameters respectively; r_1 and r_2 are random numbers uniformly distributed within the range [0, 1].
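The velocity and position updates amount to one vectorized operation over the whole swarm. A minimal NumPy sketch (the paper's code was MATLAB; the function name is illustrative and the parameter defaults are the w, c1, c2 values listed in Table 1):

```python
import numpy as np

def pso_step(x, v, pbest, gbest, rng, w=0.6, c1=2.8, c2=1.3):
    """One synchronous application of the velocity update (4) and the
    position update (5) to an N-by-n swarm of particles."""
    r1 = rng.random(x.shape)  # fresh uniform [0, 1] draws each iteration
    r2 = rng.random(x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v_new, v_new
```

Note that if every particle already sits at its personal best and the global best with zero velocity, the update leaves the swarm stationary, as expected.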

5. The Proposed Approach

In the following, the proposed algorithm is presented. It contains three stages: an initialization stage, a TR stage, and a PSO stage.

5.1. Initialization Stage

•  Initialize N points in the search space.

•  Convert the MOOP to a SOOP:

•  The non-negative weights (w_1, \ldots, w_q) are generated using (3).

•  Construct the weighted problem (2).

•  Convert the general nonlinear optimization problem (2) to an equality-constrained problem:

•  Following Dennis et al. [15], we define the indicator matrix U(x), whose diagonal entries are

u_{ii}(x) = \begin{cases} 1 & \text{if } g_i(x) \ge 0 \\ 0 & \text{if } g_i(x) < 0 \end{cases}, \quad i = 1, \ldots, m \qquad (6)
•  Using this matrix, the problem defined in (2) can be transformed into the following equality-constrained optimization problem:

\min_{x} \; \sum_{j=1}^{q} w_j f_j(x) \quad \text{subject to: } h(x) = 0, \quad U(x)\,g(x) = 0 \qquad (7)

The above problem can be rewritten as:

\min_{x} \; f_w(x) \quad \text{subject to: } H(x) = 0 \qquad (8)

where f_w(x) = \sum_{j=1}^{q} w_j f_j(x) and H(x) = \left( h(x)^T, \, (U(x)g(x))^T \right)^T.

The matrix U(x) is discontinuous; however, the function U(x)g(x) is Lipschitz continuous and the function \|U(x)g(x)\|^2 is continuously differentiable [15].

•  The Lagrangian function associated with the problem defined in (8) is given by:

L(x, \lambda) = f_w(x) + \lambda^T H(x) \qquad (9)

where \lambda is the Lagrange multiplier vector associated with the equality constraint H(x) = 0.

•  The augmented Lagrangian is the function:

\Phi(x, \lambda; r) = L(x, \lambda) + r \, \|H(x)\|^2 \qquad (10)

where r is a positive parameter usually called the penalty parameter.
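As a concrete sketch, the augmented Lagrangian merit function of the form f(x) + λᵀh(x) + r‖h(x)‖² can be evaluated as follows (Python rather than the paper's MATLAB; function and argument names are illustrative, and h collects the equality constraints):

```python
import numpy as np

def augmented_lagrangian(f, h, x, lam, r):
    """Phi(x, lam; r) = f(x) + lam^T h(x) + r * ||h(x)||^2 -- the merit
    function later used to accept or reject trust-region trial steps."""
    hx = np.asarray(h(x), dtype=float)
    return float(f(x) + lam @ hx + r * (hx @ hx))
```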

5.2. TR Stage

The detailed description of the TR algorithm for solving problem (8) is presented below.

The reduced Hessian approach is used to compute a trial step d_k. In this approach, the trial step d_k is decomposed into two orthogonal components: the normal component v_k and the tangential component Z_k u_k. The trial step thus has the form d_k = v_k + Z_k u_k, where Z_k is a matrix whose columns form an orthonormal basis for the null space of \nabla H(x_k)^T.

We obtain the normal component v_k by solving the following TR sub-problem:

\min_{v} \; \left\| H(x_k) + \nabla H(x_k)^T v \right\|^2 \quad \text{subject to: } \|v\| \le \zeta \Delta_k \qquad (11)

for some \zeta \in (0, 1), where \Delta_k is the TR radius.

Given the normal component, we compute the tangential component Z_k u_k by solving the following TR sub-problem:

\min_{u} \; \left( \nabla_x L(x_k, \lambda_k) + B_k v_k \right)^T Z_k u + \tfrac{1}{2} u^T Z_k^T B_k Z_k u \quad \text{subject to: } \|Z_k u\| \le \Delta_k \qquad (12)

where B_k is an approximation of the Hessian of the Lagrangian at x_k.
Once the trial step is computed, it needs to be tested to determine whether it will be accepted. To do that, a merit function is needed; we use the augmented Lagrangian function (10) as the merit function. To test the step, we compare the actual reduction in the merit function in moving from x_k to x_k + d_k against the predicted reduction.

The actual reduction in the merit function is defined as:

Ared_k = \Phi(x_k, \lambda_k; r_k) - \Phi(x_k + d_k, \lambda_{k+1}; r_k) \qquad (13)
The predicted reduction in the merit function is defined as:


where, .

If the ratio of the actual to the predicted reduction is smaller than a fixed acceptance constant, then the step is rejected. In this case, the radius of the TR is decreased by multiplying it by a contraction factor in (0, 1), and another trial step is computed using the new TR radius.

If the ratio exceeds the acceptance constant but is not large, then the step is accepted and the TR radius is kept unchanged.

If the ratio is sufficiently large, then the step is accepted and the TR radius is enlarged. Finally, the algorithm is terminated when either the TR radius or the norm of the trial step falls below a prescribed tolerance.
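The acceptance test reduces to comparing the ratio of actual to predicted reduction against fixed thresholds. The following is a generic trust-region radius update in Python, not the paper's exact rule: the threshold and factor names and their default values are illustrative assumptions.

```python
def tr_update(ared, pred, delta, eta1=1e-4, eta2=0.75,
              shrink=0.25, expand=2.0, delta_min=1e-3, delta_max=1e5):
    """Decide acceptance of a trial step and resize the trust-region
    radius from the ratio of actual (ared) to predicted (pred) reduction.
    Returns (accepted, new_radius)."""
    ratio = ared / pred
    if ratio < eta1:                  # poor model agreement: reject and shrink
        return False, max(shrink * delta, delta_min)
    if ratio >= eta2:                 # very good agreement: accept and expand
        return True, min(expand * delta, delta_max)
    return True, delta                # acceptable agreement: keep the radius
```

The clamping to delta_min and delta_max mirrors the bounded radius range listed in Table 1.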

5.3. PSO Stage

In this stage, a homogeneous PSO for the MOOP is proposed with a dynamic constriction factor to restrict and control the velocity of the particles [16]. In the homogeneous PSO, one global repository is used for choosing pbest and gbest; this means that each particle loses its own identity and is treated simply as a member of a social group. The procedure of the PSO stage is as follows.

Step1: Initialization

·         All nondominated points (obtained by applying the TR stage) are chosen as particle positions.

·         PSO parameters such as the velocity, the inertia weight w, and the learning rates c1 and c2 are set up.

·         Nondominated particles are stored in the Pareto repository. If no specific constraint is imposed on the repository, its size is unlimited.

Step2: Evaluation

·         Evaluate the MO fitness value of each particle and save it in a vector form.

Step3: Floating

·         Two optimal solutions are chosen randomly for pbest and gbest from the repository.

·         Determine the new position of each particle using (4) and (5).

Step4: Repairing of Particles

·         To restrict (control) the particle's velocity, a modified constriction factor (i.e., a dynamic constriction factor) is presented to keep the particles feasible. Fig. 1 shows the movement of particle i through the search space without any control factor (dashed line) and with the modified constriction factor (solid line). When particle i starts at a feasible position with some velocity, the new position in Fig. 1 depends on that velocity and may make the particle lose its feasibility, so we introduce a modified constriction factor:


where t is the age of the infeasible particle (i.e., how long it has remained infeasible); it is increased with the number of failed trials to restore the feasibility of the particle. The new modified positions of the particles are computed as:


Figure 1. The movement of the particle i through search space.

·         For each particle, feasibility is checked; if the particle is infeasible, the constriction factor is applied to control its position and velocity.

Step5: Selection and Update the Repository

·         Check the Pareto optimality of each particle. If the fitness value of the particle is nondominated when compared to the Pareto optimal set in the repository, save it in the Pareto repository.

·         In the Pareto repository, if a particle is dominated by a new one, it is discarded.
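The repository maintenance of this step can be sketched as follows (Python rather than the paper's MATLAB, with an unlimited archive as in Step 1; `dominates` restates Definition 1 and the function names are illustrative):

```python
def dominates(f1, f2):
    # Definition 1, for minimization: no worse everywhere, better somewhere
    return (all(a <= b for a, b in zip(f1, f2))
            and any(a < b for a, b in zip(f1, f2)))

def update_repository(repo, f_new):
    """Insert a new objective vector into the Pareto repository: reject it
    if it is dominated, and discard repository members it dominates."""
    if any(dominates(f, f_new) for f in repo):
        return repo
    return [f for f in repo if not dominates(f_new, f)] + [f_new]
```

Applied after every evaluation, this keeps the repository a mutually nondominated set at all times.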

Step6: Repeat

·         Repeat steps 2 to 5 until the given number of iterations is reached.

The PSO stage needs at least two Pareto solutions in the first generation to avoid premature convergence. Figure 2 shows the flow chart of the proposed algorithm.

Figure 2. Flow chart of proposed algorithm.

6. Numerical Results

In order to validate the proposed algorithm, several benchmark problems reported in the literature [17] are solved.

All test problems were executed on an Intel® Core™ i3 3 GHz processor. The proposed approach is coded in the MATLAB programming language. The parameters adopted in the implementation of the proposed algorithm are listed in Table 1.

For evaluating the performance of the proposed approach, nine well-known MO benchmark problems are used. Each test problem consists of two objective functions, with or without constraints, and has a continuous or discrete, convex or nonconvex Pareto front. The following test problems are considered [17]:

Table 1. The parameter adopted in the implementation of the proposed algorithm.

Parameter | Value          | Parameter      | Value
N         | 20–50          | Δmax           | 10^5 Δ0
ε1, ε2    | 10^-7          | Δmin           | 10^-3
τ0        | 0              | PSO iterations | 50–200
τ1        | 2              | w              | 0.6
τ2        | 0.25           | c1             | 2.8
τ3        | 0.25           | c2             | 1.3
Δ0        | (1, 1.5)×Δmin  | t              | 15

Test Problem-1 (Continuous Convex):

Subject to:

Test Problem-2 (Continuous Convex):

Subject to:

Test Problem-3 (Discrete):

Subject to:

Test Problem-4 (Continuous Convex):

Subject to:

Test Problem-5 (Continuous Convex):

Subject to:

Test Problem-6 (Continuous Convex):

Subject to:


Test Problem-7 (Continuous Convex):

Subject to:


Test Problem-8 (Discrete):

Subject to:

Test Problem-9 (Continuous Non-convex):

Subject to:


7. Results and Discussion

As shown in Figs. 3-11, the proposed approach is able to obtain the Pareto fronts of these kinds of problems.

For test problems 1-5 (Figs. 3-7), we can see that our approach is able to find a well-distributed Pareto-optimal curve in the objective space. It is also observed that the resulting Pareto front is smooth and uniformly distributed, and that the approach achieves very good solutions at the two ends of the curve.

Test problem 6 (Fig. 8) and test problem 7 (Fig. 9) are fairly simple in that the constraints may not introduce additional difficulty in finding the Pareto-optimal solutions. It can be observed that our approach performs well and achieves a dense sampling of solutions along the true Pareto-optimal curve.

Test problem 8 (Fig. 10) and test problem 9 (Fig. 11) are relatively difficult. The constraints in test problem 8 make the Pareto-optimal set discontinuous, while the constraints in test problem 9 divide the Pareto-optimal set into five regions.

Figure 3. Pareto front for the Test Problem (1).

Figure 4. Pareto front for the Test Problem (2).

Figure 5. Pareto front for the test problem (3).

Figure 6. Pareto front for the test problem (4).

Figure 7. Pareto front for the Test Problem (5).

Figure 8. Pareto front for the Test Problem (6).

Figure 9. Pareto front for the Test Problem (7).

Figure 10. Pareto front for the Test Problem (8).

Figure 11. Pareto front for the Test Problem (9).

As can be seen from the graph for test problem 8 (Fig. 10), our approach displays a good distribution of the Pareto optimal points, with no gaps between the nondominated solutions, which makes the curve smooth. For test problem 9 (Fig. 11), it can be seen that our approach gives a good sampling of points at the mid-section of the curve but finds only a few points in the rest of the curve.

Generally, we can say that the results demonstrate that the proposed algorithm can successfully find the Pareto optimal set for all test problems except test problem 9, whose Pareto front is nonconvex in the objective space. Whereas classical techniques aim to give a single point (solution) in each run of problem solving, the proposed approach generates a set of Pareto optimal solutions, which saves computing time.

There are usually two important aspects of MOO performance. One is the spread across the Pareto optimal front and the other is the ability to attain the global optimum or final tradeoffs. Every MO optimizer should have the abilities of exploration and exploitation to achieve these two goals simultaneously. There are several metrics that express these two aspects with a quantitative assessment.

To evaluate the proposed algorithm, the Generational Distance (GD) criterion is used [6]. When the optimal Pareto set is known, GD estimates how far the elements in the set of nondominated vectors found so far are from those in the Pareto optimal set, and is defined as follows:

GD = \frac{\sqrt{\sum_{i=1}^{N_v} d_i^2}}{N_v}

where N_v is the number of vectors in the set of nondominated solutions found so far and d_i is the Euclidean distance between each of these vectors and the nearest member of the Pareto optimal set. If all the solution candidates are in the Pareto optimal set, then the value of GD is 0.
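Taking the common form of the metric, GD = sqrt(Σ dᵢ²)/Nv with dᵢ the nearest-neighbour distance to the true front, it can be computed with NumPy as follows (the function name is illustrative):

```python
import numpy as np

def generational_distance(found, pareto):
    """GD = sqrt(sum_i d_i^2) / Nv, where d_i is the Euclidean distance
    from the i-th nondominated vector found to the nearest true Pareto
    point, and Nv is the number of vectors found."""
    found = np.asarray(found, dtype=float)
    pareto = np.asarray(pareto, dtype=float)
    diffs = found[:, None, :] - pareto[None, :, :]     # (Nv, |P|, q)
    d = np.min(np.linalg.norm(diffs, axis=2), axis=1)  # nearest-neighbour dists
    return float(np.sqrt(np.sum(d ** 2)) / len(found))
```

A set that lies exactly on the true front yields GD = 0, matching the remark above.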

Table 2 shows the GD criterion for the nine test problems. From Table 2, we can see that the GD for the first eight problems is very small, which means that the approximate Pareto front obtained by the proposed approach is very near the true Pareto solution. On the other hand, for test problem 9 the GD criterion is larger than for the other test problems. This is because this problem has a nonconvex Pareto front.

Table 2. The GD criterion for test problems.

Test problem Generational Distance (GD)
Test problem (1) 0.00010458
Test problem (2) 0.00655497
Test problem (3) 0.00569784
Test problem (4) 0.00549784
Test problem (5) 0.00012578
Test problem (6) 0.00048679
Test problem (7) 0.00026457
Test problem (8) 0.00075481
Test problem (9) 0.01587945

8. Conclusion

This study presented a hybrid algorithm combining TR and PSO for solving MOOPs. It is a new algorithm that combines random searching with deterministic searching and integrates the merits of both TR and PSO. In the proposed algorithm, the MOOP is converted to a SOOP, TR is used to obtain a point on the Pareto frontier, and a homogeneous PSO with a dynamic constriction factor is applied to get all the points on the Pareto frontier. Various kinds of MO benchmark problems showed the effectiveness of the new algorithm and illustrated its success in finding a Pareto optimal set. The following are the significant contributions:

•   The present work addressed the important task of combining TR with PSO not to find a single optimal solution, but to find a set of nondominated solutions.

•   Using the randomness of PSO and the high efficiency of the TR method, the algorithm can overcome the limitations of the TR method and efficiently solve a class of MOOPs.

•   The proposed algorithm does not place any restriction on the number of Pareto optimal solutions found; it keeps track of all the feasible solutions found during the optimization.

•   The proposed approach can solve nonconvex MOOPs but cannot generate all Pareto points on the frontier.

•   The numerical results reveal that the proposed approach can generate well-distributed sets of Pareto points very efficiently and is thus very suitable for engineering MOOPs, with good application value.

•   The GD criterion shows that the proposed algorithm gives a good approximation of the Pareto optimal set.

•   When the initial repository contains fewer than 3 Pareto solutions, good results cannot be expected; if the initial Pareto solutions saved in the repository have good diversity, the algorithm yields better results.

Further research will concentrate on extending the proposed technique to deal with more nonconvex MOOPs by using other methods for converting the MOOP to a SOOP.


References

  1. Y. Ou, "A hybrid trust region algorithm for unconstrained optimization," Applied Numerical Mathematics, vol. 61, pp. 900–909, 2011.
  2. M. Ahookhosh, K. Amini and M.R. Peyghami, "A nonmonotone trust-region line search method for large-scale unconstrained optimization," Applied Mathematical Modelling, vol. 36, pp. 478–487, 2012.
  3. M. Ahookhosh and K. Amini, "A nonmonotone trust region method with adaptive radius for unconstrained optimization problems," Computers & Mathematics with Applications, vol. 60, pp. 411–422, 2010.
  4. J. Zhang, K. Zhang and S. Qu, "A nonmonotone adaptive trust region method for unconstrained optimization based on conic model," Applied Mathematics and Computation, vol. 217, pp. 4265–4273, 2010.
  5. B. El-Sobky, "A multiplier active-set trust-region algorithm for solving constrained optimization problem," Applied Mathematics and Computation, vol. 219, pp. 928–946, 2012.
  6. S. Kim and J. Ryu, "A trust-region algorithm for bi-objective stochastic optimization," Procedia Computer Science, vol. 4, pp. 1422–1430, 2011.
  7. A.A. El-Sawy, Z.M. Hendawy and M.A. El-Shorbagy, "Trust-region algorithm based local search for multi-objective optimization," IEEE 1st International Conference on Innovative Engineering Systems (IEEE-RAS ICIES2012), Alexandria, Egypt, December 7–9, 2012.
  8. J. Kennedy, R.C. Eberhart and Y. Shi, "Swarm Intelligence," Morgan Kaufmann, 2001.
  9. K.E. Parsopoulos and M.N. Vrahatis, "Particle swarm optimization method in multiobjective problems," Proceedings of the ACM 2002 Symposium on Applied Computing, pp. 603–607, 2002.
  10. A.A. Mousa, M.A. El-Shorbagy and W.F. Abd-El-Wahed, "Local search based hybrid particle swarm optimization algorithm for multiobjective optimization," Swarm and Evolutionary Computation, vol. 3, pp. 1–14, 2012.
  11. M.R. Sierra and C.A. Coello Coello, "Multi-objective particle swarm optimizers: a survey of the state-of-the-art," International Journal of Computational Intelligence Research, vol. 2, pp. 287–308, 2006.
  12. L. Tang, "A hybrid multiobjective evolutionary algorithm for multiobjective optimization problems," IEEE Transactions on Evolutionary Computation, vol. 17, pp. 20–45, 2013.
  13. C.A. Coello Coello, D.A. Van Veldhuizen and G.B. Lamont, "Evolutionary Algorithms for Solving Multi-Objective Problems," Kluwer Academic Publishers, 2001.
  14. T. Friedrich, T. Kroeger and F. Neumann, "Weighted preferences in evolutionary multi-objective optimization," International Journal of Machine Learning and Cybernetics, vol. 4, pp. 139–148, 2013.
  15. J. Dennis, M. El-Alem and K. Williamson, "A trust-region approach to nonlinear systems of equalities and inequalities," SIAM Journal on Optimization, vol. 9, pp. 291–315, 1999.
  16. W.F. Abd-El-Wahed, A.A. Mousa and M.A. El-Shorbagy, "Integrating particle swarm optimization with genetic algorithms for solving nonlinear optimization problems," Journal of Computational and Applied Mathematics, vol. 235, pp. 1446–1453, 2011.
  17. K. Deb, "Multi-Objective Optimization Using Evolutionary Algorithms," 1st ed., John Wiley & Sons, New York, 2001.
