Application of the Plant Propagation Algorithm and NSGA-II to Multiple Objective Linear Programming

Abstract: Multiple Objective Linear Programming (MOLP) problems are usually solved by exact methods. However, nature-inspired population-based stochastic algorithms such as the plant propagation algorithm are becoming more and more prominent. This paper applies the multiple objective plant propagation algorithm (MOPPA) and the nondominated sorting genetic algorithm II (NSGA-II) for the first time to MOLP and compares their outcomes with those of prominent exact methods. Computational results from a collection of 51 existing MOLP instances suggest that MOPPA compares favourably with four of the most prominent exact methods, namely the extended multiple objective simplex algorithm (EMSA), the affine scaling interior MOLP algorithm (ASIMOLP), Benson's outer-approximation algorithm (BOA) and the parametric simplex algorithm (PSA).


Introduction
We stated in [60] that multiple objective linear programming (MOLP) is a branch of Multiple Criteria Decision Making (MCDM) that seeks to optimize two or more linear objective functions subject to a set of linear constraints. MOLP has been an active area of research since the 1960s because of its relevance in practice, [27]. Indeed, many real-world decision making problems involve more than one objective function and can be formulated as MOLP problems. Consequently, MOLP has been widely applied in many fields and has become a useful tool in decision making, [65]. Formally, it can be written as

min {c_1^T x, ..., c_q^T x} subject to Ax = b, x ≥ 0,   (1)

where c_1, ..., c_q are n-vectors containing the coefficients of the multiple objective functions, A is an m × n constraint matrix and b is the right hand side vector.
We also noted in [60] that in practice, MOLP is typically solved by the Decision Maker (DM) with the support of the analyst, who looks for a most preferred (best) solution in the feasible region X. This is because optimizing all of the objective functions simultaneously is often not possible due to their conflicting nature, [61]. Consequently, the concept of optimality is replaced with that of efficiency.
The purpose of MOLP is to obtain all the efficient solutions or nondominated points, a subset of either, or a most preferred point, depending on the purpose for which it is needed.
In the last six decades, a number of exact methods have been introduced for solving the problem. Some of them have proven to be effective on small and medium scale MOLP instances.
Real-life MOLP problems are difficult to solve. In a worst-case situation all vertices might be efficient, meaning that the problem would be intractable as there might be exponentially many efficient vertices, [47]. It is, therefore, clear that MOLP is intractable in the worst case. Moreover, looking at [46], which shows that listing all vertices of a polyhedron is NP-hard, one can deduce that MOLP is also NP-hard, since in the worst-case scenario all vertices must be found to determine the efficient ones. Thus, exact methods are sometimes inefficient and costly, especially when the problem size is large. Instead of finding efficient and nondominated points, heuristics generally find good approximations or near efficient solutions in acceptable computational times. For this reason, they are widely used in multi-objective optimisation (MOO). Given that finding all efficient solutions of MOLP is NP-hard, since it is equivalent to enumerating all vertices of the feasible region [13], it is astonishing to note that only one approximate method, namely NSGA [78], has been applied to it [14]. It is worth investigating these methods as an alternative solution approach to the problem.
This paper applies two nature-inspired population-based stochastic algorithms, namely the multiple objective strawberry plant propagation algorithm (MOPPA) [33] and the nondominated sorting genetic algorithm II (NSGA-II) [19,20], which were originally meant to solve multiple objective non-linear programming problems, to MOLP problems, using the penalty function method to handle general constraints. The paper then compares the outcomes of these two approximate methods with those of EMSA [59], ASIMOLP [5], BOA [10] and PSA [68], which are prominent exact methods. Note that EMSA [59] is an extension of the multiple objective simplex algorithm of Evans and Steuer [31] that generates the set of all nondominated points of the problem. In order to apply MOPPA and NSGA-II to MOLP, we will reformulate (1) to penalize each of the objective functions for any constraint that is violated. To achieve the comparison, we shall compute a Most Preferred Nondominated Point (MPNP), whose components are as close as possible to an unattainable ideal objective point, from the nondominated set returned by the exact methods, and compare it with the Best Nondominated Point (BNP) returned by MOPPA and NSGA-II.
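For concreteness, the penalty reformulation can be sketched as follows. The quadratic penalty form and the weight mu below are illustrative assumptions chosen for exposition, not necessarily the exact scheme used in our experiments.

```python
import numpy as np

def penalised_objectives(x, C, A, b, mu=1e3):
    """Evaluate all q objectives of min Cx s.t. Ax = b, x >= 0,
    adding a penalty for constraint violations.

    C : (q, n) criterion matrix, A : (m, n), b : (m,).
    The quadratic penalty form and the weight mu are illustrative choices."""
    x = np.asarray(x, dtype=float)
    # Violation of the equality constraints Ax = b ...
    eq_viol = A @ x - b
    # ... and of the nonnegativity bounds x >= 0.
    bound_viol = np.minimum(x, 0.0)
    penalty = mu * (eq_viol @ eq_viol + bound_viol @ bound_viol)
    # Each objective c_k^T x is penalised by the same amount.
    return C @ x + penalty

# A tiny bicriterion instance: min {x1, x2} s.t. x1 + x2 = 1, x >= 0.
C = np.array([[1.0, 0.0], [0.0, 1.0]])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
print(penalised_objectives([0.5, 0.5], C, A, b))  # feasible point: no penalty
```

A feasible point is returned unchanged, while an infeasible one sees all its objective values inflated, which is what allows an unconstrained population-based search to be steered back into X.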
To the best of our knowledge, no application of MOPPA and NSGA-II to MOLP or their comparison with exact methods has been conducted before. We intend to fill this gap here.
This paper is organized as follows. Section 2 introduces MOLP and basic notation. Section 3 is a brief review of the relevant literature. Section 4 discusses heuristic approaches to multi-objective optimisation. We present the strawberry plant in Section 5. Section 6 presents the basic plant propagation algorithm. The solution procedure is presented in Section 7. We illustrate MOPPA and NSGA-II in Section 8. Section 9 discusses the determination of MPNPs in exact methods. Section 10 presents experimental results obtained with approximate and exact methods. A summary of results is presented in Section 11. Finally, a conclusion is presented in Section 12.

Notation and Definitions
We also stated in [60] that an alternative and compact formulation of (1) is

min Cx subject to x ∈ X,   (2)

where C is a q × n criterion matrix consisting of the rows c_k^T, k = 1, 2, ..., q, and A and b are as described earlier. The feasible set in the decision space is X = {x ∈ R^n : Ax = b, x ≥ 0} and in the objective space it is Y = {Cx : x ∈ X}. The set Y is also referred to as the image of X, [58]. A nondominated point in the objective space is the image of an efficient solution in the decision space; nondominated points form the nondominated set, [60].
An efficient solution of an MOLP problem is a solution that cannot improve any of the objective functions without deteriorating at least one of the other objectives. A weakly efficient solution is one that cannot improve all the objective functions simultaneously, [60]. Let x̂ ∈ X be a feasible solution of (2) and let ŷ = Cx̂: x̂ is called weakly efficient if there is no x ∈ X such that Cx < Cx̂; and ŷ = Cx̂ is called weakly nondominated, [26]. The set of all efficient solutions and the set of all weakly efficient solutions of (2) are denoted by X_E and X_WE respectively. Their images are the nondominated and weakly nondominated sets in the objective space of (2) respectively, [60]. The nondominated faces in the objective space of a given problem constitute the nondominated frontier and the efficient faces in the decision space of the problem constitute the efficient frontier. The ideal objective point y* contains the minimum criterion values over the efficient set X_E. The ideal objective values are easy to obtain by simply minimizing each objective function individually over the feasible region X, [2].
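Since the ideal objective point is obtained by minimizing each objective individually over X, it can be computed with q single-objective LPs. A minimal sketch, using scipy.optimize.linprog as an assumed solver choice:

```python
import numpy as np
from scipy.optimize import linprog

def ideal_point(C, A, b):
    """Ideal objective point y*: minimise each row c_k of C
    over X = {x : Ax = b, x >= 0}, one LP per objective."""
    y_star = []
    for c_k in C:
        res = linprog(c_k, A_eq=A, b_eq=b, bounds=[(0, None)] * C.shape[1])
        assert res.success, res.message
        y_star.append(res.fun)
    return np.array(y_star)

# min {x1, x2} over x1 + x2 = 1, x >= 0: each objective attains 0.
C = np.array([[1.0, 0.0], [0.0, 1.0]])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
print(ideal_point(C, A, b))  # → [0. 0.]
```

Note that y* is generally unattainable: here no single feasible x achieves (0, 0), which is precisely why it serves as a reference point rather than a solution.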

Literature Review
Several exact methods have been introduced in the last six decades for computing either the entire efficient decision set X E or the nondominated set Y N or a subset thereof, or a most preferred solution to the problem.
We stated in [60] that Eiselt and Sandblom [29] note that Evans and Steuer [31], Philip [63], and Zeleny [90] derived generalized versions of the simplex method known as MSA. That of Philip [63] first determines if an extreme point is efficient and subsequently checks if it is the only one that exists. If not, the algorithm finds them all. This MSA approach, however, may fail at a degenerate vertex. In Philip [64], it was modified to overcome this difficulty, [60].
The MSA of Evans and Steuer [31] also generates all the efficient extreme points and unbounded efficient edges of MOLPs; see also Algorithm 7.1 on page 178 of Ehrgott [26]. The algorithm first establishes that the problem is feasible and has efficient solutions. Thereafter, it generates all of them. An LP test problem is solved to determine the pivots that lead to efficient vertices, and the algorithm is implemented as a software called ADBASE in [80].
As we also noted in [60], the MSA variant of Zeleny [90] also uses an LP test problem to determine the efficiency of extreme points. But, here, vertices are tested for efficiency after they have been obtained, unlike in [31] where the test problem determines pivots leading to efficient vertices.
In [85,87] Yu and Zeleny used the method in [90] to generate the set of all efficient solutions and presented a formal procedure for testing the efficiency of extreme points. The efficient extreme points are derived from the efficient faces, in a top-down search strategy. Numerical examples using problems with three objectives were used to demonstrate the effectiveness of the method. In a similar paper, Yu and Zeleny [86] applied their approach expanded in [87] to parametric linear programming. Two basic forms of the problem and two computational procedures for computing the efficient set were presented: the direct decomposition of the parametric space into subspaces associated with extreme points, and the indirect algebraic method. Numerical illustrations show that the indirect algebraic approach is superior to the direct decomposition.
We also noted in [60] that Isermann [41] proposed a variant of the MSA of Evans and Steuer [31] that solves fewer LPs when determining the entering variables. The algorithm first establishes whether an efficient solution for the problem exists, and solves a test problem to determine pivots leading to efficient vertices. It was implemented as a software called EFFACET in [40].
In [35], Gal presented an MSA that computes the set of all efficient solutions and higher-dimensional faces of the problem. This method is meant to address the problem of determining efficient faces and higher-dimensional faces not resolved in [31] and [63]. Here, efficient solutions are computed using a test problem. The algorithm also determines higher-dimensional efficient faces for degenerate problems, which were only discussed in [41] and [90] but were not solved. The efficient faces are generated in a bottom-to-top search strategy, as we noted in [60].
Steuer [79] used the MSA of Evans and Steuer [31] to solve parametric and non-parametric MOLP problems. Different methods for obtaining an initial efficient extreme point as well as different LP test problems were also presented. Efficient extreme points are generated through the decomposition of the weight space into finite subsets that provide optimal weights corresponding to extreme point solutions, [60]. Similarly, Ehrgott [26] also used the MSA of Evans and Steuer [31] to solve MOLP problem instances with two and three objective functions, [58].
A variation on the algorithm of Evans and Steuer [31] was proposed in [24]. The authors noted that algorithms usually start from an initial efficient extreme point and move to an adjacent one following the solution of an LP problem. The proposed method does not require the solution of any LP problem to test for the efficiency of extreme points, and the feasible region need not be bounded. The algorithm enumerates all efficient extreme points and appears to have a computational advantage over other methods.
We stated in [60] that Ecker et al. [23] presented yet another variant of MSA. The algorithm first determines the maximal efficient faces incident to a given efficient vertex (i.e. containing the efficient vertex) and ensures that previously generated efficient faces are not regenerated. This is done following a bottom-to-top search strategy as in [35], which dramatically improves computation time. The proposed approach was illustrated with a degenerate example given in [87], to demonstrate its applicability. It was computationally more efficient than the method in [87].
We also noted in [60] that the MSA of Armand and Malivert [7] determines the set of efficient extreme points even for degenerate MOLPs. The approach follows a bottom-to-top search strategy and utilizes a lexicographic selection rule to choose the leaving variables, which proves effective when solving degenerate problems. It was tested successfully on a number of degenerate problems. A numerical example with five objectives and eight constraints, which was solved in [87], was also used to demonstrate its effectiveness. The proposed MSA was superior to that in [87].
In [58], we stated that a modification of the PSA for single-objective LP to solve bounded bicriterion LP problems was presented in [69]. The approach was applied to a large mean-risk portfolio optimization problem for which the nondominated portfolios were generated.
Ehrgott et al. [25] presented a primal-dual simplex algorithm for bounded MOLP problems. This algorithm finds a subset of efficient solutions that is enough to generate the whole efficient frontier. The algorithm starts with a coarse partitioning of the weight space, which it refines in each iteration, and solves an expensive LP in each iteration. A vertex enumeration is then performed in the last step to obtain efficient solutions. Numerical illustrations show the applicability of the algorithm.
In [68] Rudloff et al. introduced a PSA for the problem. The algorithm is a generalization of the algorithm in [69] and is similar to that of Ehrgott et al. [25]. It works for any dimension, solves bounded and unbounded problems (unlike those in [25] and [69]), and, like that in [25], does not find all the efficient solutions. Instead, it finds a solution based on the idea of Löhne [49], i.e. a subset of efficient extreme points and directions that allows one to generate the whole efficient frontier. This is the so-called PSA. The algorithm does pivoting for just one leaving variable among the set of all possible leaving variables. It was compared with a version of BOA in [37] and the MSA of Evans and Steuer [31] using small MOLP instances which were randomly generated with 3 and 4 objectives and up to 50 variables and constraints. The numerical results show that the proposed algorithm is superior to BOA for non-degenerate problems. However, BOA is better for highly degenerate problems. PSA was also found to be computationally more efficient than the algorithm of Evans and Steuer [31].
In [59], we presented an extension of the MSA of Evans and Steuer [31] that generates the set of all nondominated points of the problem and no redundant ones. This extended version was compared to the primal variant of BOA [10], which also computes the set of all nondominated points of the problem. Numerical results on a collection of 56 existing MOLP instances show that the total number of nondominated points returned by EMSA is the same as that returned by BOA for most of the problems considered.
Of all the MSA variants discussed so far, it was noted in [73] that the MSA of Evans and Steuer [31] is the most popular and successful for computing all efficient extreme points of the problem.
We noted in [60] that MSA and its variants make explicit use of the vertices of the feasible region, while interior-point approaches, on the other hand, generate iterates in the interior of the feasible region. Various such approaches have been suggested for the problem. The difference between them depends on the methodology employed to assess the suitability of points used to derive a combined search direction along which one heads towards the next iterate, [60].
As we also stated in [60], the first to adapt a variant of Karmarkar's [45] interior-point algorithm to solve MOLP seems to be Abhyankar [1]. The author relies on the method of centers and utilizes a parameterization of ellipsoids in the n-dimensional space to approximate the efficient frontier of the problem in polynomial time.
In [4], Arbel modified and adapted a variant of Karmarkar's [45] algorithm, resulting in the so-called Affine Scaling Interior MOLP (ASIMOLP) algorithm. He used the convex combination of individual directions to derive a combined direction along which to step toward the next iterate. Specifically, the algorithm generates step direction vectors based on the objectives of the problem.
The relative preference of these directions is then assessed using a utility (or preference) function to obtain the points used in combining them into a single direction vector that moves the current iterate to a new one.The process is repeated until the algorithm converges to a most preferred efficient solution after meeting some termination conditions, [60].
Yet another ASIMOLP algorithm was proposed in [5]. This approach offers another means of assessing preference information to establish a combined search direction, rather than using the DM's utility function. The Analytic Hierarchy Process (AHP) developed in [70] was applied to obtain the relative preference of points used to derive a combined direction along which the next step is taken. It is based on the assessed preferences to weigh the step direction vectors for each of the objectives in order to derive a combined step direction vector. This process continues to generate search directions and new feasible points at each iteration, until the algorithm converges to a most preferred point on the efficient frontier, [60].
We noted in [60] that another ASIMOLP algorithm based on the AHP was introduced in [6]. Here, the derived preference information is applied to the projected gradients in order to obtain anchoring points and cones used in searching for a most preferred solution. The boundaries of the constraint polytope are constantly probed to make more directions available, which enables one to arrive at a most preferred solution.
Wen and Weng [82] modified the ASIMOLP algorithm in [4] in order to resolve zigzagging issues. However, the modified algorithm may not yield a most preferred efficient solution.
Lin et al. [48] also proposed a modification of the algorithm in [4]. They adopted the utility function trade-off method to weigh the objective functions involved and compared the modified algorithm with that in [82] and the simplex method. Numerical experiments show that their algorithm is superior. On computing efficiency, the interior-point based algorithms outperform the simplex-based ones on large scale problems, [60].
In [83], Weng and Wen introduced yet another ASIMOLP-based algorithm. The proposed algorithm computes a weighted sum of the different search directions involved using a utility function. These search directions are then normalized with the weights to obtain a combined direction that moves the current solution to an anchor point. Computational experiments show that the proposed algorithm is suitable for solving large scale instances.
We noted in [58] and [60] that, due to the various difficulties arising from solving MOLP problems in the decision space (such as having different efficient solutions that map onto the same point in the objective space), efforts were made to look at the possibility of solving them in the objective space. See also Dauer [16], Dauer and Liu [17], Dauer and Saleh [18] and Benson [10].
In [59] we stated that Benson [10] presented a detailed account of decision set based methods and proposed an algorithm for generating the set of all nondominated points in the objective space. This is the so-called BOA. According to him, this algorithm is the first of its kind. Computational results suggest that the objective space based methods are better than the decision space based ones. A further analysis of the objective space based method for the problem was presented in [11]. Here, the outer-approximation algorithm also generates the set of all weakly nondominated points, thereby enhancing the usefulness of the algorithm as a decision aid, [60].
Benson [12] suggested yet another algorithm for solving the problem in the objective space. This time, a hybrid method partitions the objective space into simplices that lie in each face so as to compute the set of nondominated points. This idea was earlier presented in [9]. The algorithm is quite similar to that in [10]. The difference between them is in the manner in which the nondominated vertices are found. While a vertex enumeration procedure is employed in [10], a simplicial partitioning technique is used in the latter, [60].
In [75], Shao and Ehrgott modified the algorithm of Benson [10]. While in [10] a bisection method that requires the solution of several LPs in one step is needed, here solving just one LP achieves the desired result and in the process improves computation time. Again, in [76], Shao and Ehrgott introduced an approximate dual variant of the algorithm of Benson [10] for obtaining approximate nondominated points of the problem. The proposed algorithm was applied to the beam intensity optimization problem of radiotherapy treatment planning, for which approximate nondominated points were generated. Numerical application suggests that the method is faster than solving the primal directly.
We noted in [60] that the explicit form of the algorithm of Benson [10], as modified by Shao and Ehrgott [75], is presented in [49]. This version solves two LPs in each iteration during the process of obtaining the nondominated extreme points. In [50] Löhne presented a Matlab implementation of this modified version, called BENSOLVE-1.2, for computing the entire set of nondominated points and directions (unbounded nondominated edges) of the problem.
Csirmaz [15] also modified and introduced an improved version of the algorithm of Benson [10] that solves one LP and a vertex enumeration problem in each iteration. While in Benson [10] solving two LPs is required to determine a unique boundary point and a supporting hyperplane of the image in two steps of the algorithm, here the two steps are merged into one and solving just one LP does both tasks, thereby improving computation time. The algorithm was used to generate all the nondominated vertices of the polytope defined by a set of Shannon inequalities on four random variables so as to map their entropy region. Numerical results show the applicability of the method to medium and large instances, [60].
In [37], Hamel et al. presented new variants and extensions of the algorithm of Benson [10]: primal and dual variants that solve only one LP problem in each iteration. Numerical testing reveals a reduction in computation time.
We noted in [58] that Löhne et al. [51] also extended the primal and dual variants of the algorithm of Benson [10] to approximately solve convex vector optimization problems in the objective space.
More recently, Dörfler et al. [22] presented an algorithm for approximately solving bounded vector optimization problems in the objective space. The proposed algorithm is an improvement and a modification of the algorithm in [37], where a new selection rule for vertex enumeration is presented to improve the overall efficiency of the algorithm. Numerical illustrations suggest that the new algorithm may be faster than that in [37].
Having discussed exact methods for the problem, we now turn our attention to heuristic or approximate approaches that seek to find good approximations or near efficient or nondominated points in acceptable computational times. As stated earlier, heuristics or approximate methods have been commonly applied to nonlinear and discrete multi-objective optimisation and not so much to MOLP.
In [14], Chakraborty and Ray applied multi-objective parametric fuzzy programming and NSGA [78] to the MOLP transportation problem.
Here, the MOLP problem is transformed into a single objective parametric problem with interval parameters. A numerical illustration using a coal energy resource allocation problem shows the applicability of the method. NSGA has been widely applied to different discrete and continuous multi-objective optimisation problems. In [8], Bagchi applied NSGA to several scheduling problems for which efficient solutions were found. For extensive applications of NSGA to chemical engineering problems, see [57]. NSGA-II [19,20], which is an improved version of NSGA [78] and arguably the most popular algorithm in the context of nonlinear multi-objective problems, has also had tremendous applications in different nonlinear multi-objective optimisation problems. It was successfully applied to the energy generation expansion planning problem in [44], for which the minimum investment and outage costs were approximated. Similarly, Hu et al. [39] also applied NSGA-II to solve a real-life combined gas and electricity network expansion planning problem in Hainan province (China) with the aim of minimizing investment, production and carbon emission costs. The problem was formulated as a bicriterion nonlinear multi-objective optimisation problem and solved using NSGA-II, for which the nondominated front was determined. In [55], Massobrio et al. applied NSGA-II to the taxi sharing problem in order to determine the minimum cost of journey and delay time by passengers from the same location to different destinations. The problem was formulated as a bicriteria multi-objective optimisation problem and solved with NSGA-II and greedy heuristics. Numerical results show that NSGA-II outperforms the greedy algorithms by achieving significant improvements in both objectives in acceptable computational time. Recently, NSGA-II was applied in the communication industry to solve the spectrum assignment problem in [54].
Here, the spectrum assignment problem was also formulated as a bicriterion multi-objective optimisation problem and solved using NSGA-II. It was observed from the results obtained that there is an improvement in throughput at the cost of spectral efficiency, which offers useful guidelines to the service provider to maintain customer satisfaction in the spectrum sharing network. In [21], a modification of NSGA-II [19,20] was presented and applied to solve the combined economic and emission dispatch problem. The authors noted, however, that while NSGA-II ensures diversity along the nondominated front using the concept of crowding distance, lateral diversity may be lost due to the lack of diversity in a particular decision variable as the search is pushed towards the nondominated front. The modified version is aimed at resolving this issue by incorporating controlled elitism into NSGA-II and replacing the crowding distance operator with a dynamic version, which appears to solve the problem.
Salhi and Fraga [71] presented the plant propagation algorithm (PPA) to solve the survival optimisation problem. The algorithm emulates the way plants, and in particular the strawberry plant, propagate: by sending many short runners when they are in good spots and sending fewer but longer runners to explore the environment when they are in not so good spots. It was tested on a complex nonlinear process design problem and compared with the Nelder-Mead algorithm. Experimental results show the effectiveness of the proposed method; it performs significantly better than the Nelder-Mead search method. In [33], Fraga and Amusat extended the PPA in [71] to solve multi-objective nonlinear programming problems. A novel fitness function that emphasizes the end-points is introduced into the extended version, which was applied to the integrated energy systems design problem for off-grid mining operations, for which good designs that achieve the desired objectives were approximated. Recently, Rodman et al. [67] applied the extended multi-objective PPA (MOPPA) to the industrial beer fermentation process in order to minimize the production time and maximize ethanol production. The problem was modelled as a bicriteria nonlinear dynamic optimisation problem and solved with MOPPA. Numerical results show the effectiveness of MOPPA in solving complex multi-objective optimisation problems. Some well-known heuristic methods for multi-objective optimisation will be discussed in the following section.
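The runner strategy of the basic PPA described above can be sketched as follows. The particular number-of-runners and step-length formulas below are illustrative assumptions in the spirit of [71], not the exact expressions used there.

```python
import math
import random

def propagate(position, fitness, n_max=5, rng=random):
    """Generate runners for one plant, PPA-style:
    fit plants (fitness near 1) send many short runners, while
    poor plants (fitness near 0) send fewer but longer ones.

    position : list of coordinates; fitness : normalised to [0, 1].
    The formulas for runner count and step length are illustrative."""
    # More runners for fitter plants (at least one).
    n_runners = max(1, math.ceil(n_max * fitness * rng.random()))
    runners = []
    for _ in range(n_runners):
        # Step length shrinks as fitness grows: short steps in good spots.
        step = 2.0 * (1.0 - fitness) * (rng.random() - 0.5)
        runners.append([x + step for x in position])
    return runners

runners = propagate([0.0, 0.0], fitness=0.9)
print(len(runners), runners[0])
```

A plant with maximal fitness produces runners that stay at its own position (exploitation), while a plant with zero fitness sends a single, potentially long runner (exploration), which captures the exploration/exploitation balance the algorithm relies on.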

Heuristic Approaches to Multi-objective Optimisation

Genetic Algorithm
The Genetic Algorithm (GA) was developed by Holland in 1975, [38]. It is based on the idea of natural selection. The algorithm works with three operators, referred to as genetic operators, namely crossover, mutation and reproduction.

Crossover
The crossover operator selects a random point which marks a position on the individual. Then, parts of two selected individuals are exchanged to generate two new individuals. This procedure is called a single-point crossover. Another type is called two-point crossover. In this variant, two random positions are selected and parts of the parents are exchanged.
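Both variants can be sketched in a few lines; this is a minimal illustration for list-encoded individuals, not the exact operator implementation used in our experiments.

```python
import random

def single_point_crossover(p1, p2, rng=random):
    """Exchange the tails of two parents after one random cut point."""
    point = rng.randrange(1, len(p1))  # cut strictly inside the string
    return p1[:point] + p2[point:], p2[:point] + p1[point:]

def two_point_crossover(p1, p2, rng=random):
    """Exchange the middle segments between two random cut points."""
    i, j = sorted(rng.sample(range(1, len(p1)), 2))
    return (p1[:i] + p2[i:j] + p1[j:],
            p2[:i] + p1[i:j] + p2[j:])

c1, c2 = single_point_crossover([0, 0, 0, 0], [1, 1, 1, 1])
print(c1, c2)  # complementary children, e.g. [0, 0, 1, 1] and [1, 1, 0, 0]
```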

Mutation
A predetermined number of individuals are mutated. This is done by changing/flipping some of the entries of an individual. This operator helps the exploration of the search space.
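For a binary encoding, flipping entries amounts to the following sketch; the per-bit mutation probability p_mut is an illustrative parameter.

```python
import random

def bitflip_mutation(individual, p_mut=0.1, rng=random):
    """Flip each bit of a binary individual independently
    with probability p_mut (an illustrative default)."""
    return [1 - bit if rng.random() < p_mut else bit for bit in individual]

print(bitflip_mutation([0, 1, 0, 1], p_mut=1.0))  # → [1, 0, 1, 0]
```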

Reproduction
This copies good individuals into the new population as they are.
It was noted in [74] that to implement GA the following are needed: 1. Initial Population: A predetermined number of individuals is randomly generated to form an initial population. The basic GA starts with this population.

Stopping Criteria
The algorithm stops when the number of generations reaches a predetermined maximum number of generations. Another commonly used stopping criterion is the maximum number of generations without improvement in the current solutions, [38,74].

Vector Evaluated Genetic Algorithm (VEGA)
VEGA is the first population-based evolutionary multi-objective genetic algorithm applied to multi-objective optimisation problems. It was introduced by Schaffer [72]. Here, the population is divided randomly into equal sub-populations at each iteration. Fitness values are assigned to all the solutions in a sub-population based on one of the objective functions, so that each objective is used to evaluate part of the population. In each sub-population, a fitness proportionate selection is done and the selected members are used for procreation. The process is repeated until convergence is achieved.

Multi-Objective Genetic Algorithm (MOGA)
MOGA is the first population-based evolutionary algorithm that uses the nondominated classification of the population. In MOGA, each solution is checked for its domination in the population and a rank equal to n_i, the number of solutions that dominate solution i, is assigned to it. To ensure that diversity is achieved, the algorithm uses a sharing function model, [36].
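The dominance-count ranking just described can be sketched as follows (minimisation is assumed; this is an illustration of the ranking idea, not the full MOGA with its sharing function).

```python
def dominates(a, b):
    """a dominates b (minimisation): a is no worse in every
    objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b)) and
            any(x < y for x, y in zip(a, b)))

def moga_ranks(points):
    """Rank each point by the number of points that dominate it;
    nondominated points receive rank 0 under this convention."""
    return [sum(dominates(q, p) for q in points) for p in points]

pts = [(1, 4), (2, 2), (3, 3), (4, 1)]
print(moga_ranks(pts))  # only (3, 3) is dominated (by (2, 2)): [0, 0, 1, 0]
```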

Nondominated Sorting Genetic Algorithm (NSGA)
NSGA is one of the multi-objective evolutionary algorithms (MOEAs) which have the capacity to find nondominated points in a single run. It was introduced by Srinivas and Deb [78].
In NSGA, the population is sorted according to nondomination and classified into a number of fronts (F_1, F_2, ..., F_n). Using niching and nondominated sorting of solutions in every generation, the good solutions are selected for procreation. The algorithm also uses a sharing function model to ensure diversity. Its main issues are: it requires the potential user to specify the sharing parameter, whose ideal value is difficult to determine; the nondominated sorting technique is time consuming and computationally expensive; and it lacks elitism, which may be important in preventing the loss of good solutions once they are found, [88].

Nondominated Sorting Genetic Algorithm II (NSGA-II)
NSGA-II [19,20] is an improved version of NSGA [78]. Though NSGA enjoyed patronage in the multi-objective evolutionary community, it was also widely criticized for the above three issues (lack of elitism, the high computational cost of nondominated sorting and the requirement to specify the sharing parameter). NSGA-II succeeded in solving all three issues at once by introducing fast nondominated sorting and tournament selection using the concept of crowding distance, [62]. In NSGA-II, in addition to the genetic operators of crossover and mutation, two new specialized multi-objective operators or mechanisms have been proposed to solve the above three issues: 1. Nondominated Sorting: NSGA-II employs a fast nondominated sorting that is aimed at reducing the complexity of sorting as compared to that used in NSGA. 2. The Crowding Distance: This is a technique to replace the sharing parameter that was needed in the old version. It ranks the members of a front according to how crowded they are by their neighbours. These two procedures are used together with the genetic selection operators to create the population of the next generation. The pseudo-code of NSGA-II adapted from [88] is given as Algorithm 1.
Algorithm 1 Nondominated Sorting Genetic Algorithm II [88]
1: Initialization:
2: Generate random population
3: Evaluate objective values
4: Assign rank (level) based on nondomination
5: Generate child population
   - Tournament selection
   - Crossover and mutation
For i = 1 to number of generations
6: Parent and child populations are assigned ranks based on nondomination
7: Generate sets of nondominated fronts
8: Determine the crowding distance between points on each front
9: Select points based on the crowding distance calculation and fill the parent population until full
10: Create next generation
11: Tournament selection
12: Crossover and mutation
13: Evaluate objective values
14: Increment generation index
End

Among all the above mentioned MOEAs, NSGA-II is the most popular and is known for its capacity to promote the quality of solutions, [43]. There are newer nature-inspired population based stochastic algorithms which have shown a lot of promise on nonlinear single and multi-objective optimisation problems. One such algorithm is the so-called plant propagation algorithm, or PPA. It emulates the way plants, and in particular the strawberry plant, propagate, [71]. The details of PPA are provided in Section 6, where it is investigated and used to solve MOLP.
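The two NSGA-II mechanisms just described can be made concrete with a short sketch. The following Python code (an illustrative reimplementation for minimization problems; the paper's own experiments use Matlab) implements a fast nondominated sort and the crowding distance:

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def fast_nondominated_sort(points):
    """Partition objective vectors (minimization) into fronts F1, F2, ..."""
    n = len(points)
    dominated_by = [[] for _ in range(n)]   # indices each point dominates
    dom_count = [0] * n                     # how many points dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(points[i], points[j]):
                dominated_by[i].append(j)
            elif dominates(points[j], points[i]):
                dom_count[i] += 1
        if dom_count[i] == 0:
            fronts[0].append(i)             # nondominated: first front
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:                 # peel off the current front
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]

def crowding_distance(front_points):
    """Crowding distance of each point in one front; end-points get infinity."""
    n = len(front_points)
    dist = [0.0] * n
    for m in range(len(front_points[0])):   # one pass per objective
        order = sorted(range(n), key=lambda i: front_points[i][m])
        dist[order[0]] = dist[order[-1]] = float("inf")
        span = front_points[order[-1]][m] - front_points[order[0]][m] or 1.0
        for k in range(1, n - 1):
            if dist[order[k]] != float("inf"):
                dist[order[k]] += (front_points[order[k + 1]][m]
                                   - front_points[order[k - 1]][m]) / span
    return dist
```

Selection then prefers points of lower front index and, within a front, larger crowding distance, which is how NSGA-II maintains both convergence pressure and diversity without a user-supplied sharing parameter.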
Based on our extensive review of the topic, we observed that no application of MOPPA and NSGA-II to MOLP has been carried out before, and no comparison of a Most Preferred Nondominated Point (MPNP) chosen from exact methods with the Best Nondominated Point (BNP) returned by MOPPA and NSGA-II exists. We intend to fill this gap here.

The Strawberry Plant
The strawberry plant (Fragaria × ananassa) belongs to the Rose family. The strawberry-growing industry started in Paris in the seventeenth century with the European variety. In 1714, Amédée-François Frézier, a mathematician and engineer hired by Louis XIV [30] to draw maps of South America, returned from Chile with some Chilean strawberry plants, which give a larger fruit. Subsequent crossings with the European variety and selections led to the modern plant, [71].
Looking at mature strawberry plants, one will observe after a period of time a concentration of younger plants around strong and well-established ones; that is, plants in good spots send many short runners. Plants that are not well established and do not look very strong send few but longer runners to explore the environment in search of better spots with enough water, nutrients and sunlight. These basic principles are behind the design of PPA and subsequently MOPPA.

The Basic Plant Propagation Algorithm
The Plant Propagation Algorithm (PPA) introduced by Salhi and Fraga [71] emulates the way plants, and in particular the strawberry plant, propagate. That is, it emulates the strategy that plants deploy to survive by colonising new places which have good conditions for growth. Plants, like animals, survive by overcoming adverse conditions using often basic but effective strategies. The strawberry plant, for instance, has a survival and expansion strategy: it sends short runners to exploit the local area if the latter has good conditions (enough water, nutrients and light), and sends long runners to explore new and more remote areas, that is, to run away from a not so favourable current area (with poor water supply, nutrients and light). PPA in its multi-objective version has not been applied to MOLP yet. We intend to do that here. MOPPA [33] is the implemented version. The mechanism of the basic PPA is described below, [74,81].
Algorithm 2 The Plant Propagation Algorithm (PPA) [71]
1: Generate a population P = {X_i, i = 1, ..., NP} of plants;
2: g ← 1
3: for g = 1 : g_max
4:   Compute the normalized fitness value N_i of each plant;
5:   Sort P in ascending order of fitness values (for minimization);
6:   Create new population Φ
7:   for each X_i, i = 1, ..., NP
8:     r_i ← set of runners where both the size of the set and the distance for each runner (individually) are proportional to the fitness value;
9:     Φ ← Φ ∪ r_i (append to population);
10:  endfor
11:  P ← Φ (new population);
12: endfor
13: Return P, (the population of solutions).
The algorithm starts with a population of plants, each of which represents a solution in the search space. X_i denotes the solution represented by plant i in an n-dimensional space, X_i ∈ R^n, i.e. X_i = [x_ij], for j = 1, ..., n and x_ij ∈ R. NP is the population size. The iterative process stops when g, the counter of generations, reaches its given maximum value g_max. Individuals/plants/solutions are evaluated and then ranked (sorted in ascending or descending order) according to their objective (fitness) values and whether the problem is a min or a max problem. The number of runners of a plant is proportional to its objective value and, conversely, the length of each runner is inversely proportional to the objective value, [71]. For each X_i, N_i ∈ (0, 1) denotes the normalized objective function value. The number of runners for each plant to generate is

n_r^i = ⌈ n_max N_i β_i ⌉,

where n_r^i is the number of runners, n_max is the maximum number of runners and β_i ∈ (0, 1) is a randomly picked number. For each plant, the minimum number of runners is set to 1. The distance value found for each runner is denoted by dx_j^i. It is

dx_j^i = 2(1 − N_i)(r − 0.5), for j = 1, ..., n,

where r ∈ [0, 1] is a randomly chosen value. The calculated distance values are used to position the new plants as follows:

y_ij = x_ij + (b_j − a_j) dx_j^i, for j = 1, ..., n,

where y_ij is the position of the new plant and [a_j, b_j] are the bounds of the search space. If the bounds of the search domain are violated, the point is adjusted to be within the domain [a_j, b_j]. The new population created by appending the new solutions to the current population is sorted. In order to keep the population size constant, the solutions with the worst fitness values are dropped, [74]. The algorithm was originally designed for single-objective nonlinear optimisation problems. It has been successfully tested on single-objective and bicriteria continuous optimisation problems in [71]. It has also been applied successfully to a single-objective dynamic optimisation problem in the built environment [34]. Recently, the algorithm was equipped with a new
fitness function that emphasizes the end-points and extended to multi-objective nonlinear programming problems in [33]. This version was successfully applied to the integrated energy systems design for off-grid mining operations, which is a bicriteria dynamic optimisation problem. Most recently, the algorithm was also applied to the industrial beer fermentation process in [67], for which the nondominated front was successfully approximated. Given the successes of the algorithm recorded so far for single and multi-objective nonlinear programming problems, we intend to apply MOPPA to MOLP problems. The pseudocode of MOPPA adapted from [33] is given as Algorithm 3.
Algorithm 3 The Multi-Objective Plant Propagation Algorithm, [33]
0: Given: f(x), a vector function; n_g, number of generations to perform; n_p, the propagation size; n_r, maximum number of runners to propagate.
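One generation of the basic single-objective PPA, as described by the update rules above, can be sketched as follows. This is an illustrative Python reimplementation under those rules (the paper's own code is in Matlab), for a minimization problem:

```python
import math
import random

def ppa_generation(pop, f, bounds, n_max=5):
    """One PPA generation (minimization): fit plants send many short
    runners, weak plants send few long ones."""
    vals = [f(x) for x in pop]
    lo, hi = min(vals), max(vals)
    offspring = []
    for x, v in zip(pop, vals):
        # normalized fitness N_i: 1 for the best plant, 0 for the worst
        N = 1.0 if hi == lo else (hi - v) / (hi - lo)
        # number of runners: ceil(n_max * N_i * beta_i), at least 1
        n_runners = max(1, math.ceil(n_max * N * random.random()))
        for _ in range(n_runners):
            y = []
            for j, (a, b) in enumerate(bounds):
                # step dx = 2(1 - N_i)(r - 0.5): short for fit plants
                dx = 2.0 * (1.0 - N) * (random.random() - 0.5)
                yj = x[j] + (b - a) * dx
                y.append(min(max(yj, a), b))  # clamp to the search domain
            offspring.append(y)
    # append runners, sort, and keep the population size constant
    merged = sorted(pop + offspring, key=f)
    return merged[:len(pop)]
```

Because the parent population is merged with its runners before truncation, the best solution found so far is never lost, giving the algorithm an implicit elitism.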

Solution Procedure
In order to apply MOPPA to MOLP, we use the penalty function method [56] to handle the constraints.
The penalty function method is the most popular constraint handling technique in evolutionary algorithms and many other optimisation frameworks [56,84]. It penalizes each objective or fitness function by reducing its fitness value in proportion to the degree of constraint violation [77]. In other words, a penalty term is added to each of the objective functions, penalizing function values that are not in the feasible region. To use this method, MOLP problem (1) is reformulated as follows:

min { c_k^T x + K p_k(x), k = 1, ..., q } subject to x ∈ X = {x : a ≤ x ≤ b},    (6)

where X is the search space described by box constraints, a and b are the lower and upper bounds on all variables, the scalar quantity K is a constant called the penalty parameter and the function p_k(x), k = 1, ..., q, is the penalty function. Equation (6) is now our new MOLP penalty program. The penalty function p_k(x) satisfies the following:
1. p_k(x) = 0, if g_j(x) ≤ 0 for all j;
2. p_k(x) > 0, if g_j(x) > 0 for some j;
that is to say, the penalty function is zero if no constraint is violated and positive if a constraint is violated; the penalty term added to the objective function pushes the solution back towards the feasible region. A large penalty value prevents searching the infeasible region and enables the method to converge to a feasible solution quickly, [84].
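The reformulation above can be illustrated with a short sketch. The quadratic violation term used here is one common choice of p_k(x) (the text does not fix its exact form), and the data C, A, b are placeholders for the MOLP coefficients:

```python
def penalized_objectives(x, C, A, b, K=1e6):
    """Evaluate the q penalized objectives c_k^T x + K * p(x), where
    p(x) sums squared violations of the linear constraints Ax <= b."""
    Ax = [sum(aij * xj for aij, xj in zip(row, x)) for row in A]
    # p(x) = 0 on the feasible region, > 0 when any constraint is violated
    p = sum(max(0.0, axi - bi) ** 2 for axi, bi in zip(Ax, b))
    return [sum(ck * xj for ck, xj in zip(c, x)) + K * p for c in C]
```

With a large K, any infeasible point receives objective values so poor that the population-based search is driven back into the feasible region, which is exactly the mechanism the section describes.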

Illustration of MOPPA
We consider the following MOLP adapted from [42]. We implemented the MOLP penalty program (6) in Matlab and applied a Matlab implementation of MOPPA, which can be found in [32], to solve Problem 7. The variable bounds x_1 ∈ [0, 7], x_2 ∈ [0, 5], a population size of 50, a maximum number of runners of 5 and 200 generations were chosen. The nondominated front approximated by the algorithm is shown in Figure 1. The nondominated points and their corresponding fitness values are shown in Table 1. These are sorted in decreasing order from top to bottom of the table according to the rank based fitness function incorporated in the algorithm.
We are interested in the nondominated point with the best fitness value, which serves as the Best Nondominated Point (BNP). From Table 1, the BNP is f_1 = (−6.98, −1.80)^T with a fitness value of 0.96. Note that when solving Problem 7 with exact methods, the MPNP was found to be f_1 = (−7.0, −1.80)^T, as will be seen in Section 8.
In Figure 2, it can be seen that MOPPA is able to approximate the nondominated frontier using the penalty function method. The nondominated points are not evenly distributed on the frontier but tend to concentrate towards the end-points of the front.
We have also solved Problem 7 using NSGA-II [19,20], another approximate multi-objective evolutionary algorithm, with the same settings as in Equation (6). We use the same variable bounds with a population size of 50, 200 generations, a mutation rate of 0.02 and a crossover rate of 0.8. The nondominated frontier as approximated by NSGA-II is shown in Figure 3. The nondominated points approximated by NSGA-II are in Table 2. Each of these points is assigned a rank or fitness value according to its domination level, and the points are sorted in descending order from top to bottom of the table according to their crowding distances. The end-points, which are fitter than other points, are assigned an infinite distance value. From Table 2, the end-point f_1 = (−6.82, −1.90)^T, which is the first to be listed and therefore the higher ranked of the two end-points, is selected as the BNP. In Figure 3, it can be seen that NSGA-II not only approximated the nondominated frontier, but also distributed the points evenly on the nondominated front for the problem.
In terms of the quality of the nondominated points approximated by these two algorithms, Tables 1 and 2 show that the points returned by MOPPA are of higher quality than those returned by NSGA-II. In particular, the BNP f_1 = (−6.98, −1.80)^T selected from Table 1 is closer to the MPNP f_1 = (−7.0, −1.80)^T determined from the exact methods in Section 8 than the BNP f_1 = (−6.82, −1.90)^T selected from Table 2.
For comparison purposes, we will compare the BNPs returned by these two approximate methods with the MPNPs returned by the exact methods EMSA [59], BOA [10], PSA [68] and ASIMOLP [5].

Determination of Most Preferred Nondominated Points in Exact Methods
As we noted in [58], to determine the MPNP we employed the technique of Compromise Programming (CP) introduced by [89] and computed the ideal objective point, which serves as a reference point in each case. CP is a mathematical programming method based on the notion of the distance of a most preferred solution from the ideal point y* [91]. CP can be used to find the best nondominated point by determining the minimum distance to the ideal point [92]. Ehrgott and Tenfelde-Podehl [28] note that the ideal point is an essential component of CP, and the idea is to find a nondominated point which is as close as possible to it. This is the point in the objective space whose components are the optimal values of the objective functions when they are individually optimized, [3]. It was also noted in [91] that the ideal point serves as a rationale directing and facilitating human choice and decision making. To find the ideal point, we simply solve the q single objective problems min c_k^T x, k = 1, 2, ..., q, subject to x ∈ X.
We note here that the ideal point itself is not an element of the nondominated set (y* ∉ Y_N); otherwise the objective functions would not be conflicting. It always exists in the objective space, although its corresponding point in the decision space may not exist, [3].
Having computed the ideal objective point y*, we now determine the minimum distance of each nondominated point ŷ from it by finding

min { ‖ŷ_1 − y*‖, ‖ŷ_2 − y*‖, ..., ‖ŷ_n − y*‖ },

where ŷ_i ∈ Y_N has already been found by EMSA, BOA or PSA, ‖·‖ is the Euclidean norm on R^q and y* is the ideal objective point.
Using the nondominated points f_1 and f_2 returned by EMSA, BOA and PSA for Problem 7 and the ideal point y* = (−7.0, −5.0)^T, the distance of f_1 from y* is 3.2, the smaller of the two. This means that f_1 = (−7.0, −1.80)^T is the closer of the two nondominated points to the ideal point. Hence, f_1 is selected as the DM's most preferred nondominated point.
Next, we measure the distance of the nondominated point f_1 = (−3.6418, −3.7913)^T returned by ASIMOLP for the same problem from the ideal point y* = (−7.0, −5.0)^T, as was done with the points returned by EMSA, BOA and PSA. The distance ‖f_1 − y*‖ = 3.5691 is bigger than the 3.2 obtained above, making the nondominated points returned by EMSA, BOA and PSA closer to the ideal point and of higher quality.
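The selection rule just applied can be reproduced numerically. The following Python sketch (illustrative only; the paper's computations were done in Matlab) takes the candidate points quoted above and picks the one nearest to the ideal point:

```python
import math

def nearest_to_ideal(points, ideal):
    """Compromise programming selection: return the point with minimum
    Euclidean distance to the ideal point, together with that distance."""
    def dist(y):
        return math.sqrt(sum((yi - zi) ** 2 for yi, zi in zip(y, ideal)))
    best = min(points, key=dist)
    return best, dist(best)

ideal = (-7.0, -5.0)
# candidate nondominated points from the text:
# f1 from EMSA/BOA/PSA and f1 from ASIMOLP
best, d = nearest_to_ideal([(-7.0, -1.80), (-3.6418, -3.7913)], ideal)
```

Running this confirms the values in the text: the EMSA/BOA/PSA point lies at distance 3.2 from the ideal point and the ASIMOLP point at about 3.5691, so the former is selected.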
We have used this method to choose the MPNP from the nondominated sets returned by EMSA, BOA and PSA for comparison with those returned by MOPPA and NSGA-II.There is no selection of a MPNP in ASIMOLP as the algorithm computes a most preferred efficient solution and also returns the corresponding most preferred nondominated point which is also used for the comparison, [60].

Experimental Results
In this section, we provide numerical results to compare the quality of the BNPs returned by MOPPA and NSGA-II, which approximate the MPNPs computed by exact methods; heuristic approaches are best tested on problems for which the solutions are known, [20]. Table 4 shows the numerical results for a collection of 51 existing problems from the literature, ranging from small to medium and realistic MOLP instances. Problem 1 is taken from Ehrgott [26]. Problems 2 to 10 are from Zeleny [91]. Problems 11 to 21 are test problems from the interactive MOLP explorer (iMOLPe) of Alves et al. [3]. Problems 22 to 47 are taken from Steuer [79]. Problem 48 is a test problem in Bensolve-2.0 of Löhne and Weißing [53]. Finally, Problems 49 to 51 are from MOPLIB [52], which stands for Multi-Objective Problem Library.
Note that Problems 1 to 47 are non-degenerate. Problem 48 is such that the constraint matrix is dense, with an identity matrix of order n as its criterion matrix, where n is the number of variables in the problem. The RHS vector is such that all components are zeros except for a one (1) at the beginning as the only non-zero element. In Problem 49, the constraint matrix is sparse, the criterion matrix is dense and all the elements in the RHS vector are ones. Problem 50 is such that the constraint and criterion matrices are sparse, while the components of the RHS vector are all zeros except for a one (1) at the centre as the only non-zero entry. Finally, Problem 51 is such that the constraint and criterion matrices are sparse, while the components of the RHS vector are all zeros except for a ninety (90) at the end as the only non-zero entry. These larger instances are very challenging, numerically ill-posed and have difficult structures.
All algorithms were implemented in Matlab and executed on an Intel Core i5-2500 CPU at 3.30 GHz with 16.0 GB RAM. In all tests, m is the number of constraints, n the number of variables and q the number of objectives. We used the MPNPs computed from the nondominated sets returned by EMSA, BOA, PSA and ASIMOLP, as illustrated in Section 8, to compare with the BNPs returned by MOPPA and NSGA-II.
As can be seen from Table 4, MOPPA compares favourably in terms of the quality of the BNPs it returns. The algorithm returns BNPs which are of higher quality and closer to those returned by the exact methods than NSGA-II, confirming the report in [33] that objective values obtained with NSGA-II are of lower quality than those obtained with MOPPA.
Of particular interest are Problems 3, 9, 11 and 23, where the points returned are exactly the same as those of the exact methods. In terms of diversity (spread) between the two approximate methods, as can be seen from Section 7.1, NSGA-II returns nondominated points that are more uniformly or evenly distributed than those of MOPPA, whose points tend to concentrate towards the end-points of the front.
We also observed in Table 4 that some of the exact methods could not produce results for some of the numerically ill-posed and highly challenging test problems considered, for the reasons stated in the table. The approximate methods, on the other hand, were able to solve all these difficult instances approximately.

Summary of Results
In this section, we present the summary of experimental results discussed in the previous section in Table 3.

Conclusion
We have applied two heuristic approaches for the first time to MOLP. One, NSGA-II, is a well established and popular heuristic for continuous and discrete multi-objective optimisation. The other, MOPPA, is a fairly recent addition to nature-inspired algorithms which has shown a lot of promise on continuous multi-objective optimisation, and on continuous and discrete single objective optimisation. Our experimental investigation, using Matlab implementations of both approaches applied to an extensive and representative set of MOLP instances, has shown that the methods found on the whole good nondominated fronts. That of NSGA-II is more uniformly spread, while the BNPs returned by MOPPA tend to be of better quality. The methods compare well with the exact ones, especially on the large instances which the exact methods failed to solve even when given generous amounts of computation time. Constraints have been handled using a penalty function approach.

2. Fitness Function: This measure is essential for the implementation of GA. It allows individual solutions in the population to be ranked. It is often the objective function of the optimisation problem.
3. Selection of Parents: The main idea of selection is choosing individuals from the population to be parents to new individuals. The latter are expected to be better than the parents. There are different selection methods, such as Roulette Wheel and Tournament Selection, [66].

Table 1 .
Nondominated Points and their corresponding fitness values.

Table 2 .
Nondominated Points and their corresponding Crowding distances.

Table 3 .
Summary of experimental results.

Table 4 .
Comparative results for individual problems.

Table 5 .
Comparative results for individual problems.
The image is the whole region, implying that none of the vertices is nondominated.