


IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART B: CYBERNETICS, VOL. 38, NO. 5, OCTOBER 2008

PSO-Based Multiobjective Optimization With Dynamic Population Size and Adaptive Local Archives

Wen-Fung Leong and Gary G. Yen, Senior Member, IEEE

Abstract—Recently, various multiobjective particle swarm optimization (MOPSO) algorithms have been developed to efficiently and effectively solve multiobjective optimization problems. However, the existing MOPSO designs generally adopt the notion of "estimating" a fixed population size sufficient to explore the search space without incurring excessive computational complexity. To address this issue, this paper proposes the integration of a dynamic population strategy within a multiple-swarm MOPSO. The proposed algorithm is named the dynamic population multiple-swarm MOPSO. An additional feature, adaptive local archives, is designed to improve the diversity within each swarm. Performance metrics and benchmark test functions are used to examine the performance of the proposed algorithm compared with that of five selected MOPSOs and two selected multiobjective evolutionary algorithms. In addition, the computational cost of the proposed algorithm is quantified and compared with that of the selected MOPSOs. The proposed algorithm shows competitive results with improved diversity and convergence and demands less computational cost.

Index Terms—Multiobjective optimization, multiple swarm, particle swarm optimization (PSO), population size.

I. INTRODUCTION

Most publications on population-based metaheuristic optimization designs for multiobjective optimization problems (MOPs), such as multiobjective evolutionary algorithms (MOEAs) [1]–[3] and multiobjective particle swarm optimization (MOPSO) algorithms [4]–[19], adopt the notion of using a fixed population size throughout the process of searching for possible nondominated solutions until the Pareto-optimal set is obtained. Although some may argue that a good algorithm design would assure a high probability of finding the Pareto-optimal set, population size does indirectly contribute to the effectiveness and efficiency of an algorithm. One influence of population size on these population-based metaheuristic optimization algorithms is the computational cost. If an algorithm employs an overly large population size, it will enjoy a better chance of exploring the search space and discovering possible good solutions but will inevitably suffer an undesirably high computational cost. On the other hand,

Manuscript received November 13, 2007; revised February 27, 2008. This paper was recommended by Associate Editor H. Takagi. The authors are with the School of Electrical and Computer Engineering, Oklahoma State University, Stillwater, OK 74078-5032 USA (e-mail: [email protected]; [email protected]). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TSMCB.2008.925757

an algorithm with an insufficient population size may result in premature convergence or may obtain only some sections of the Pareto front. Again, one may suggest that heuristically estimating an "appropriate" population size may be adequate because one need not know the exact fitness landscape to solve a MOP. This may be the case for MOPs with a small number of objective functions or a low-dimensional decision space. However, for MOPs with many objective functions or a high-dimensional decision space, and even more so for those MOPs qualified as hard problems [20], it poses a great challenge to "estimate" a population size that can solve these MOPs without exerting a high computational cost. In addition, without prior knowledge about the contour of the fitness landscape of a MOP, it might be unrealistic to estimate an "appropriate" population size with which to kick off the search process. Hence, a compromised, yet effective, solution would be to dynamically adjust the population size so as to explore the search space while balancing computational cost against the attained performance. There are few publications that tackle the issue of population size. In early studies, methods to determine an optimal population size for single-objective optimization problems were presented in [21]–[24]. Motivated by these studies, Tan et al. [25] proposed an incrementing MOEA (IMOEA) that adaptively computes an appropriate, but conservative, population size according to the online discovered tradeoff surface and its desired population size that corresponds to the distribution density. Although IMOEA demands a heuristic approach to estimate the desired population size at every generation, simulation results show that IMOEA can perform better than several state-of-the-art MOEAs. Another algorithm, known as dynamic population-size MOEA (DMOEA), was proposed by Yen and Lu [26].
This algorithm includes a population-growing strategy based upon the converted fitness and a population-declining strategy that resorts to three qualitative indicators: age, health, and crowdedness. From the simulation results, the performance of DMOEA is competitive with or even superior to the selected MOEAs, including IMOEA [25], the Nondominated Sorting Genetic Algorithm II (NSGA-II) [1], and the Strength Pareto Evolutionary Algorithm 2 (SPEA2) [2]. Given such a strategy of adaptively tuning the population size, it was shown in [26] that the population size always converges to an optimal value that is independent of the two tuning parameters and the complexity of the Pareto front. In this paper, the goal is to incorporate a dynamic population size into a MOPSO because particle swarm optimization (PSO)

1083-4419/$25.00 © 2008 IEEE


has an advantage over evolutionary algorithms: a rapid convergence capability. However, PSO often faces the problem of premature convergence. Hence, a multiple-swarm MOPSO is employed to promote diversity within the swarm population and thereby counter premature convergence. In this paper, the proposed MOPSO design involves two key features: a dynamic swarm population size and a fixed number of multiple swarms. Design aspects incorporated in the proposed MOPSO include the following: 1) a strategy to facilitate access to the status of the particles when the swarm population size varies; 2) strategies to dynamically adjust the swarm population in order to provide the computational resources needed at different stages and, at the same time, to promote competition among the swarms so that both convergence toward the optimal solutions and the diversity characteristics are preserved; and 3) an adaptive local archive procedure to promote diversity within each swarm. The remainder of this paper is organized as follows. Section II presents a brief overview of the principles of PSO and a review of relevant MOPSO works; the proposed algorithm is elaborated in Section III. A comparative study and pertinent discussions are presented in Section IV. Finally, Section V provides concluding remarks.

II. LITERATURE SURVEY

PSO was originally developed by Kennedy and Eberhart [27] for optimization. The technique was inspired by bird flocking and animal social behaviors. In PSO, the particles operate like a swarm that flies through the hyperdimensional space in search of possible optimal solutions. The behavior of the particles is influenced by their tendency to learn from their past personal experience and from the success of their peers in adjusting their flying speed and direction.
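The flight behavior described above is governed by the canonical PSO update equations (the standard formulation, not specific to this paper): each particle's velocity is pulled toward its personal best and its leader, and its position is advanced by the velocity. The inertia and acceleration coefficients below are illustrative values.

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One canonical PSO update for a single particle, per dimension:
    v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x);  x <- x + v."""
    new_v = [w * vj
             + c1 * random.random() * (pb - xj)
             + c2 * random.random() * (gb - xj)
             for xj, vj, pb, gb in zip(x, v, pbest, gbest)]
    new_x = [xj + vj for xj, vj in zip(x, new_v)]
    return new_x, new_v

# a particle at the origin pulled toward pbest and gbest
x, v = [0.0, 0.0], [0.0, 0.0]
x, v = pso_step(x, v, pbest=[1.0, 1.0], gbest=[2.0, 2.0])
```

With both attractors ahead of the particle and zero initial velocity, each updated coordinate moves in the direction of the attractors.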
Recent works [4]–[19] showed that the basic PSO algorithm can be modified to accommodate the problem formulation of a MOP, which is to search for a well-extended, uniformly distributed, and near-optimal Pareto front [26]. Relevant MOPSO algorithms are reviewed and critiqued in this section to motivate the proposed ideas. As recent research developments have shown that PSO can be extended to solve MOPs, the improvements and modifications of PSO toward MOPSO can be broadly categorized into four research areas. Updating leaders and the leader selection mechanism constitute one of the key modifications of basic PSO for solving MOPs. Particles in the population converge in a swarmlike manner toward an optimal solution by following their global leader within the population. However, for solving MOPs, the goal is to search for a set of Pareto-optimal solutions. Hence, the particles must follow a set of candidate best leaders that will lead them toward the optimal solutions. The selection design must balance diversity preservation against faster convergence. In a recent work, Hu and Eberhart [4] proposed a dynamic neighborhood PSO that includes the following three elements: dynamic neighbors, a new global-best updating strategy, and 1-D optimization. The scheme of selecting and updating the leaders involves the particles' neighborhoods. At every iteration, each particle is assigned to the closest neighborhood. Among the solutions in

1271

the neighborhood, each particle finds the local best particle, and the global best and local best are updated only when new solutions dominate them. Another work, by Zhang et al. [6], suggested a selection scheme for global and local leaders that involves computing the new leaders via a proposed equation, which depends upon each objective function and the current iteration, and deciding whether the particles should follow their leaders based upon proposed criteria. Mostaghim and Teich [7] introduced the sigma method, which allows the particles to select their global leaders based upon the minimum distance from the sigma values computed for each archive member. As reported, the approach of Hu and Eberhart [4] is only capable of dealing with a small number of objective functions. Zhang et al. [6] did not provide a generalized equation to determine the new global leaders for problems with high-dimensional objective spaces. Moreover, the new global and local leaders depend solely upon the global-best and local-best values corresponding to each objective function, which may inadvertently result in premature convergence if the global-best and local-best values corresponding to each objective function are very close. For Mostaghim and Teich [7], the sigma method can impose strong selection pressure on global leaders, and because PSO has a rapid convergence capability, this may lead to premature convergence for some MOPs. In another study, Branke and Mostaghim [8] investigated the influence of the personal best particles in MOPSO. Empirical results showed that keeping a personal best archive, together with their proposed strategies for selecting the personal best from the archive, produces significantly better results than traditional approaches.
Above all, these MOPSOs require users to heuristically choose an "appropriate" population size because they depend upon this fixed population of particles to explore the search space and to find potential global leaders to be recorded, updated, and selected by the particles. Unlike in evolutionary algorithms, each particle's flight through the search space is determined by its velocity. Particularly for complicated MOPs, it is difficult to control the velocity of each particle so that it performs an optimal flight in a high-dimensional search space. Hence, an external archive is often used to record the good particles found at each iteration [9]. Fieldsend and Singh [10] proposed the use of unconstrained archives to overcome the inefficiency caused by the truncation of constrained Pareto archives. They developed a data structure known as the dominated tree to maintain the unconstrained archive. The dominated tree consists of a list of composite points, and each composite point is a vector of archive members. These composite points are ordered by the weak dominance relation [1]. With the coordinates of these composite points and the particles' locations, the particles can select their global leaders based upon their closeness to the archive members. However, this approach restricts the particles' chances of selecting global leaders that belong to other composite points. Recently, Mostaghim and Teich [11] adopted the ε-dominance method to control the archive size and help reduce the computational cost. Li [12] proposed a nondominated sorting particle swarm optimizer (NSPSO), in which the design elements and archive maintenance of NSGA-II are adopted in the PSO design. Another prominent work was contributed by Coello Coello and Lechuga [13]. They proposed a MOPSO algorithm that incorporates the


concept of Pareto dominance and adopts an archive controller, which stores new nondominated solutions found at each iteration and decides their membership. In addition, an adaptive grid based upon the objective function values of the archive members is applied to the archive, with the goal of producing a well-distributed Pareto front. With the adaptive grid, global leaders are selected from the archive via fitness sharing and roulette-wheel selection. Nonetheless, all of these MOPSOs are required to "estimate" either a fixed archive size or some parameters (i.e., the ε value [11] or the population size [12]) to control the archive size. In recent works, the incorporation of genetic operators such as mutation and perturbation has greatly enhanced the exploration capability of MOPSO. For example, Mostaghim and Teich [7] used a turbulence factor to enhance the exploration capability of MOPSO, and Coello Coello et al. [14] proposed the use of a mutation operator to improve the exploration capability of the MOPSO presented in [13]. In addition, Sierra and Coello Coello [15] suggested a new MOPSO, known as OMOPSO. In their design, the population is divided into three subswarms of equal size, each of which adopts a different mutation operator. In doing so, the exploration and exploitation abilities are enhanced during the search process. Together with other design elements, their proposed algorithm showed good performance compared with existing evolutionary algorithms. A recent MOPSO, the elitist-mutated MOPSO (EM-MOPSO) [16], also incorporates an efficient mutation strategy called elitist mutation to enhance exploration and exploitation in the search space. Although genetic operators are adopted by these MOPSOs, the selection of an appropriate initial population size still plays an important role in homogeneously exploring the high-dimensional search space.
To further improve the performance of PSO, Toscano Pulido and Coello Coello [17] recently proposed a MOPSO algorithm that implements the subdivision of the decision space into multiple subswarms via clustering techniques. In this paper, it is referred to as the multiswarm MOPSO or simply cMOPSO. Their goal was to improve the diversity of solutions on the Pareto front. At some points during the search process, different subswarms exchange information, as each subswarm chooses a different leader other than its own to preserve diversity. Although cMOPSO has shown promising results when compared with NSGA-II [1], the Pareto archived evolution strategy (PAES) [3], and MOPSO [13], the number of particles in each swarm, which plays a critical role, is predetermined ad hoc.

III. DMOPSO

Approximation of the Pareto-optimal set involves the following two distinct objectives: 1) to obtain a nondominated front that is close to the true Pareto front and 2) to maintain the diversity of the solutions along the resulting Pareto front. As stated in [26], these two objectives are competing because the diversity preservation process slows down the convergence or may even degrade the quality of the resulting Pareto front. In one aspect, like generic PSO, MOPSO has the tendency to fly as a swarm through the search space and quickly send solutions to the optimal regions. On the other hand, the

"swarmlike" behavior must be properly controlled to explore the undiscovered areas of the search space and to produce a uniformly distributed Pareto front. It is difficult to resolve this contradiction for a MOPSO with a fixed population size because a predetermined computational resource has to be allocated and properly distributed between the two competing objectives. Therefore, instead of figuring out how to estimate an appropriate swarm population size, a dynamic population multiple-swarm MOPSO (DMOPSO) is proposed herein.

A. Algorithm Overview

The proposed algorithm, DMOPSO, is inspired by Toscano Pulido and Coello Coello [17] and incorporates the following four proposed strategies: 1) a cell-based rank density estimation scheme to keep track of the rank and density values of the particles; 2) a population-growing strategy to increase the population size so as to promote exploration capability; 3) a population-declining strategy to prevent the population size from growing excessively; and 4) adaptive local archives designed to improve the distribution of solutions along the sections of the Pareto front associated with each subswarm. The generic steps of DMOPSO are as follows. First, based upon a preset number of subswarms, every subswarm of particles is initialized, and the cell-based rank density estimation scheme is applied to initialize the rank and density values of the particles. Second, the group leaders of each subswarm are identified by the rank matrix. Third, all of the group leaders from the subswarms are collected to form the set of global leaders (Gleader). Afterward, a clustering algorithm is applied to Gleader to group the leaders, where the number of groups is determined by the number of subswarms chosen. The clustering is done with respect to closeness in the decision space [17].
Next, the resulting groups are assigned to their subswarms on the condition that the number of internal iterations (st) has not reached the user-defined maximum internal iteration value (stmax). Otherwise, the resulting groups are randomly assigned to the subswarms, and the internal iteration counter is reset to zero. The internal iteration provides a chance for all of the subswarms to share their group leaders. After that, the population-growing strategy is applied, if the conditions are met, to increase the number of particles, whereas the adaptive local archive scheme is applied to the group leaders of each subswarm to preserve diversity. Next, the particles in each subswarm select leaders from their group leaders. As soon as the leader selection process is completed, the particles perform flight and update their personal bests (pbest). Then, the following steps are repeated for all of the particles in their subswarms: 1) the particles' information is updated via the cell-based rank density estimation scheme; 2) the group leaders of the subswarms are determined; and 3) all of the group leaders are merged into Gleader. After these steps are finished, the population-declining strategy is performed to reduce the population size. These steps are repeated until the maximum iteration is reached. Unique designs of the four strategies in supporting the dynamic population are elaborated in the following sections. Fig. 1 shows the pseudocode of DMOPSO, with the four newly developed strategies highlighted in boldface.
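The generic steps above can be condensed into a control-flow sketch. This is only a minimal illustration of the loop structure (leader collection, leader sharing every stmax internal iterations, growth, and decline); the function bodies are placeholder stubs, not the authors' implementation.

```python
import random

def dmopso_outline(n_swarms=3, size0=5, t_max=6, st_max=2, seed=1):
    """Control-flow outline of DMOPSO with placeholder operators."""
    rng = random.Random(seed)
    swarms = [[rng.random() for _ in range(size0)] for _ in range(n_swarms)]
    st = 0
    sizes = []
    for t in range(t_max):
        gleader = [max(s) for s in swarms]   # collect group leaders (stub)
        groups = list(range(n_swarms))       # cluster Gleader, one per subswarm
        if st < st_max:
            st += 1                          # groups keep their own subswarm
        else:
            rng.shuffle(groups)              # share leaders across subswarms
            st = 0
        for i, g in enumerate(groups):       # grow: perturb an assigned leader
            swarms[i].append(gleader[g] + rng.gauss(0.0, 0.1))
        for s in swarms:                     # decline: drop the worst particle
            if len(s) > size0:
                s.remove(min(s))
        sizes.append(sum(len(s) for s in swarms))
    return sizes

sizes = dmopso_outline()
```

With these stub grow/decline rules, each subswarm gains one particle and sheds one per iteration, so the total population stays at n_swarms × size0; the real strategies of Sections III-C and III-D make the size genuinely dynamic.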

Fig. 1. Pseudocode of DMOPSO.

B. Cell-Based Rank Density Estimation Scheme

With dynamic population size in DMOPSO, adding and removing particles throughout the search process will directly affect the swarm population distribution. In general, a Pareto-ranking method and a niching strategy are applied to update the Pareto rank of the particles and to estimate their density as the swarm population progresses at each iteration. Instead of using these two separate techniques, i.e., a Pareto-ranking method and a niching technique, we adopt a scheme that effectively keeps track of both the Pareto rank and the density information of the particles when needed. The original m-dimensional objective space is divided into K1 × K2 × · · · × Km cells (i.e., grids); thus, the cell width di in the ith objective dimension can be calculated as

di = (max_{x∈X} fi(x) − min_{x∈X} fi(x)) / Ki,   i = 1, . . . , m   (1)

Fig. 2. Illustration of the cell-based rank density estimation scheme.

where di is the width of the cell in the ith dimension, fi refers to the ith objective, Ki denotes the number of cells designated for the ith dimension (e.g., in Fig. 2, K1 = 6 and K2 = 6),

Fig. 3. (a) Estimated objective space and divided cells. (b) Initial rank matrix of the given objective space. (c) Initial density matrix of the given objective space.

Fig. 4. (a) Swarm population and the location of each particle. (b) Rank matrix of swarm population. (c) Density matrix of swarm population.

and x is taken from the whole decision space X. We denote

Fimin = min_{x∈X} fi(x),   Fimax = max_{x∈X} fi(x),   i = 1, . . . , m.   (2)

The grid scales Ki, i = 1, . . . , m, are chosen heuristically, and prior knowledge is desirable in choosing them. As shown in Fig. 2, point c is denoted as the origin of the current objective space. In other words, c is the cross point of all of the lower boundaries of an m-dimensional objective space. The position of c is denoted as (F1min, F2min, . . . , Fmmin). For a newly generated particle S, whose position is (s1, s2, . . . , sm) in the objective space, the distance between point S and point c is measured in each dimension of the objective space as (t1, t2, . . . , tm), where

ti = si − Fimin,   i = 1, . . . , m.   (3)

Therefore, the "home address" of particle S in the ith dimension is calculated as

hi = mod(ti, di) + 1,   i = 1, . . . , m   (4)

where mod(x, y) denotes the integer part of the quotient x/y. Therefore, by this setting, finding the grid location ("home address") of a particle requires only m "division" operations. For example, in Fig. 2, the "home address" of particle S is (4, 5). Figs. 3 and 4 show examples that demonstrate the cell-based rank density estimation scheme. Assume that, in a 2-D objective space (m = 2), the objective space is determined via (2) and divided [using (1)] into 6 × 6 cells [Fig. 3(a)]. Then, the center position of each cell is obtained, and two matrices are set up following the same cell configuration as in Fig. 3(a). These two matrices store the rank and density values of each cell, which are initialized to all 1's and all 0's, respectively [Fig. 3(b) and (c)]. When a population occupies the space [Fig. 4(a)], the particles' home addresses are identified using (3) and (4). The rank and density matrices in Fig. 4(b) and (c) show how the information of the particles is stored. When a new particle is added, its home address is located; the rank values of the cells dominated by its "home" are increased by one, and the density value of its "home" is increased by one. Meanwhile, if an existing particle is removed, its "home" is notified: the rank values of the cells dominated by its "home" and the density value of its "home" are all decreased by one. Note that this ranking technique is the automatic accumulated ranking strategy used in [29]. A particle obtains its rank and density values by accessing its "home address." For instance, in Fig. 4, the "home address," rank, and density values of particle A are (5, 2), 2, and 1, respectively. It is important that the estimated range of the objective space in each dimension fi, i = 1, . . . , m, is chosen appropriately so that a newly generated or removed particle does not change the boundaries of the current objective space (Fimin and Fimax, i = 1, . . . , m). Consequently, the size of each cell will not change. By this design, the original objective of searching for a well-extended, near-optimal, and uniformly distributed Pareto front has been converted into locating as many optimal "home addresses"

Fig. 5. Pseudocode of the cell-based rank density estimation scheme.

Fig. 6. (a) Current swarm population and the location of each particle. (b) Rank value matrix of swarm population. (c) Density value matrix of swarm population. (d) Example of “selected” particles D and E.
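The home-address computation (3)–(4) and the rank/density matrix bookkeeping can be sketched as follows for the 2-D minimization case. This is a minimal sketch: the dictionary-based matrices and function names are illustrative, not the authors' data structures.

```python
def home_address(s, f_min, d):
    """Grid cell of a particle, following (3)-(4): one integer
    division per objective dimension."""
    return tuple(int((si - lo) // di) + 1
                 for si, lo, di in zip(s, f_min, d))

def add_particle(home, rank, density):
    """Bookkeeping for a newly added particle: +1 density at its home
    cell, +1 rank for every cell its home dominates (for 2-D
    minimization, cell (a, b) is dominated by home (h1, h2) when
    a >= h1 and b >= h2, excluding the home cell itself)."""
    h1, h2 = home
    density[home] = density.get(home, 0) + 1
    for (a, b) in rank:
        if a >= h1 and b >= h2 and (a, b) != home:
            rank[(a, b)] += 1

# 6 x 6 grid as in Fig. 3: rank initialized to all 1's, density to 0's
rank = {(a, b): 1 for a in range(1, 7) for b in range(1, 7)}
density = {}
# particle at (7.0, 9.0) with origin (0, 0) and cell widths (2.0, 2.0)
h = home_address((7.0, 9.0), (0.0, 0.0), (2.0, 2.0))  # cell (4, 5)
add_particle(h, rank, density)
```

Removing a particle would mirror add_particle with decrements, matching the text's description.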

as possible, each of which contains a fixed number of these particles. The pseudocode of the cell-based rank density estimation scheme is shown in Fig. 5.

C. Perturbation-Based Swarm-Population-Growing Strategy

The objective of the swarm-population-growing strategy is to increase the swarm population size as needed, to ensure a sufficient number of particles contributing to the search process, and to place those new particles in unexplored areas to discover possible new solutions. A set of procedures is proposed to facilitate the exploration and exploitation capabilities of DMOPSO and, at the same time, to increase the population size. Procedure 1: The potential candidate particles to be perturbed must have the highest probability of generating new particles that improve convergence toward the Pareto front. In this case, the nondominated set is taken as the candidate set because its members have a higher chance of generating fitter particles that improve convergence toward the

Pareto front. From the size of the nondominated set at the current iteration, a random number is used to stochastically determine the number of potential particles to be chosen from the nondominated set:

ns = ⌊r1 × (total number of particles in the nondominated set)⌋   (5)

where r1 denotes a random number drawn from a uniform distribution within [0, 1], ns denotes the number of particles selected for perturbation, and ⌊·⌋ represents the floor function. Once ns is determined, ns particles are randomly selected from the nondominated set. Fig. 6(a)–(c) presents an example of Procedure 1. Based upon the information in the rank and density matrices, the total number of particles in the nondominated set of the current swarm population equals five. Assume that ns is chosen to be two; two particles (D and E) are then randomly chosen as the candidate particles [Fig. 6(d)]. Note that these potential particles, once chosen, are referred to as selected particles.
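Procedure 1 amounts to a floor-scaled random draw followed by random sampling; a minimal sketch (function name is illustrative):

```python
import random

def select_candidates(nondominated, r1=None):
    """Procedure 1: ns = floor(r1 * |nondominated set|), r1 ~ U[0, 1],
    then ns members are sampled uniformly without replacement."""
    if r1 is None:
        r1 = random.random()
    ns = int(r1 * len(nondominated))  # floor, since the product is >= 0
    return random.sample(nondominated, ns)

# with five nondominated particles and r1 = 0.45, ns = floor(2.25) = 2
selected = select_candidates(["A", "B", "C", "D", "E"], r1=0.45)
```

This reproduces the worked example in the text: two of the five nondominated particles become the selected particles.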


Fig. 7. Number of perturbations per particle (np) versus iteration (t).

Procedure 2: The number of perturbations of each selected particle is adaptively determined at every iteration. Each selected particle is responsible for generating a certain number of new particles. A probability value is used to adaptively determine the number of perturbations (i.e., the number of new particles to be generated), which is bounded by a minimum and a maximum number of perturbations. Assume that, at iteration t, the number of perturbations for each selected particle np(t) is determined by (6), which is similar to [25]:

np(t) = (unp − lnp) × [1 − 2 × (t/tmax)²] + lnp,            0 ≤ t ≤ (tmax/2)
np(t) = (unp − lnp) × [2 × ((t − tmax)/tmax)²] + lnp,       (tmax/2) < t ≤ tmax      (6)
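Interpreting (6) as a smooth decay of the perturbation count from unp at t = 0 to lnp at t = tmax (the reconstruction used here; rounding the result to an integer count is left out), the schedule can be sketched as:

```python
def num_perturbations(t, t_max, lnp=1, unp=8):
    """Piecewise schedule (6): falls smoothly from unp at t = 0,
    through (unp + lnp) / 2 at t = t_max / 2, down to lnp at t_max."""
    if t <= t_max / 2:
        return (unp - lnp) * (1 - 2 * (t / t_max) ** 2) + lnp
    return (unp - lnp) * (2 * ((t - t_max) / t_max) ** 2) + lnp
```

The endpoints match the roles of unp and lnp stated below: many perturbations early for exploration, few late for fine-tuning.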

where tmax denotes the maximum number of iterations, lnp is the minimum number of perturbations, and unp is the maximum number of perturbations. unp is determined by the maximum allowable perturbations for each particle, whereas lnp is determined on the basis of the minimum number of perturbations required for neighborhood exploration. Fig. 7 illustrates (6).

Procedure 3: Similar to [25], the concept of perturbations within and beyond the neighborhood is adopted to balance the exploitation and exploration capabilities of DMOPSO. To avoid generating too many new particles too far away from the selected particles, it is necessary to generate more new particles within the neighborhood than outside of it. To achieve this goal, the following set of equations is proposed:

r2 = abs(Gaussian(0, 1/9))                                        (7)
ld = rld × xLj                                                    (8)
ud = rud × xUj                                                    (9)
Δd(r2) = (ud − ld) × [2 × (r2)²] + ld,               0 ≤ r2 ≤ 0.5
Δd(r2) = (ud − ld) × [1 − 2 × (r2 − 1)²] + ld,       0.5 ≤ r2 ≤ 1      (10)
xi,j = xi,j + Δd(r2).                                             (11)

Fig. 8. The additional distance Δd(r2) versus r2.

Equation (10) determines the additional distance from the selected particle in the decision space. Δd is defined as a function of r2 (see Fig. 8). Several parameters are needed to compute (10): r2, ld, and ud, which are computed via (7)–(9), respectively. In (8), ld denotes the minimum additional distance. It is computed by multiplying the parameters rld and xLj, where rld is the user-defined lower-bound ratio and xLj is the lower bound of the decision variable x in dimension j. The parameter ud denotes the maximum additional distance. Equation (9) shows how ud is calculated, where rud is the user-defined upper-bound ratio and xUj is the upper bound of the decision variable x in dimension j. In this paper, the parameters rld and rud are selected within the ranges [0, 0.02] and (0.02, 0.7], respectively. As presented in (7), the parameter r2 is the absolute value of a random number drawn from a Gaussian distribution with zero mean and a variance of 1/9. With mean 0 and variance σ² = 1/9, more random numbers are generated near the lower end of the range, i.e., [0, 3σ/2], and fewer near the upper end, i.e., (3σ/2, 3σ]. Once Δd is computed, it is added to the decision variable of the selected particle i in dimension j, i.e., xi,j [(11)]. Notice that, because the resulting r2 value is more likely to be less than or equal to 0.5 [according to (7)], Δd will more likely be small (Fig. 8). Consequently, the new xi,j value will more likely lie within the neighborhood than outside of it. Fig. 9(a)–(c) shows an illustration of Procedure 3. In Fig. 9(b), the inside neighborhood of particle E [from Fig. 9(a)] in the decision space is bounded by the circumference of the inner circle.
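Equations (7)–(11) can be sketched as follows. This is a minimal sketch; clamping r2 to [0, 1] is an added assumption here, since |Gaussian(0, 1/9)| can marginally exceed 1, and the rld/rud values are merely examples from the stated ranges.

```python
import random

def delta_d(r2, ld, ud):
    """Additional distance (10): smooth interpolation from ld at
    r2 = 0, through (ld + ud) / 2 at r2 = 0.5, up to ud at r2 = 1."""
    if r2 <= 0.5:
        return (ud - ld) * (2 * r2 ** 2) + ld
    return (ud - ld) * (1 - 2 * (r2 - 1) ** 2) + ld

def perturb(x_ij, x_lo, x_hi, rld=0.01, rud=0.1):
    """Procedure 3, (7)-(11): perturb one decision variable of a
    selected particle (illustrative sketch)."""
    r2 = min(abs(random.gauss(0.0, 1.0 / 3.0)), 1.0)  # (7): sigma^2 = 1/9
    ld, ud = rld * x_lo, rud * x_hi                    # (8)-(9)
    return x_ij + delta_d(r2, ld, ud)                  # (10)-(11)
```

Because small r2 values dominate the half-Gaussian draw, most perturbed offspring land within the inner neighborhood, as the text argues.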
The outside neighborhood is the area between the inner and outer circles, where the radius of the outer circle is ud (as shown in Fig. 8). Fig. 9(b) also shows that a new particle, denoted as particle E1, is generated by computing (7)–(11). Particle E1 is mapped to the objective space in Fig. 9(c). As shown in Fig. 9(c), using Procedure 3, particles E1 and E2 are generated by particle E, whereas particles D1 and D2 are generated by particle D. Fig. 10 shows the pseudocode of the swarm-population-growing strategy.

D. Swarm-Population-Declining Strategy

To prevent excessive growth in the swarm population, a population-declining strategy is proposed to adaptively control

LEONG AND YEN: PSO-BASED MULTIOBJECTIVE OPTIMIZATION

1277

Fig. 9. (a) Selected particles (D and E) from Fig. 6(d). (b) Representation of (9) in the decision space. (c) Current swarm population and newly added particles in the objective space.

Fig. 10. Pseudocode of the swarm-population-growing strategy.
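The growing strategy of (7)–(11) can be sketched as below. The function name and defaults are hypothetical, and, since the exact shape of Δd in (10) and Fig. 8 is not reproduced here, a simple linear interpolation between ld and ud is assumed:

```python
import random

def grow_particle(x, r_ld=0.01, r_ud=0.3, x_lower=0.0, x_upper=1.0):
    """Perturb one decision variable of a selected particle, per (7)-(11).

    x            : current value of the decision variable x_{i,j}
    r_ld, r_ud   : user-defined bound ratios (paper: [0, 0.02] and (0.02, 0.7])
    x_lower/upper: bounds x_j^L and x_j^U of the decision variable in dimension j
    """
    # (7): r2 is the absolute value of a Gaussian sample, mean 0, variance 1/9
    r2 = abs(random.gauss(0.0, 1.0 / 3.0))      # sigma = 1/3, so r2 mostly in [0, 1]
    # (8)-(9): minimum and maximum additional distances
    ld = r_ld * x_lower
    ud = r_ud * x_upper
    # (10): delta_d as a function of r2 -- ASSUMED linear here; the paper
    # defines the exact shape in Fig. 8, which is not reproduced in the text.
    t = min(r2, 1.0)
    delta_d = ld + t * (ud - ld)
    # (11): add the offset to the selected particle's decision variable
    return x + delta_d
```

Because r2 is usually small, the new value tends to fall in the inside neighborhood, matching the behavior described for Fig. 9.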

the swarm population size. In DMOPSO, the condition for removing a particle depends upon two indicators, i.e., the rank and crowdedness indicators. With these indicators, the deletion of particles is based solely on their contribution to the search process and their distribution density in the objective space. These indicators are computed from the values in the rank and density matrices, as presented in Section III-B, and they are utilized to select candidate particles for deletion. Then, a selection ratio is applied to regulate the number of particles to be removed while providing some degree of diversity preservation. First, the rank and crowdedness indicators are introduced; then, the procedure for the swarm-population-declining strategy is elaborated. Rank Indicator: This indicator embodies the idea that particles that are far from the nondominated front have a lower chance of remaining in the next iteration because they have a higher chance of “losing” their leaders. This means that the

particles that are far from the nondominated front are likely to be eliminated. The rank value of a cell, obtained from the rank matrix, is converted into a rank indicator in order to measure the dominance status of a cell compared with the others. Fig. 11(b) shows the rank matrix of the current population shown in Fig. 11(a). Assume that, at iteration t, the cell c in which particle i is located has the rank value rank(c_i, t); the rank indicator of particle i located at cell c at iteration t, R(i, t), is given as

R(i, t) = 1 / rank(c_i, t).    (12)

Equation (12) indicates that a particle that resides in the cell with rank value “1” [e.g., particle F in Fig. 11(a)] will have its rank indicator, R value, equal to one, as shown in Fig. 11(c). The particle in a cell with a higher rank value will result in a


Fig. 11. (a) Current swarm population and the location of each particle. (b) Rank matrix of the current swarm population. (c) R values for particles F and G.

Fig. 12. (a) Current swarm population and the location of each particle. (b) Density matrix of the current swarm population. (c) D values for particles F and G.

lower R value [refer to Fig. 11(b)]. The rank value of particle G is higher than that of particle F, and Fig. 11(c) shows that G's resulting R value is much lower, i.e., 0.11. Hence, a decreasing R value implies an increasing chance of the particle being eliminated because it is farther from the nondominated front. Note that 0 < R(i, t) ≤ 1. Crowdedness Indicator: This indicator controls the local population size, i.e., the population size per cell. The population size per cell is regarded as the density value of a cell, which is defined as the number of particles located in that cell. Using the density information of the cell of concern, obtained from the density matrix, the crowdedness indicator of a particle in that cell can be computed. Fig. 12(b) shows the density matrix of the current swarm population, which is shown in Fig. 12(a). At iteration t, the cell c in which particle i is located has the density value density(c_i, t). The crowdedness indicator of particle i located at cell c at iteration t, D(i, t), is defined as

D(i, t) = 1 − ppv/density(c_i, t),  if density(c_i, t) > ppv;  0, otherwise.    (13)

Note that the ppv value represents the desired local population size per cell, and it is a user-defined parameter. Equation (13) shows that a cell with a high density value will have a D value closer to one. In Fig. 12(b), the density value of the cell containing particle F is equal to three. With the ppv value set to two, the D value of particle F is equal to 0.33 [refer to Fig. 12(c)]. On the other hand, if a cell has a density value lower than or equal to ppv, then the D value is equal to zero [particle G in Fig. 12(c)]. This indicates that if the particles reside in a

cell that has more than the desired local population size, then these particles are likely to be eliminated to reduce the level of congestion in that cell. Note that 0 ≤ D(i, t) < 1. Likelihood of Removing the Particle: At iteration t, the likelihood that particle i, with rank indicator R(i, t) and crowdedness indicator D(i, t), is to be eliminated is computed as

L(i, t) = (1 − R(i, t)) × D(i, t).    (14)

Equation (14) implies that particles that have low R values (i.e., away from the nondominated front) and high D values (i.e., located in crowded cells) will have a high likelihood value L (refer to Figs. 11 and 12). The L values for particles F and G are both zero, implying that each is either located on the nondominated front or in a noncrowded cell. To determine whether a selected particle i will be removed, a random number with uniform distribution on [0, 1] is generated and compared with the likelihood L(i, t). If the likelihood is larger than the random number, then particle i is selected as a candidate to be eliminated from the swarm population. Note that, at iteration t, all particles selected for elimination are stored in a temporary memory Mt. Then, the selection ratio is applied to determine the exact number of particles in Mt to be eliminated from the swarm population. Selection Ratio: If the removal of particles were based only upon the L value, then there would be a possibility of eliminating an excessively large number of particles, some of which may carry unique schemata that contribute to the search process. A selection ratio inspired by Coello Coello and Montes [30] is used to stochastically allocate a small percentage of particles in the population for removal. Hence, given a selection ratio,


Fig. 13. Pseudocode of the swarm-population-declining strategy.
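One pass of the declining strategy of Fig. 13, combining (12)–(15), might be sketched as follows; the function and argument names are hypothetical, and the dictionaries standing in for the rank and density matrices are simplifications:

```python
import random

def declining_step(particles, rank_of, density_of, ppv=2, S=0.2):
    """One pass of the swarm-population-declining strategy, per (12)-(15).

    particles : list of particle ids
    rank_of   : dict id -> rank value of the particle's cell (>= 1)
    density_of: dict id -> density (occupant count) of the particle's cell
    ppv       : desired local population size per cell (user-defined)
    S         : selection ratio, a small fixed number (paper: S <= 0.2)
    """
    M = []                                       # temporary memory M_t
    for p in particles:
        R = 1.0 / rank_of[p]                     # (12) rank indicator
        d = density_of[p]
        D = 1.0 - ppv / d if d > ppv else 0.0    # (13) crowdedness indicator
        L = (1.0 - R) * D                        # (14) likelihood of removal
        if L > random.random():                  # stochastic pre-selection into M_t
            M.append(p)
    n_remove = int(S * len(M))                   # (15) allocated number of removals
    return random.sample(M, n_remove)            # particles actually removed
```

Note that a particle in a rank-1 cell (R = 1) or a noncrowded cell (D = 0) has L = 0 and can never be removed, mirroring the discussion of particles F and G.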

Fig. 14. (a) Two group leaders are grouped via clustering algorithm. (b) Two group leaders in the decision space are mapped to the objective space. (c) Adaptive grid procedure is applied to the local archive of G1.

Fig. 15. Pseudocode of the adaptive local archive procedure.
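The adaptive local archive and group-leader selection procedures of Fig. 15 (elaborated in Section III-E) can be sketched as below. All names are hypothetical, and the grid bookkeeping is condensed into a single call rather than the incremental update described in the text:

```python
import random

def select_group_leader(leaders, objectives, Ka=4, fixed_value=1.0):
    """Group-leader selection via an adaptive grid with fitness sharing.

    leaders    : list of leader ids in one local archive
    objectives : dict id -> tuple of objective values
    Ka         : user-defined number of grid subdivisions per dimension
    """
    m = len(next(iter(objectives.values())))
    lo = [min(objectives[l][k] for l in leaders) for k in range(m)]
    hi = [max(objectives[l][k] for l in leaders) for k in range(m)]

    def cell_of(l):
        # map a leader to its cell index along each objective dimension
        return tuple(
            min(Ka - 1, int((objectives[l][k] - lo[k]) / ((hi[k] - lo[k]) or 1) * Ka))
            for k in range(m))

    cells = {}
    for l in leaders:
        cells.setdefault(cell_of(l), []).append(l)
    # fitness sharing: each occupied cell's fixed value is divided by its
    # occupant count, so congested cells get low F_A values
    FA = {c: fixed_value / len(occ) for c, occ in cells.items()}
    # roulette wheel over F_A favors the least congested cells
    total = sum(FA.values())
    r, acc = random.uniform(0, total), 0.0
    for c, fa in FA.items():
        acc += fa
        if r <= acc:
            return random.choice(cells[c])   # random occupant of the chosen cell
    return random.choice(leaders)            # floating-point fallback
```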

S ∈ [0, 1], at iteration t, the number of particles with high likelihood L to be eliminated is given as

Popremove(t) = S × |Mt|    (15)

where Popremove (t) is the allocated number of particles in the population for elimination and |Mt | denotes the population size in Mt at iteration t. Note that the choice of the selection ratio is dependent upon the user’s preference, where it can be a function


TABLE I PARAMETER CONFIGURATIONS FOR FIVE SELECTED MOPSOS

TABLE II PARAMETER CONFIGURATIONS FOR DMOPSO WITH NUMBER OF ITERATIONS BASED UPON 20 000 EVALUATIONS

of the swarm population size at each iteration, or it can be a fixed ratio. In this paper, the selection ratio is a fixed small number, i.e., S ≤ 0.2. With a small selection ratio, there is a possibility that some selected particles in Mt may not be eliminated. In other words, some of the selected particles in Mt whose rank indicators are low or that are located in crowded cells may remain in the next iteration. In addition, a small selection ratio prevents the removal of an uncontrollably large number of particles while providing some degree of diversity preservation. Fig. 13 shows the pseudocode of the swarm-population-declining strategy.

E. Adaptive Local Archive and Group-Leader Selection Procedures

In cMOPSO [17], based upon a probability value, the particles in a subswarm randomly select their assigned group leaders after all resulting group leaders are grouped via a clustering algorithm. Random selection gives each group leader an equal probability of being chosen as the leader of a particle and has a higher probability of achieving tightly grouped solutions that are close to the true Pareto front [31]. Yet, the resulting Pareto

front may not extend well across the complete Pareto front. For this reason, a local search procedure is adopted to improve the solutions in each swarm [32]; this procedure is here called the adaptive local archive. Similar to the adaptive grid procedure proposed in [3] and [14], the aim of the adaptive local archive is to improve the diversity in the section of the Pareto front associated with each subswarm. The following presents the adaptive local archive and group-leader selection procedures. Adaptive Local Archive Procedure: Once the clustering algorithm is applied to Gleader to group the leaders in the decision space, each group of leaders is referred to as a local archive. Each local archive contains the group leaders that correspond to its subswarm [e.g., G1 and G2 shown in Fig. 14(a) and (b)]. For the purpose of visualization, m is chosen to be two in Fig. 14. In each local archive, using the group leaders' objective values, the objective space is divided into a set of cells by the adaptive grid procedure. Then, each particle chooses its group leader by following the group-leader selection procedure. In each local archive, the number of particles that each cell contains is recorded. At each iteration, if any new group leaders lie outside the current bound of the grid,


Fig. 16. Box plot of the hypervolume indicator (IH values) for all test functions (starting from the top left) by algorithms 1–6, represented (in order) by DMOPSO, OMOPSO, MOPSO, EM-MOPSO, sMOPSO, and NSPSO. TABLE III DISTRIBUTION OF IH VALUES TESTED BY USING THE MANN–WHITNEY RANK-SUM TEST. THE TABLE PRESENTS THE z- AND p-VALUES WITH RESPECT TO THE ALTERNATIVE HYPOTHESIS (i.e., p < α = 0.05) FOR EACH PAIR OF DMOPSO AND A SELECTED MOPSO. IN EACH CELL, BOTH VALUES ARE PRESENTED IN PARENTHESES: (z-VALUE, p-VALUE). THE DISTRIBUTION OF DMOPSO IS SIGNIFICANTLY DIFFERENT FROM THOSE OF THE SELECTED MOPSOS UNLESS STATED OTHERWISE [34]

then the objective space is redivided on the basis of the new fitness values. Each particle is relocated to its nearest cell, and the number of particles that the cell contains is updated accordingly. For simplicity, in this paper, the number of cells is predetermined from a user-defined number of grid subdivisions, Ka, for all dimensions. This means that the m-dimensional objective space is divided into K1 × K2 × · · · × Km = Ka × Ka × · · · × Ka cells. In Fig. 14(c), the number of grid subdivisions Ka is equal to four. Fig. 15 shows the pseudocode of the adaptive local archive procedure. Group-Leader Selection Procedure: After the adaptive local archive procedure is completed, the information on the number

of “occupants” in the cells of the local archive is utilized. The cells that contain particles are first assigned a fixed value. Following the idea of fitness sharing, the fixed value of each cell is divided by the number of particles that it contains. For simplicity, the resulting value of a cell is defined as FA(a), where a represents a cell in the local archive. Next, using all available FA values, roulette wheel selection is applied to select a cell. In the selected cell, particle i randomly selects one of the “occupants.” The idea of applying fitness sharing is to measure the level of congestion in each cell. Highly congested cells will have low FA values, and vice versa. With roulette wheel


Fig. 17. Box plot based upon the additive binary epsilon indicator (Iε+ values) on test function ZDT1 (algorithms 1–5 are referred to as OMOPSO, MOPSO, EM-MOPSO, sMOPSO, and NSPSO, respectively).

Fig. 18. Box plot based upon the additive binary epsilon indicator (Iε+ values) on test function ZDT2 (algorithms 1–5 are referred to as OMOPSO, MOPSO, EM-MOPSO, sMOPSO, and NSPSO, respectively).

Fig. 19. Box plot based upon the additive binary epsilon indicator (Iε+ values) on test function ZDT3 (algorithms 1–5 are referred to as OMOPSO, MOPSO, EM-MOPSO, sMOPSO, and NSPSO, respectively).

selection, this selection scheme favors the least congested cell. As a result, the particle will choose one of the group leaders that reside in the least congested cell. Therefore, the leaders are selected in such a way that diversity is preserved.

IV. COMPARATIVE STUDY

Three experiments are performed. The first experiment evaluates the performance of DMOPSO against five selected MOPSOs, the second evaluates the performance of DMOPSO against two well-known MOEAs, and the third compares the computational cost of DMOPSO with that of the chosen MOPSOs.

A. Selected Performance Metrics

Both quantitative and qualitative comparisons are made to validate the proposed DMOPSO against the selected MOPSOs and MOEAs. For qualitative comparison, the plots of the final Pareto fronts are presented for visualization. For quantitative comparison, two performance metrics are used to measure the performance of the algorithms with respect to dominance relations. The results are illustrated by statistical box plots. Hypervolume Indicator (S Metric) [33]: Assuming a minimization problem, this unary indicator calculates the size of the objective-space region that is dominated by the nondominated set and bounded by a reference point. Larger values indicate a better nondominated set. The advantage of this


Fig. 20. Box plot based upon the additive binary epsilon indicator (Iε+ values) on test function ZDT4 (algorithms 1–5 are referred to as OMOPSO, MOPSO, EM-MOPSO, sMOPSO, and NSPSO, respectively).

Fig. 21. Box plot based upon the additive binary epsilon indicator (Iε+ values) on test function ZDT6 (algorithms 1–5 are referred to as OMOPSO, MOPSO, EM-MOPSO, sMOPSO, and NSPSO, respectively).

Fig. 22. Box plot based upon the additive binary epsilon indicator (Iε+ values) on test function DTLZ2 (algorithms 1–5 are referred to as OMOPSO, MOPSO, EM-MOPSO, sMOPSO, and NSPSO, respectively).

indicator is that it is able to measure both diversity and how well the algorithm converges to the true Pareto front. Given two nondominated sets A and B with the same reference point, the hypervolume indicator of A is denoted as IH(A), and that of B as IH(B). If IH(A) > IH(B), then B is not better than A; a certain portion of the objective space is dominated by A but not by B. The Mann–Whitney rank-sum test is implemented to test for a significant difference between two independent samples [34].
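For reference, minimal sketches of the two indicators used in this section can be written as below, assuming a minimization problem, fronts already reduced to nondominated points, and (for the hypervolume) only two objectives; the function names are illustrative:

```python
def hypervolume_2d(front, ref):
    """Hypervolume (S metric) of a 2-D minimization front w.r.t. reference point `ref`.

    front : list of (f1, f2) mutually nondominated points
    ref   : (r1, r2) reference point dominated by every front point
    """
    # sort by the first objective and sweep rectangles left to right;
    # for a nondominated 2-D front, f2 decreases as f1 increases
    pts = sorted(front)
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv


def eps_additive(A, B):
    """Additive binary epsilon indicator I_eps+(A, B): the smallest offset eps
    such that every point of B is weakly dominated by some point of A shifted by eps."""
    return max(min(max(a_i - b_i for a_i, b_i in zip(a, b)) for a in A) for b in B)
```

For example, hypervolume_2d([(1, 3), (2, 2), (3, 1)], (4, 4)) yields 6.0, and eps_additive returns a negative value when the first set lies strictly below the second.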

Additive Binary Epsilon Indicator [35]: This binary indicator aims at detecting whether one nondominated set is better than another. Given two nondominated sets A and B, the additive binary epsilon indicators for the pair are denoted as Iε+ (A, B) and Iε+ (B, A). If Iε+ (A, B) < 0 and Iε+ (B, A) > 0, then A is strictly better than B. If Iε+ (B, A) ≤ 0 and Iε+ (A, B) < Iε+ (B, A), then A weakly dominates B. Finally, if Iε+ (A, B) > 0 and Iε+ (B, A) > 0, then A ∥ B, which indicates that A and B are incomparable. Again, the Mann–Whitney rank-sum test is implemented to check if


TABLE IV DISTRIBUTION OF Iε+ VALUES TESTED BY USING THE MANN–WHITNEY RANK-SUM TEST. THE TABLE PRESENTS THE z- AND p-VALUES WITH RESPECT TO THE ALTERNATIVE HYPOTHESIS (i.e., p < α = 0.05) FOR EACH PAIR OF DMOPSO AND A SELECTED MOPSO. IN EACH CELL, BOTH VALUES ARE PRESENTED IN PARENTHESES: (z-VALUE, p-VALUE). FOR SIMPLICITY, DMOPSO IS REPRESENTED BY A, AND ALGORITHMS B1–B5 ARE REFERRED TO AS OMOPSO, MOPSO, EM-MOPSO, SMOPSO, AND NSPSO, RESPECTIVELY. THE DISTRIBUTION OF DMOPSO IS SIGNIFICANTLY DIFFERENT FROM THOSE OF THE SELECTED MOPSOS UNLESS STATED OTHERWISE [34]

Fig. 23. Pareto fronts produced by (a) DMOPSO, (b) OMOPSO, (c) MOPSO, (d) EM-MOPSO, (e) sMOPSO, and (f) NSPSO on test function ZDT1.

there is a significant difference between the two distributions of Iε+ (A, B) and Iε+ (B, A) [34]. B. Experiment 1: Performance Evaluation of DMOPSO Against the Selected MOPSOs 1) Test Function Suite: To compare the performance of DMOPSO with that of the selected MOPSOs, the following six benchmark test problems are used [36], [37]: ZDT1, ZDT2, ZDT3, ZDT4, ZDT6, and DTLZ2. The first five test problems are two-objective minimization problems. ZDT1 has a convex Pareto front, whereas ZDT2 has a nonconvex Pareto front. ZDT1 and ZDT2 challenge the algorithms' ability to find and produce a quality spread of the Pareto front. ZDT3 possesses a nonconvex and disconnected Pareto front. It exploits the

algorithms' ability to search for all of the disconnected regions and to maintain a uniform spread on those regions. The difficulty of ZDT4 lies in finding the global Pareto front among all of its 21^9 local Pareto-optimal fronts. Hence, it presents a multimodal challenge. Finally, ZDT6 has a nonconvex Pareto front. Its difficulties rest on the low density of solutions across the nonconvex Pareto front and the nonuniform spread of solutions along the Pareto front. The last test problem, DTLZ2, involves three objective functions and 12 decision variables. Its true Pareto front lies on the first quadrant of a unit sphere. Note that the number of decision variables is set to 100 for all two-objective benchmark test problems instead of the standard number, i.e., 30. This allows us to test all of the chosen MOPSOs on a higher number of decision variables.


Fig. 24. Pareto fronts produced by (a) DMOPSO, (b) OMOPSO, (c) MOPSO, (d) EM-MOPSO, (e) sMOPSO, and (f) NSPSO on test function ZDT2.

Fig. 25. Pareto fronts produced by (a) DMOPSO, (b) OMOPSO, (c) MOPSO, (d) EM-MOPSO, (e) sMOPSO, and (f) NSPSO on test function ZDT3.

2) Parameter Settings for the First Experiment: Five MOPSOs are selected for performance and computational cost comparison. They are OMOPSO [15], MOPSO [14], EM-MOPSO [16], Sigma-MOPSO (sMOPSO) [7], and NSPSO

[12]. Each algorithm is set to perform 20 000 fitness function evaluations. The parameter configurations for all selected MOPSO algorithms are summarized in Table I, whereas Table II presents DMOPSO’s parameter configurations for


Fig. 26. Pareto fronts produced by (a) DMOPSO, (b) OMOPSO, (c) MOPSO, (d) EM-MOPSO, (e) sMOPSO, and (f) NSPSO on test function ZDT4.

Fig. 27. Pareto fronts produced by (a) DMOPSO, (b) OMOPSO, (c) MOPSO, (d) EM-MOPSO, (e) sMOPSO, and (f) NSPSO on test function ZDT6.

each test function. Due to the dynamic swarm population size, the stopping criterion for DMOPSO is 20 000 fitness function evaluations instead of a maximum number of iterations. Note that all of the algorithms produce final Pareto fronts of fixed-size population, except for DMOPSO, which does not have a fixed archive size. All of the algorithms are implemented in Matlab. In this paper, all of the algorithms use a real-number representation for decision variables. However, binary representation of


Fig. 28. Pareto fronts produced by (a) DMOPSO, (b) OMOPSO, (c) MOPSO, (d) EM-MOPSO, (e) sMOPSO, and (f) NSPSO on test function DTLZ2.

decision variables can be easily adopted if necessary. For each experiment, 50 independent runs were conducted to collect the statistical results. 3) Simulation Results and Analysis: The hypervolume indicator (IH value) is computed for each MOPSO over 50 independent runs. Fig. 16 shows the box plots of the IH values obtained by all MOPSOs. The figure shows that DMOPSO and MOPSO share the highest IH values for most test functions, except for test function DTLZ2. Aside from MOPSO, EM-MOPSO also achieves the highest IH values, except for ZDT4 and DTLZ2. DMOPSO achieves the highest IH value for DTLZ2. A high IH value indicates the ability of the algorithm to dominate a larger region in the objective space. It is hard to determine whether DMOPSO is significantly better than MOPSO for test functions ZDT1, ZDT2, ZDT3, ZDT4, and ZDT6 because they attain relatively close IH values in Fig. 16. Hence, the Mann–Whitney rank-sum test is used to examine the distributions of the IH values. The test results are presented in Table III. Observing the results in Table III, DMOPSO and MOPSO share the same victory for test functions ZDT1, ZDT2, ZDT3, and ZDT4, whereas the results also show that there is no difference in performance between DMOPSO and EM-MOPSO on test functions ZDT2 and ZDT3. Only for function ZDT2 does OMOPSO share the winner's slot with DMOPSO, MOPSO, and EM-MOPSO. Against the rest of the MOPSOs (i.e., sMOPSO and NSPSO), DMOPSO clearly performs better. In addition, Fig. 16 shows that the standard deviations for DMOPSO are consistently lower, except for DTLZ2, which indicates that DMOPSO is more reliable than the selected MOPSOs in producing solutions with good diversity and convergence on the two-objective test problems. Figs. 17–22 show the results (in box plots) for the additive binary ε-indicator, where each figure gives the results for one test function. Each figure presents two box plots, of Iε+ (DMOPSO, X1−5) and Iε+ (X1−5, DMOPSO), in which

algorithms 1–5 represent OMOPSO, MOPSO, EM-MOPSO, sMOPSO, and NSPSO, respectively. DMOPSO performs relatively better with respect to the dominance relation than most of the MOPSOs (i.e., OMOPSO, sMOPSO, and NSPSO) for functions ZDT1–ZDT6. For example, DMOPSO strictly dominates NSPSO on function ZDT1 because Iε+ (DMOPSO, X5) ≈ 0 and Iε+ (X5, DMOPSO) ≫ 0; similar observations apply to the other algorithms. Figs. 17–21 show that MOPSO seems to perform as well as DMOPSO for functions ZDT1–ZDT6. One obvious case is shown in Fig. 18 (for function ZDT2), where Iε+ (DMOPSO, X2) ≈ 0 and Iε+ (X2, DMOPSO) ≈ 0. The figures also show that EM-MOPSO and DMOPSO share good performance on ZDT1, ZDT2, ZDT3, and ZDT6. Moreover, DMOPSO has lower standard deviations, consistent with those shown in Fig. 16. The box plot on DTLZ2 in Fig. 22 shows that DMOPSO does not strictly dominate the rest of the MOPSOs because Iε+ (DMOPSO, X1−5) > 0 and Iε+ (X1−5, DMOPSO) > 0. For further analysis, the distributions of the Iε+ values are tested by using the Mann–Whitney rank-sum test; the results are presented in Table IV. Table IV also confirms that MOPSO, as well as EM-MOPSO, performs equally well as DMOPSO on function ZDT2. Surprisingly, the p-values that correspond to function DTLZ2 (in Table IV) show that DMOPSO performs better than all MOPSO algorithms, except for NSPSO, which did as well as DMOPSO. Hence, when we combine the results given in Fig. 22 and Table IV for function DTLZ2, we can conclude that DMOPSO at least weakly dominates algorithms OMOPSO, MOPSO, EM-MOPSO, and sMOPSO for some test problems. In general, the results in Table IV and Figs. 17–22 confirm that DMOPSO is significantly better than most or even all of the MOPSOs in terms of performance on all test functions. For qualitative comparison, the resulting nondominated fronts (from a single run) of the six MOPSOs on all test func-


TABLE V PARAMETER CONFIGURATIONS FOR NSGA-II [1] AND SPEA2 [2]

TABLE VI PARAMETER CONFIGURATIONS FOR DMOPSO

Fig. 29. Box plot of the hypervolume indicator (IH values) for DTLZ1–DTLZ3 (starting from the left) by algorithms 1–3 represented (in order): DMOPSO, NSGA-II, and SPEA2.

tions are shown in Figs. 23–28. The figures show that DMOPSO is able to find well-extended and near-optimal Pareto fronts despite the large number of decision variables for test functions ZDT1–ZDT6. MOPSO comes in second, producing quality Pareto fronts similar to those produced by DMOPSO except for function DTLZ2, whereas EM-MOPSO is in third place for not being able to produce the Pareto front for functions ZDT4 and DTLZ2. sMOPSO and NSPSO produce the worst Pareto fronts because they have difficulty in converging toward the true Pareto front, specifically for functions ZDT1–ZDT6 with high-dimensional decision spaces.

TABLE VII DISTRIBUTION OF IH VALUES TESTED BY USING THE MANN–WHITNEY RANK-SUM TEST. THE TABLE PRESENTS THE z- AND p-VALUES WITH RESPECT TO THE ALTERNATIVE HYPOTHESIS (i.e., p < α = 0.05) FOR EACH PAIR OF DMOPSO AND A SELECTED MOEA. IN EACH CELL, BOTH VALUES ARE PRESENTED IN PARENTHESES: (z-VALUE, p-VALUE). THE DISTRIBUTION OF DMOPSO IS SIGNIFICANTLY DIFFERENT FROM THOSE OF THE SELECTED MOEAS UNLESS STATED OTHERWISE [34]

C. Experiment 2: Performance Evaluation of DMOPSO Against the Selected MOEAs 1) Test Function Suite: The following three benchmark test problems are chosen to evaluate the performance of DMOPSO against the selected MOEAs: DTLZ1, DTLZ2, and DTLZ3 [37].

DTLZ1 has seven decision variables, and the challenge is to converge to the hyperplane (global Pareto front) among all of the (11^7 − 1) local Pareto-optimal fronts. DTLZ2 involves


Fig. 30. Box plot based upon the additive binary epsilon indicator (Iε+ values) on test function DTLZ1 (algorithms 1–2 are referred to as NSGA-II and SPEA2, respectively).

12 decision variables, and its true Pareto front is on the first quadrant of a unit sphere. For DTLZ3, an MOEA or MOPSO has a high probability of getting stuck at one of the local Pareto-optimal fronts because all local Pareto fronts are parallel to the global Pareto-optimal front. 2) Parameter Settings for the Second Experiment: The two selected MOEAs for performance comparison are NSGA-II [1] and SPEA2 [2]. Each algorithm, including DMOPSO, is set to perform 50 000 fitness function evaluations for DTLZ1 and DTLZ2, whereas 100 000 fitness evaluations are given for the much more difficult problem, DTLZ3. The parameter configurations for the selected MOEAs are summarized in Table V, whereas Table VI presents DMOPSO's parameter configurations for each of the three test functions. Note that NSGA-II and SPEA2 were obtained from the PISA archive at the Swiss Federal Institute of Technology, Zurich, Switzerland [38]. For each experiment, 50 independent runs were conducted to collect the statistical results. 3) Simulation Results and Analysis: In Fig. 29, the box plots of the hypervolume indicator (IH values) show that DMOPSO obtains almost the same performance as NSGA-II on all test functions, whereas it lags behind SPEA2 on DTLZ2 and DTLZ3. Table VII supports this observation: DMOPSO and NSGA-II show no difference in performance for DTLZ2 and DTLZ3, and SPEA2 claims superiority in producing well-distributed and good Pareto-optimal fronts. Although Table VII indicates that the performances of NSGA-II and SPEA2 are slightly better than that of DMOPSO for DTLZ1, Fig. 30 shows that DMOPSO is incomparable, as defined in [35], to those two MOEAs because Iε+ (DMOPSO, X1−2) > 0 and Iε+ (X1−2, DMOPSO) > 0. Observing the results in Table VIII, DMOPSO and NSGA-II are on equal ground in terms of performance, and DMOPSO and SPEA2 are incomparable. Similarly, Fig. 31 does not distinguish which algorithm performs better.
Hence, we conclude that DMOPSO is incomparable with the selected MOEAs for DTLZ2. On the contrary, Fig. 32 shows that DMOPSO weakly dominates NSGA-II [i.e., Iε+ (DMOPSO, X1) > 0 and Iε+ (DMOPSO, X1) < Iε+ (X1, DMOPSO)], and DMOPSO and SPEA2 are incomparable. However, Table VIII indicates that there is no difference in performance among the three algorithms for DTLZ3; thus, Table VIII shows incon-

TABLE VIII DISTRIBUTION OF Iε+ VALUES TESTED BY USING THE MANN–WHITNEY RANK-SUM TEST. THE TABLE PRESENTS THE z- AND p-VALUES WITH RESPECT TO THE ALTERNATIVE HYPOTHESIS (i.e., p < α = 0.05) FOR EACH PAIR OF DMOPSO AND A SELECTED MOEA. IN EACH CELL, BOTH VALUES ARE PRESENTED IN PARENTHESES: (z-VALUE, p-VALUE). DMOPSO IS REPRESENTED BY A, AND ALGORITHMS B1 AND B2 ARE REFERRED TO AS NSGA-II AND SPEA2, RESPECTIVELY. THE DISTRIBUTION OF DMOPSO IS SIGNIFICANTLY DIFFERENT FROM THOSE OF THE SELECTED MOEAS UNLESS STATED OTHERWISE [34]

sistency with the findings in Fig. 32. This may be due to the large standard deviations observed for DMOPSO in Figs. 29–32, where DMOPSO does not consistently produce a better Pareto-optimal front. In Figs. 33–35, the best nondominated sets of the three algorithms are presented. The figures show that DMOPSO is able to find quality Pareto-optimal fronts that are at least comparable with those of NSGA-II, despite DMOPSO producing slightly poorer results than SPEA2. The figures also show that SPEA2 produces the best quality Pareto-optimal fronts, i.e., well-distributed and near-optimal Pareto fronts. Overall, we find that DMOPSO is competitive in performance with two state-of-the-art MOEAs, specifically NSGA-II and SPEA2. D. Experiment 3: Comparison of the Computational Cost By introducing the dynamic population approach, DMOPSO produces better overall performance compared with the selected MOPSOs. However, it is essential to investigate whether the dynamic population approach increases the computational complexity. Note that this experiment excluded the selected MOEAs because they are implemented in PISA, whereas the rest of the algorithms (DMOPSO and the MOPSOs) are implemented in Matlab. Different implementations would produce unfair comparisons. In addition, it is known that PSO has a faster convergence speed than evolutionary algorithms; hence, it would also be unfair to perform this experiment on MOEAs.


Fig. 31. Box plot based upon the additive binary epsilon indicator (Iε+ values) on test function DTLZ2 (algorithms 1–2 are referred to as NSGA-II and SPEA2, respectively).

Fig. 32. Box plot based upon the additive binary epsilon indicator (Iε+ values) on test function DTLZ3 (algorithms 1–2 are referred to as NSGA-II and SPEA2, respectively).

Fig. 33. Pareto fronts produced by (a) DMOPSO, (b) NSGA-II, and (c) SPEA2 on test function DTLZ1.

Fig. 34. Pareto fronts produced by (a) DMOPSO, (b) NSGA-II, and (c) SPEA2 on test function DTLZ2.

This investigation compares the number of fitness evaluations and the time needed by DMOPSO and the selected MOPSOs to achieve a targeted generational distance [39] (GD) of 0.001 on the selected test problems. To avoid any MOPSO consuming excessive computations to reach

the goal set in GD, a limit of 500 000 fitness evaluations is imposed as the stopping criterion. To obtain the running time, the Matlab functions tic and toc are used to measure the time elapsed for each MOPSO. Each MOPSO performs 30 independent runs to collect the statistical results. All parameter settings


Fig. 35. Pareto fronts produced by (a) DMOPSO, (b) NSGA-II, and (c) SPEA2 on test function DTLZ3. TABLE IX AVERAGE NUMBER OF EVALUATIONS AND TIME REQUIRED PER RUN FOR ALL TEST PROBLEMS FROM ALL SELECTED ALGORITHMS AND DMOPSO TO ACHIEVE GD = 0.001

for the chosen MOPSOs are the same as those shown in Tables I and II. Table IX presents the average number of fitness evaluations and the average time needed per run for DMOPSO and all of the selected MOPSOs. The table shows that DMOPSO demands the smallest average number of fitness evaluations and the shortest average running time among the compared MOPSOs to reach the desired GD value for ZDT2, ZDT4, ZDT6, and DTLZ2. Except for ZDT1 and ZDT2, EM-MOPSO succeeds in finding the Pareto front with the targeted GD value within 500 000 fitness evaluations and with the least running time. Regarding the time required, the results show that DMOPSO, OMOPSO, MOPSO, and EM-MOPSO need shorter running times than sMOPSO and NSPSO. Overall, DMOPSO saves at least 8% of the computational cost, in terms of both the number of fitness evaluations and the running time, on some test problems.

V. DISCUSSION AND CONCLUSION

In this paper, a multiple-swarm MOPSO algorithm, DMOPSO, is proposed. The proposed DMOPSO incorporates

the following features: a cell-based rank and density estimation scheme to quickly update the locations of new particles in the objective space and to provide easy access to the rank and density information of the particles; a population-growing strategy that adaptively adds new particles with enhanced exploration and exploitation capabilities; a population-declining strategy to balance and control the dynamic population size; and adaptive local archives to improve the selection of group leaders so that each swarm produces a better distributed Pareto front. A comparative study of DMOPSO and five state-of-the-art MOPSOs on six benchmark test problems is presented. The results under two performance metrics clearly indicate that DMOPSO is highly competitive and even outperforms most of the selected MOPSOs. In addition, qualitative results show that DMOPSO produces relatively better Pareto fronts than most of the selected MOPSOs on all six benchmark test functions. In terms of computational cost, DMOPSO requires fewer fitness evaluations and less time than the selected MOPSOs on some of the selected benchmark test problems. When comparing the performance of



DMOPSO against NSGA-II and SPEA2 on the selected three-objective test problems, DMOPSO shows competitive results, although it is not as robust as desired. The dynamic population strategy contributes to the improved performance for the following reasons. First, it provides flexibility in preserving good particles and removing those that will not contribute to the search process in subsequent iterations. Second, the design guarantees the growth of new particles that will either improve the search process or land in unexplored regions to discover better solutions; this speeds up the discovery of better solutions and indirectly saves computational cost. Third, the multiple-swarm approach provides a degree of local search, which greatly enhances the quality of convergence toward the optimal Pareto front. To avoid excessive use of local search, an adaptive local archive is incorporated to maintain the diversity within each swarm, at least in a local sense. One weakness of DMOPSO lies in its parameter settings and the question of how to choose the parameters optimally. We suggest fixing some of the parameters, such as lnp = 1, rud = 0.7, rld = 0.02, and ppv = 10. For the grid scale K, we suggest starting at 100 and then tuning the value up or down, depending on the resolution needed for the resulting Pareto front. The setting of parameter Ka depends on the number of subswarms: if the number of swarms is high, Ka can be tuned down, and vice versa. Finally, the parameters r3, unp, and the selection ratio are interdependent; currently, these parameters are selected ad hoc. Follow-up research in the near future will improve the DMOPSO design to require fewer user-defined parameters and, at the same time, improve the reliability of producing good solutions.
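For convenience, the suggested settings above can be collected into a single starting configuration. The sketch below is in Python (the paper's experiments use Matlab); the keys follow the paper's notation, and the values of Ka, r3, unp, and the selection ratio are deliberately omitted because, as noted, they are problem-dependent.

```python
# Suggested DMOPSO starting parameters from the discussion above.
# K and Ka should still be tuned per problem; r3, unp, and the
# selection ratio are interdependent and are not given defaults here.
dmopso_defaults = {
    "lnp": 1,     # fixed, per the suggestion above
    "rud": 0.7,   # fixed
    "rld": 0.02,  # fixed
    "ppv": 10,    # fixed
    "K": 100,     # grid scale: start at 100, tune for front resolution
    # "Ka": problem-dependent (tune down when the number of swarms is high)
}
```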
Certain design elements in DMOPSO will be revised: the population-growing strategy will be modified to require fewer or no user-defined parameters; the adaptive local archives will be redesigned using the local crowding distance technique proposed in NSGA-II [1], which eliminates the Ka parameter; the stmax parameter will be removed by using a random number with equal probability to determine when to exchange the global leaders among the particles; and (14) (the likelihood to remove particles) will be redesigned so that the decision to remove a particle depends only on the likelihood equation. Several publications [40]–[42] have successfully applied PSO to combinatorial optimization problems such as the multiobjective knapsack problem and the traveling salesman problem. It will be interesting to study how well DMOPSO handles these combinatorial optimization problems.

ACKNOWLEDGMENT

The authors would like to thank the four anonymous reviewers for their constructive comments that helped enhance the quality and presentation of this paper.

REFERENCES

[1] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II," IEEE Trans. Evol. Comput., vol. 6, no. 2, pp. 182–197, Apr. 2002.

[2] E. Zitzler, M. Laumanns, and L. Thiele, "SPEA2: Improving the strength Pareto evolutionary algorithm," Swiss Federal Inst. Technol., Zurich, Switzerland, TIK-Rep. 103, 2001.
[3] J. D. Knowles and D. W. Corne, "Approximating the nondominated front using the Pareto archived evolution strategy," Evol. Comput., vol. 8, no. 2, pp. 149–172, Jun. 2000.
[4] X. Hu and R. C. Eberhart, "Multiobjective optimization using dynamic neighborhood particle swarm optimization," in Proc. Congr. Evol. Comput., Honolulu, HI, 2002, pp. 1677–1681.
[5] T. Ray and K. M. Liew, "A swarm metaphor for multiobjective design optimization," Eng. Optim., vol. 34, no. 2, pp. 141–153, Mar. 2002.
[6] L. B. Zhang, C. G. Zhou, X. H. Liu, Z. Q. Ma, M. Ma, and Y. C. Liang, "Solving multiobjective problems using particle swarm optimization," in Proc. Congr. Evol. Comput., Canberra, Australia, 2003, pp. 2400–2405.
[7] S. Mostaghim and J. Teich, "Strategies for finding good local guides in multi-objective particle swarm optimization," in Proc. IEEE Swarm Intell. Symp., Indianapolis, IN, 2003, pp. 26–33.
[8] J. Branke and S. Mostaghim, "About selecting the personal best in multiobjective particle swarm optimization," in Proc. Conf. Parallel Probl. Solving Nature, Reykjavik, Iceland, 2006, pp. 523–532.
[9] X. Hu, R. C. Eberhart, and Y. Shi, "Particle swarm with extended memory for multiobjective optimization," in Proc. IEEE Swarm Intell. Symp., Indianapolis, IN, 2003, pp. 193–197.
[10] J. E. Fieldsend and S. Singh, "A multi-objective algorithm based upon particle swarm optimization, an efficient data structure and turbulence," in Proc. U.K. Workshop Comput. Intell., Birmingham, U.K., 2002, pp. 37–44.
[11] S. Mostaghim and J. Teich, "The role of ε-dominance in multi-objective particle swarm optimization methods," in Proc. Congr. Evol. Comput., Canberra, Australia, 2003, pp. 1764–1771.
[12] X. Li, "A non-dominated sorting particle swarm optimizer for multiobjective optimization," in Proc. Genetic Evol. Comput. Conf., 2003, pp. 37–48.
[13] C. A. Coello Coello and M. S. Lechuga, "MOPSO: A proposal for multiple objective particle swarm optimization," in Proc. Congr. Evol. Comput., Honolulu, HI, 2002, pp. 1051–1056.
[14] C. A. Coello Coello, G. Toscano Pulido, and M. S. Lechuga, "Handling multiple objectives with particle swarm optimization," IEEE Trans. Evol. Comput., vol. 8, no. 3, pp. 256–279, Jun. 2004.
[15] M. R. Sierra and C. A. Coello Coello, "Improving PSO-based multi-objective optimization using crowding, mutation and ε-dominance," in Proc. Evol. Multi-Criterion Optim. Conf., Guanajuato, Mexico, 2005, pp. 505–519.
[16] M. J. Reddy and D. N. Kumar, "An efficient multi-objective optimization algorithm based on swarm intelligence for engineering design," Eng. Optim., vol. 39, no. 1, pp. 49–68, Jan. 2007.
[17] G. Toscano Pulido and C. A. Coello Coello, "Using clustering techniques to improve the performance of a particle swarm optimizer," in Proc. Genetic Evol. Comput. Conf., Seattle, WA, 2004, pp. 225–237.
[18] X. Li, "Better spread and convergence: Particle swarm multiobjective optimization using the maximin fitness function," in Proc. Genetic Evol. Comput. Conf., Seattle, WA, 2004, pp. 117–128.
[19] S. Mostaghim and J. Teich, "Covering Pareto-optimal fronts by subswarms in multi-objective particle swarm optimization," in Proc. Congr. Evol. Comput., Portland, OR, 2004, pp. 1404–1411.
[20] R. L. Becerra, C. A. Coello Coello, A. G. Hernández-Diaz, R. Caballero, and J. Molina, "Alternative technique to solve hard multi-objective optimization problems," in Proc. Genetic Evol. Comput. Conf., London, U.K., 2007, pp. 757–764.
[21] J. Arabas, Z. Michalewicz, and J. Mulawka, "GAVaPS—A genetic algorithm with varying population size," in Proc. Congr. Evol. Comput., Orlando, FL, 1994, pp. 73–74.
[22] N. Zhuang, M. Benten, and P. Cheung, "Improved variable ordering of BDDs with novel genetic algorithm," in Proc. IEEE Int. Symp. Circuits Syst., Atlanta, GA, 1996, pp. 414–417.
[23] J. Grefenstette, "Optimization of control parameters for genetic algorithms," IEEE Trans. Syst., Man, Cybern., vol. SMC-16, no. 1, pp. 122–128, Jan. 1986.
[24] J. T. Alander, "On optimal population size of genetic algorithms," in Proc. IEEE Int. Conf. Comput. Syst. Softw. Eng., Silver Spring, MD, 1992, pp. 65–70.
[25] K. C. Tan, T. H. Lee, and E. F. Khor, "Evolutionary algorithms with dynamic population size and local exploration for multiobjective optimization," IEEE Trans. Evol. Comput., vol. 5, no. 6, pp. 565–588, Dec. 2001.
[26] G. G. Yen and H. Lu, "Dynamic multiobjective evolutionary algorithm: Adaptive cell-based rank and density estimation," IEEE Trans. Evol. Comput., vol. 7, no. 3, pp. 253–274, Jun. 2003.


[27] J. Kennedy and R. C. Eberhart, "Particle swarm optimization," in Proc. Int. Joint Conf. Neural Netw., Perth, Australia, 1995, pp. 1942–1948.
[28] Y. Shi and R. C. Eberhart, "A modified particle swarm optimizer," in Proc. Congr. Evol. Comput., Anchorage, AK, 1998, pp. 69–73.
[29] H. Lu and G. G. Yen, "Rank-density-based multiobjective genetic algorithm and benchmark test function study," IEEE Trans. Evol. Comput., vol. 7, no. 4, pp. 325–343, Aug. 2003.
[30] C. A. Coello Coello and E. M. Montes, "Constraint-handling in genetic algorithms through the use of dominance-based tournament selection," Adv. Eng. Inf., vol. 16, no. 3, pp. 193–203, Jul. 2002.
[31] J. E. Alvarez-Benitez, R. M. Everson, and J. E. Fieldsend, "A MOPSO algorithm based exclusively on Pareto dominance concepts," in Proc. Evol. Multi-Criterion Optim. Conf., Guanajuato, Mexico, 2005, pp. 459–473.
[32] J. D. Knowles and D. W. Corne, "M-PAES: A memetic algorithm for multiobjective optimization," in Proc. Congr. Evol. Comput., La Jolla, CA, 2000, pp. 325–332.
[33] E. Zitzler, "Evolutionary algorithms for multiobjective optimization: Methods and applications," Ph.D. dissertation, Swiss Federal Inst. Technol., Zurich, Switzerland, 1999.
[34] J. D. Knowles, L. Thiele, and E. Zitzler, "A tutorial on performance assessment of stochastic multiobjective optimizers," Comput. Eng. Netw. Lab., ETH Zurich, Zurich, Switzerland, TIK-Rep. 214, Feb. 2006 (revised version).
[35] E. Zitzler, L. Thiele, M. Laumanns, C. M. Fonseca, and V. G. da Fonseca, "Performance assessment of multiobjective optimizers: An analysis and review," IEEE Trans. Evol. Comput., vol. 7, no. 2, pp. 117–132, Apr. 2003.
[36] E. Zitzler, K. Deb, and L. Thiele, "Comparison of multiobjective evolutionary algorithms: Empirical results," Evol. Comput., vol. 8, no. 2, pp. 173–195, 2000.
[37] K. Deb, L. Thiele, M. Laumanns, and E. Zitzler, "Scalable test problems for evolutionary multi-objective optimization," Swiss Federal Inst. Technol., Zurich, Switzerland, Tech. Rep. 112, 2001.
[38] PISA website. [Online]. Available: http://www.tik.ee.ethz.ch/pisa/
[39] D. A. Van Veldhuizen and G. B. Lamont, "Multiobjective evolutionary algorithm research: A history and analysis," Dept. Elect. Comput. Eng., Air Force Inst. Technol., Wright-Patterson AFB, OH, Tech. Rep. TR-98-03, 1998.
[40] C. Grosan, C. Dumitrescu, and D. Lazar, "A particle swarm optimization for solving 0/1 knapsack problem," in Proc. Int. Conf. Comput. Commun., Oradea, Romania, 2004, pp. 200–204.
[41] H. S. Lopes and L. S. Coelho, "Particle swarm optimization with fast local search for the blind traveling salesman problem," in Proc. Int. Conf. Hybrid Intell. Syst., Sao Paulo, Brazil, 2005, pp. 245–250.
[42] S. A. Zonouz, J. Habibi, and M. Saniee, "A hybrid PS-based optimization algorithm for solving traveling salesman problem," in Proc. IEEE Int. Symp. Front. Netw. Appl., Vienna, Austria, 2006, pp. 245–250.


Wen-Fung Leong received the B.S. and M.S. degrees in electrical engineering from Oklahoma State University, Stillwater, in 2000 and 2002, respectively, where she is currently working toward the Ph.D. degree in electrical engineering in the School of Electrical and Computer Engineering. Her current research interests include feature extraction, neural networks, and evolutionary computation. Ms. Leong is a Student Member of the IEEE Computational Intelligence Society.

Gary G. Yen (S’87–M’88–SM’97) received the Ph.D. degree in electrical and computer engineering from the University of Notre Dame, Notre Dame, IN, in 1992. He is currently a Professor with the School of Electrical and Computer Engineering, Oklahoma State University, Stillwater. Before he joined OSU in 1997, he was with the Structure Control Division, U.S. Air Force Research Laboratory, Albuquerque, NM. His research has been supported by the U.S. Department of Defense, Department of Energy, Environmental Protection Agency, the National Aeronautics and Space Administration, the National Science Foundation, and the process industry. His research interests include intelligent control, computational intelligence, conditional health monitoring, signal processing, and their industrial/defense applications. Dr. Yen was an Associate Editor for the IEEE TRANSACTIONS ON NEURAL NETWORKS, IEEE CONTROL SYSTEMS MAGAZINE, IEEE TRANSACTIONS ON CONTROL SYSTEMS TECHNOLOGY, IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS—PART B, and Automatica. He is currently serving as an Associate Editor for the IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION and the IFAC journal Mechatronics. He served as the General Chair for the 2003 IEEE International Symposium on Intelligent Control held in Houston, TX, and the 2006 IEEE World Congress on Computational Intelligence held in Vancouver, Canada. He served as Vice President for the Technical Activities, IEEE Computational Intelligence Society, and is the Founding Editor-in-Chief of the IEEE COMPUTATIONAL INTELLIGENCE MAGAZINE.