Comput Optim Appl (2015) 62:851–890
DOI 10.1007/s10589-015-9752-6

Feasibility preserving constraint-handling strategies for real parameter evolutionary optimization
Nikhil Padhye^1 · Pulkit Mittal^2 · Kalyanmoy Deb^3
Received: 22 May 2014 / Published online: 9 May 2015
© Springer Science+Business Media New York 2015
Abstract Evolutionary algorithms (EAs) are routinely applied to a variety of optimization tasks, and real-parameter optimization in the presence of constraints is one such important area. During constrained optimization EAs often create solutions that fall outside the feasible region; hence a viable constraint-handling strategy is needed. This paper focuses on the class of constraint-handling strategies that repair infeasible solutions by bringing them back into the search space and explicitly preserve feasibility of the solutions. Several existing constraint-handling strategies are studied, and two new single-parameter constraint-handling methodologies based on a parent-centric, inverse parabolic (IP) probability distribution are proposed. The existing and newly proposed constraint-handling methods are first studied with PSO, DE and GAs, and simulation results on four scalable test problems under different location settings of the optimum are presented. The newly proposed constraint-handling methods exhibit robust performance and also succeed on search spaces comprising up to 500 variables while locating the optimum within an error of 10^-10. The working principle of the IP-based methods is also demonstrated on (i) some generic constrained optimization problems, and (ii) a classic 'Weld' problem from structural design and mechanics. The successful performance of the proposed methods clearly exhibits their efficacy as a generic constraint-handling strategy for a wide range of applications.

Corresponding author: Nikhil Padhye, [email protected]
Pulkit Mittal, [email protected]
Kalyanmoy Deb

1 Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
2 Department of Electrical Engineering, Indian Institute of Technology Kanpur, Kanpur 208016, UP, India
3 Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI 48824, USA
Keywords Constraint-handling · Nonlinear and constrained optimization · Particle swarm optimization · Real-parameter genetic algorithms · Differential evolution
1 Introduction
Optimization problems are widespread in several domains of science and engineering. The usual goal is to minimize or maximize some pre-defined objective(s). Most real-world scenarios place certain restrictions on the variables of the problem, i.e., the variables need to satisfy certain pre-defined constraints to realize an acceptable solution.
The most general form of a constrained optimization problem (with inequality constraints, equality constraints and variable bounds) can be written as a nonlinear programming (NLP) problem:
$$
\begin{aligned}
\text{Minimize} \quad & f(\mathbf{x}) \\
\text{subject to} \quad & g_j(\mathbf{x}) \ge 0, \qquad j = 1, \ldots, J, \\
& h_k(\mathbf{x}) = 0, \qquad k = 1, \ldots, K, \\
& x_i^{(L)} \le x_i \le x_i^{(U)}, \quad i = 1, \ldots, n.
\end{aligned} \qquad (1)
$$

The NLP problem defined above contains n decision variables (i.e., x is a vector of size n), J greater-than-or-equal-to type inequality constraints (less-than-or-equal-to constraints can be expressed in this form by multiplying both sides by −1), and K equality-type constraints. The problem variables x_i are bounded by the lower (x_i^(L)) and upper (x_i^(U)) limits. When only the variable bounds are specified, the constraint-handling strategies are often termed boundary-handling methods.^1

^1 For the rest of the paper, by constraint-handling we imply tackling all of the following: variable bounds, inequality constraints and equality constraints. And, by a feasible solution it is implied that the solution satisfies all the variable bounds, inequality constraints, and equality constraints. The main contribution of the paper is to propose an efficient constraint-handling method that operates on and generates only feasible solutions during optimization.

In classical optimization, the task of constraint-handling has been addressed in a variety of ways: (i) using the penalty approach developed by Fiacco and McCormick [13], which degrades the function value in the regions outside the feasible domain; (ii) using barrier methods, which operate in a similar fashion but strongly degrade the function values as the solution approaches a constraint boundary from inside the feasible space; (iii) performing search in feasible directions using methods such as gradient projection, reduced gradient and Zoutendijk's approach [28]; and (iv) using the augmented Lagrangian formulation of the problem, as commonly done in linear programming and sequential quadratic programming (SQP). For a detailed account of
these methods, along with their implementation and convergence characteristics, the reader is referred to [5,14,22]. The classical optimization methods reliably and effectively solve convex constrained optimization problems while ensuring convergence, and are therefore widely used in such scenarios. However, the same is not true in the presence of non-convexity. The goal of this paper is to address the issue of constraint-handling for evolutionary algorithms in real-parameter optimization, without any limitation to convexity or to a special form of the constraints or objective functions.
In the context of evolutionary algorithms, constraint-handling has been addressed by a variety of methods, including ideas borrowed from the classical techniques. These include: (i) use of penalty functions to degrade the fitness values of infeasible solutions such that the degraded solutions are given less emphasis during the evolutionary search. A common challenge in employing such penalty methods arises from choosing an appropriate penalty parameter (R) that strikes the right balance between the objective function value, the amount of constraint violation and the associated penalty. Usually, in EA studies, a trial-and-error method is employed to estimate R. A study [7] in 2000 suggested a parameter-less approach for implementing the penalty function concept in population-based optimization methods. A recent bi-objective method [4] was reported to find appropriate R values adaptively during the optimization process. Other studies [26,27] have employed concepts from multi-objective optimization by simultaneously considering the minimization of the constraint violation and the optimization of the objective function. (ii) Use of feasibility preserving operators: for example, in [15] specialized operators were proposed to create new, feasible-only individuals from feasible parents in the presence of linear constraints. In another example, generation of feasible child solutions within the variable bounds was achieved through the simulated binary crossover (SBX) [6] and polynomial mutation operators [8]. The explicit feasibility of child solutions was ensured by redistributing the probability distribution function in such a manner that the infeasible regions were assigned a zero probability for child-creation [7]. Although explicit creation of feasible-only solutions during an EA search is an attractive proposition, it may not always be possible, since generic crossover or mutation operators and other standard EAs do not guarantee creation of feasible-only solutions. (iii) Deployment of repair strategies that bring an infeasible solution back into the feasible domain. Recent studies [3,10,11,19] investigated the issue of constraint-handling through repair techniques in the context of PSO and DE, and showed that repair mechanisms can introduce a bias in the search and hinder exploration. Several repair methods proposed in the context of PSO [1,12,16] exploit information about the location of the optimum and fail to perform when the location of the optimum changes [17]. These issues are universal and often encountered with all EAs (as shown in later sections). Furthermore, the choice of the evolutionary optimizer, the constraint-handling strategy, and the location of the optimum with respect to the search space all play an important role in the optimization task. To this end, the authors have realized a need for a reliable and effective repair strategy that explicitly preserves feasibility. An ideal evolutionary optimizer (an evolutionary algorithm and its constraint-handling strategy) should be robust in terms of finding the optimum, irrespective of the location of the optimum in the search space. In the rest of the paper, the term constraint-handling strategy refers to explicit feasibility preserving repair techniques.
First we review the existing constraint-handling strategies and then propose two new constraint-handling schemes, namely, the Inverse Parabolic Methods (IPMs). Several existing and newly proposed constraint-handling strategies are first tested on a class of benchmark unimodal problems with variable-bound constraints. Studying the performance of constraint-handling strategies on problems with variable bounds allows us to gain a better understanding of their operating principles in a simple setting. Particle swarm optimization, differential evolution and real-coded genetic algorithms are chosen as the evolutionary optimizers to study the performance of the different constraint-handling strategies. By choosing different evolutionary optimizers, a better understanding of the functioning of constraint-handlers embedded in the evolutionary framework can be gained. Both the search algorithm and the constraint-handling strategy must operate efficiently and synergistically in order to successfully carry out the optimization task. It is shown that constraint-handling methods possessing an inherent pre-disposition, in terms of bringing infeasible solutions back into specific regions of the feasible domain, perform poorly. Deterministic constraint-handling strategies, such as those setting the solutions on the constraint boundaries, result in a loss of population diversity. On the other hand, random methods that bring the solutions back into the search space arbitrarily lead to a complete loss of all useful information carried by the solutions. A balanced approach that utilizes the useful information from the solutions and brings them back into the search space in a meaningful way is desired. The newly proposed IPMs are motivated by these considerations. The stochastic and adaptive components of the IPMs (utilizing the information of the solutions' feasible and infeasible locations), and a user-defined parameter (α), render them quite effective.
The rest of the paper is organized as follows: Sect. 2 reviews existing constraint-handling techniques commonly employed for problems with variable bounds. Section 3 provides a detailed description of the two newly proposed IPMs. Section 4 describes the benchmark test problems and several simulations performed with PSO, GAs and DE using different constraint-handling techniques. Section 5 considers optimization problems with a larger number of variables. Section 6 shows the extension and applicability of the proposed IPMs to generic constrained problems. Finally, conclusions and the scope for future work are discussed in Sect. 7.
2 Feasibility preserving constraint-handling approaches for optimization problems with variable bounds
Several constraint-handling strategies have been proposed to bring solutions back into the feasible region when constraints manifest as variable bounds. Some of these strategies can also be extended in the presence of general constraints. An exhaustive recollection and comparison of all constraint-handling techniques is beyond the scope of this study. Rather, we focus our discussion on the popular and representative constraint-handling techniques.
The existing constraint-handling methods for problems with variable bounds can be broadly categorized into two groups: Group A techniques perform the feasibility check variable-wise, and Group B techniques perform the feasibility check vector-wise. According to Group A techniques, for every solution, each variable is tested for its feasibility with respect to its supplied bounds and made feasible if the corresponding bound is violated. Here, only the variables violating their corresponding bounds are altered, independently, and the other variables are kept unchanged. According to Group B techniques, if a solution (represented as a vector) is found to violate any of the variable bounds, it is brought back into the search space along a vector direction into the feasible space. In such cases, the variables that do not explicitly violate their own bounds may also get modified.
It is speculated that for variable-wise separable problems, that is, problems where the variables are not linked to one another, techniques belonging to Group A are likely to perform well. However, for problems with high correlation amongst the variables (usually referred to as linked problems), Group B techniques are likely to be more useful. Next, we provide a description of these constraint-handling methods in detail.^2

^2 The implementation of several strategies as C codes can be obtained by emailing [email protected] or [email protected].
2.1 Random approach
This is one of the simplest and commonly used approaches for handling boundary violations in EAs [3]. This approach belongs to Group A. Each variable is checked for a boundary violation, and if the variable bound is violated by the current position, say x_i^c, then x_i^c is replaced with a randomly chosen value y_i in the range [x_i^(L), x_i^(U)], as follows:

$$y_i = \text{random}\left[x_i^{(L)},\, x_i^{(U)}\right]. \qquad (2)$$

[Fig. 1: Variable-wise random approach for handling bounds]

Figure 1 illustrates this approach. Due to the random choice of the feasible location, this approach explicitly maintains diversity in the EA population.
2.2 Periodic Approach
This strategy assumes a periodic repetition of the objective function and the constraints with a period p = x_i^(U) − x_i^(L). This is carried out by mapping a violated variable x_i^c in the range [x_i^(L), x_i^(U)] to y_i, as follows:

$$y_i = \begin{cases} x_i^{(U)} - \left(x_i^{(L)} - x_i^c\right) \,\%\, p, & \text{if } x_i^c < x_i^{(L)},\\[2pt] x_i^{(L)} + \left(x_i^c - x_i^{(U)}\right) \,\%\, p, & \text{if } x_i^c > x_i^{(U)}. \end{cases} \qquad (3)$$

In the above equation, % refers to the modulo operator. Figure 2 describes the periodic approach. The above operation brings back an infeasible solution in a structured manner to the feasible region. In contrast to the random method, the periodic approach is too methodical, and it is unclear whether such a repair mechanism is supportive of preserving any meaningful information of the solutions that have created the infeasible solution. This approach belongs to Group A.

[Fig. 2: Variable-wise periodic approach for handling bounds]

2.3 SetOnBoundary approach

As the name suggests, according to this strategy a violated variable is reset on the bound of the variable which it violates:

$$y_i = \begin{cases} x_i^{(L)}, & \text{if } x_i^c < x_i^{(L)},\\[2pt] x_i^{(U)}, & \text{if } x_i^c > x_i^{(U)}. \end{cases} \qquad (4)$$
Clearly this approach forces all violated solutions to lie on the lower or on the upper boundaries, as the case may be. Intuitively, this approach will work well on the problems when the optimum of the problem lies exactly on one of the variable boundaries. This approach belongs to Group A.
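For concreteness, the following C sketch collects the three variable-wise (Group A) rules just described: the random re-sampling of Eq. (2), the periodic wrap-around of Eq. (3) and the SetOnBoundary clipping of Eq. (4). The function signature, the `mode` switch and the `rand01()` helper are illustrative conventions, not the authors' released C code.

```c
/* Sketch of the three variable-wise (Group A) repairs of Sects. 2.1-2.3:
 * random (Eq. 2), periodic (Eq. 3) and SetOnBoundary (Eq. 4).
 * rand01() is an assumed helper returning a uniform deviate in [0,1]. */
#include <math.h>
#include <stdlib.h>

static double rand01(void) { return rand() / (double)RAND_MAX; }

/* Repairs one variable xc with bounds [xl, xu]; 'mode' selects the strategy. */
double repair_variable(double xc, double xl, double xu, int mode)
{
    if (xc >= xl && xc <= xu) return xc;       /* already feasible */
    double p = xu - xl;                        /* period used by Eq. (3) */
    switch (mode) {
    case 0:  /* random: uniform re-sampling inside the bounds (Eq. 2) */
        return xl + rand01() * p;
    case 1:  /* periodic: wrap the violation around the opposite bound (Eq. 3) */
        return (xc < xl) ? xu - fmod(xl - xc, p)
                         : xl + fmod(xc - xu, p);
    default: /* SetOnBoundary: clip onto the violated bound (Eq. 4) */
        return (xc < xl) ? xl : xu;
    }
}
```

A caller would apply such a function only to the coordinates that actually violate their bounds, leaving the remaining variables untouched, as required for Group A methods.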
2.4 Exponentially confined (Exp-C) approach
This method was proposed in [1]. According to this approach, a particle is brought back inside the feasible search space variable-wise, in the region between its old position and the violated bound. The new location is created in such a manner that higher sampling probabilities are assigned to the regions near the violated boundary. The developers suggested the use of an exponential probability distribution, shown in Fig. 3. The motivation of this approach is based on the hypothesis that a newly created infeasible point violates a particular variable boundary because the optimum solution lies closer to that variable boundary. Thus, this method will probabilistically create more solutions closer to the boundaries, unless the optimum lies well inside the restricted search space. This approach belongs to Group A.

[Fig. 3: Variable-wise exponentially confined (Exp-C) approach for handling bounds]

Assuming that the exponential distribution is p(x_i) = A exp(|x_i − x_i^p|), the value of A can be obtained by integrating the probability from x_i = x_i^p to x_i = x_i^(B) (where B = L or U, as the case may be). Thus, the probability distribution is given as p(x_i) = exp(|x_i − x_i^p|)/(exp(|x_i^(B) − x_i^p|) − 1). For any random number r within [0, 1], the feasible solution is calculated as follows:

$$y_i = \begin{cases} x_i^p - \ln\!\left(1 + r\left(e^{\,x_i^p - x_i^{(L)}} - 1\right)\right), & \text{if } x_i^c < x_i^{(L)},\\[2pt] x_i^p + \ln\!\left(1 + r\left(e^{\,x_i^{(U)} - x_i^p} - 1\right)\right), & \text{if } x_i^c > x_i^{(U)}. \end{cases} \qquad (5)$$
2.5 Exponential spread (Exp-S) approach
This is a variation of the above approach, in which, instead of confining the probability to lie between x_i^p and the violated boundary, the exponential probability is spread over the entire feasible region, that is, the probability is distributed from the lower boundary to the upper boundary with an increasing probability towards the violated boundary. This requires replacing x_i^p with x_i^(U) (when the lower boundary is violated) or x_i^(L) (when the upper boundary is violated) in Eq. (5), as follows:
$$y_i = \begin{cases} x_i^{(U)} - \ln\!\left(1 + r\left(e^{\,x_i^{(U)} - x_i^{(L)}} - 1\right)\right), & \text{if } x_i^c < x_i^{(L)},\\[2pt] x_i^{(L)} + \ln\!\left(1 + r\left(e^{\,x_i^{(U)} - x_i^{(L)}} - 1\right)\right), & \text{if } x_i^c > x_i^{(U)}. \end{cases} \qquad (6)$$
The probability distribution is shown in Fig. 4. This approach also belongs to Group A.
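Both exponential rules can be sampled directly through the closed-form expressions of Eqs. (5) and (6). The C sketch below does this for a single violated variable; the `spread` flag switching between Exp-C and Exp-S, and the helper names, are assumptions made for illustration only.

```c
/* Sketch of the exponential repairs of Eqs. (5) and (6). A violated variable
 * xc is re-sampled between the parent value xp (Exp-C) or the opposite bound
 * (Exp-S) and the violated boundary, with higher density near that boundary.
 * rand01() is an assumed uniform [0,1] helper. */
#include <math.h>
#include <stdlib.h>

static double rand01(void) { return rand() / (double)RAND_MAX; }

double repair_exponential(double xc, double xp, double xl, double xu, int spread)
{
    if (xc >= xl && xc <= xu) return xc;            /* nothing to repair */
    double r = rand01();
    if (xc < xl) {                                  /* lower bound violated */
        double start = spread ? xu : xp;            /* Exp-S starts at xu   */
        return start - log(1.0 + r * (exp(start - xl) - 1.0));   /* Eq. (5)/(6) */
    } else {                                        /* upper bound violated */
        double start = spread ? xl : xp;            /* Exp-S starts at xl   */
        return start + log(1.0 + r * (exp(xu - start) - 1.0));
    }
}
```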
[Fig. 4: Variable-wise exponential spread (Exp-S) approach for handling bounds]
[Fig. 5: Vector-based Shrink strategy for handling bounds]
2.6 Shrink approach
This is a vector-wise approach, belonging to Group B, in which the violated solution is set on the intersection point of the line joining the parent point x_not and the child point x_c with the violated boundary. Mathematically, the mapped vector y is created as follows:

$$\mathbf{y} = \mathbf{x}_{not} + \beta\,(\mathbf{x}_c - \mathbf{x}_{not}), \qquad (7)$$

where β is computed as the minimum of all positive values of the intercepts (x_i^(L) − x_{i,not})/(x_i^c − x_{i,not}) for a violated boundary x_i^(L) and (x_i^(U) − x_{i,not})/(x_i^c − x_{i,not}) for a violated boundary x_i^(U). This operation is shown in Fig. 5. In the case shown, β needs to be computed for the variable bound x_2^(U) only.
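A minimal C sketch of the Shrink repair of Eq. (7) is given below. It assumes the parent vector x_not is feasible; the factor β is then the largest fraction of the step from x_not towards x_c that still satisfies every violated bound. Function and argument names are illustrative.

```c
/* Sketch of the vector-wise Shrink repair (Eq. 7): the violated child xc is
 * pulled back along the line from the feasible parent xnot until it touches
 * the first violated bound. */
double shrink_repair(int n, const double *xnot, const double *xc,
                     const double *xl, const double *xu, double *y)
{
    double beta = 1.0;                       /* 1.0 keeps xc if it is feasible */
    for (int i = 0; i < n; i++) {
        double dir = xc[i] - xnot[i];
        if (xc[i] < xl[i]) {                 /* lower bound of variable i violated */
            double b = (xl[i] - xnot[i]) / dir;
            if (b > 0.0 && b < beta) beta = b;
        } else if (xc[i] > xu[i]) {          /* upper bound of variable i violated */
            double b = (xu[i] - xnot[i]) / dir;
            if (b > 0.0 && b < beta) beta = b;
        }
    }
    for (int i = 0; i < n; i++)
        y[i] = xnot[i] + beta * (xc[i] - xnot[i]);   /* Eq. (7) */
    return beta;
}
```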
3 Proposed inverse parabolic (IP) constraint-handling methods
The exponential probability distribution function described in the previous section brings violated solutions back into the allowed range variable-wise, but ignores the distance of the violated solution x_i^c with respect to the violated boundary. The distance from the violated boundary carries useful information for remapping the violated solution into the feasible region. One way to utilize this distance information is to bring solutions back into the allowed range with a higher probability closer to the
boundary when the fallen-out distance (d_v, as shown in Fig. 6) is small. In situations when points are too far outside the allowable range, that is, when the fallen-out distance d_v is large, particles are brought back more uniformly inside the feasible range. Importantly, when the fallen-out distance d_v is small (meaning that the violated child solution is close to the variable boundary), the repaired point is also close to the violated boundary but on the feasible side. Therefore, the nature of the distribution should become more and more like a uniform distribution as the fallen-out distance d_v becomes large.
[Fig. 6: Vector-based inverse parabolic methods]

Let us consider Fig. 6, which shows a violated solution x_c and its parent solution x_p. Let d_p = ‖x_c − x_p‖ denote the distance between the violated solution and the parent solution. Let v and u be the intersection points of the line joining x_c and x_p with the violated boundary and the non-violated boundary, respectively. The corresponding distances of these two points from x_c are d_v and d_u, respectively. Clearly, the violated distance is d_v = ‖x_c − v‖. We now define an inverse parabolic probability distribution function from x_c along the direction (x_p − x_c) as:

$$p(d) = \frac{A}{(d - d_v)^2 + \alpha^2 d_v^2}, \qquad d_v \le d \le a, \qquad (8)$$

where a is the upper bound of d allowed by the constraint-handling scheme (we define a later) and α is a pre-defined parameter. By calculating the cumulative probability and equating it to one, we find:
$$A = \frac{\alpha d_v}{\tan^{-1}\!\left(\dfrac{a - d_v}{\alpha d_v}\right)}.$$

The probability is maximum at d = d_v (at the violated boundary) and reduces as the solution enters the allowable range. Although this characteristic was also present in the exponential distribution, the above probability distribution is also a function of the violated distance d_v, which acts like a variance of the probability distribution. If d_v is small, then the variance of the distribution is small, thereby resulting in a localized effect of creating a mapped solution. For a random number r ∈ [0, 1], the distance of the mapped solution from x_c in the allowable range [d_v, d_u] is given as follows:

$$d = d_v + \alpha d_v \tan\!\left(r \tan^{-1}\!\left(\frac{a - d_v}{\alpha d_v}\right)\right). \qquad (9)$$

The corresponding mapped solution is as follows:

$$\mathbf{y} = \mathbf{x}_c + d\,\frac{\mathbf{x}_p - \mathbf{x}_c}{\|\mathbf{x}_p - \mathbf{x}_c\|}. \qquad (10)$$
Note that the IP method makes a vector-wise operation and is sensitive to the relative locations of the infeasible solution, the parent solution, and the violated boundary.
The parameter α has a direct external effect of inducing a small or large variance in the above probability distribution. If α is large, the variance is large, thereby yielding a uniform-like distribution. Later we shall study the effect of the parameter α. A value of α = 1.2 is found to work well in most of the problems and is recommended. Next,
we describe two particular constraint-handling schemes employing this probability distribution.
3.1 Inverse parabolic confined (IP-C) method

In this approach, the probability distribution is confined between d ∈ [d_v, d_p], thereby making a = d_p. Here, a mapped solution y lies strictly between the violated boundary location (v) and the parent (x_p).
3.2 Inverse parabolic spread (IP-S) method
Here, the mapped solution is allowed to lie in the entire feasible range between v and u along the vector (x_p − x_c), but more emphasis is given to relocating the child near the violated boundary. The solution can be found by using Eqs. (9) and (10), and by setting a = d_u.
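The following C sketch puts the pieces of the proposed IP repair together for box constraints: it finds the distances d_v and d_u at which the line from the infeasible child x_c towards its feasible parent x_p crosses the violated and the non-violated boundaries, samples d as in Eq. (9) with a = d_p (IP-C) or a = d_u (IP-S), and maps the solution back with Eq. (10). This is a minimal sketch under the assumption of a feasible parent; the slab-crossing computation and helper names are illustrative, not the authors' implementation.

```c
/* Minimal sketch of the inverse parabolic (IP) repair, Eqs. (8)-(10).
 * upper = 0 gives IP-C (a = dp), upper = 1 gives IP-S (a = du). */
#include <math.h>
#include <stdlib.h>

static double rand01(void) { return rand() / (double)RAND_MAX; }

void ip_repair(int n, const double *xc, const double *xp,
               const double *xl, const double *xu,
               double alpha, int spread, double *y)
{
    /* distance to the parent along the unit direction (xp - xc)/dp */
    double dp = 0.0;
    for (int i = 0; i < n; i++) dp += (xp[i] - xc[i]) * (xp[i] - xc[i]);
    dp = sqrt(dp);

    /* dv: distance at which the line re-enters the box through the violated
     * boundary; du: distance at which it would leave through the far side. */
    double dv = 0.0, du = 1e300;
    for (int i = 0; i < n; i++) {
        double e = (xp[i] - xc[i]) / dp;              /* direction component */
        if (fabs(e) < 1e-14) continue;
        double t1 = (xl[i] - xc[i]) / e;              /* crossing lower bound */
        double t2 = (xu[i] - xc[i]) / e;              /* crossing upper bound */
        double tin  = (t1 < t2) ? t1 : t2;            /* enter this slab      */
        double tout = (t1 < t2) ? t2 : t1;            /* leave this slab      */
        if (tin > dv) dv = tin;                       /* last slab entered    */
        if (tout < du) du = tout;                     /* first slab left      */
    }

    double a = spread ? du : dp;                      /* IP-S vs IP-C         */
    double r = rand01();
    double d = dv + alpha * dv * tan(r * atan((a - dv) / (alpha * dv)));  /* Eq. (9) */

    for (int i = 0; i < n; i++)
        y[i] = xc[i] + d * (xp[i] - xc[i]) / dp;      /* Eq. (10) */
}
```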
4 Results and discussions
In this study, we first choose four standard scalable unimodal test functions (in the presence of variable bounds): Ellipsoidal (Felp), Schwefel (Fsch), Ackley (Fack), and Rosenbrock (Fros), described as follows:
$$F_{elp} = \sum_{i=1}^{n} i\,x_i^2 \qquad (11)$$

$$F_{sch} = \sum_{i=1}^{n} \left( \sum_{j=1}^{i} x_j \right)^2 \qquad (12)$$

$$F_{ack} = -20 \exp\!\left(-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n} x_i^2}\right) - \exp\!\left(\frac{1}{n}\sum_{i=1}^{n} \cos(2\pi x_i)\right) + 20 + e \qquad (13)$$

$$F_{ros} = \sum_{i=1}^{n-1} \left[ 100\left(x_i^2 - x_{i+1}\right)^2 + (x_i - 1)^2 \right] \qquad (14)$$
In the unconstrained space, Felp, Fsch and Fack have a minimum at xi = 0, whereas
Fros has a minimum at xi = 1. All functions have minimum value F = 0. Felp is
the only variable-separable problem. Fros is a challenging test problem that has a ridge which poses difficulty for several optimizers. In all cases the number of variables is chosen to be n = 20.
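For reference, the four test functions of Eqs. (11)-(14) can be coded directly; a straightforward C version (0-based indexing, so the weight in Felp is i+1) is sketched below.

```c
/* Reference implementations of the four scalable test functions,
 * Eqs. (11)-(14), used in the numerical study (n variables). */
#include <math.h>

double f_elp(const double *x, int n) {               /* Eq. (11) */
    double s = 0.0;
    for (int i = 0; i < n; i++) s += (i + 1) * x[i] * x[i];
    return s;
}

double f_sch(const double *x, int n) {               /* Eq. (12) */
    double s = 0.0, partial = 0.0;
    for (int i = 0; i < n; i++) { partial += x[i]; s += partial * partial; }
    return s;
}

double f_ack(const double *x, int n) {               /* Eq. (13) */
    const double pi = acos(-1.0);
    double sq = 0.0, cs = 0.0;
    for (int i = 0; i < n; i++) { sq += x[i] * x[i]; cs += cos(2.0 * pi * x[i]); }
    return -20.0 * exp(-0.2 * sqrt(sq / n)) - exp(cs / n) + 20.0 + exp(1.0);
}

double f_ros(const double *x, int n) {               /* Eq. (14) */
    double s = 0.0;
    for (int i = 0; i < n - 1; i++) {
        double t = x[i] * x[i] - x[i + 1];
        s += 100.0 * t * t + (x[i] - 1.0) * (x[i] - 1.0);
    }
    return s;
}
```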
For each test problem three different scenarios corresponding to the relative location of the optimum with respect to the allowable search range are considered. This is done by selecting different variable bounds, as follows:
On the Boundary: Optimum is exactly on one of the variable boundaries (for Felp, Fsch and Fack, x_i ∈ [0, 10], and for Fros, x_i ∈ [1, 10]),
At the Center: Optimum is at the center of the allowable range (for Felp, Fsch and Fack, x_i ∈ [−10, 10], and for Fros, x_i ∈ [−8, 10]), and
Close to Boundary: Optimum is near the variable boundary, but not exactly on the boundary (for Felp, Fsch and Fack, x_i ∈ [−1, 10], and for Fros, x_i ∈ [0, 10]).

These three scenarios are shown in Fig. 7 for a two-variable problem having variable bounds x_i^(L) = 0 and x_i^(U) = 10. Although in practice the optimum can lie anywhere in the allowable range, the above three scenarios adequately represent the different possibilities that may exist in practice.

[Fig. 7: Location of the optimum for Felp: (a) on the boundary, (b) at the center, and (c) close to the edge of the boundary, obtained by selecting different search domains]
For each test problem, the population is initialized uniformly in the allowable range. We count the number of function evaluations needed for the algorithm to find a solution close to the known optimum solution, and we call this our evaluation criterion S. Choosing a high accuracy (i.e., a small value of S) as the termination criterion minimizes the chances of locating the optimum due to random effects, and provides a better insight into the behavior of a constraint-handling mechanism.
To eliminate the random effects and gather results of statistical importance, each algorithm is tested on a problem 50 times (each run starting with a different initial population). A particular run is terminated if the evaluation criterion S is met (noted as a successful run), or the number of function evaluations exceeds one million (noted as an unsuccessful run). If only a few out of the 50 runs are successful, then we report
the number of successful runs in brackets. In this case, the best, median and worst number of function evaluations are computed from the successful runs only. If none of the runs are successful, we denote this by marking (DNC) (Did Not Converge). In such cases, we report the best, median and worst attained function values of the best solution at the end of each run. To distinguish the unsuccessful results from successful ones, we present the fitness value information of the unsuccessful runs in italics.
An in-depth study on the constraint-handling techniques is carried out in this paper. Different locations of the optimum are selected and systematic comparisons are carried out for PSO, DE and GAs in Sects. 4.1, 4.3 and 4.4 , respectively.
4.1 Results with particle swarm optimization (PSO)
In PSO, the decision variable and the velocity terms are updated independently. Let us say that the initial position is x_t, the newly created position is infeasible and represented by x_{t+1}, and the repaired solution is denoted by y_t. If the velocity update is based on the infeasible solution as:

$$\mathbf{v}_{t+1} = \mathbf{x}_{t+1} - \mathbf{x}_t, \qquad (15)$$

then we refer to this as Velocity Unchanged. However, if the velocity update is based on the repaired location as:

$$\mathbf{v}_{t+1} = \mathbf{y}_t - \mathbf{x}_t, \qquad (16)$$

then we refer to this as Velocity Recomputed. This terminology is used for the rest of the paper. For the inverse parabolic (IP) and exponential (Exp) approaches, we use the Velocity Recomputed strategy only. We have also tried the Velocity Unchanged strategy with the IP and exponential approaches, but the results were not as good as those with the Velocity Recomputed strategy. For the SetOnBoundary approach, we use the Velocity Recomputed strategy and two other strategies discussed as follows.
Another strategy, named Velocity Reflection, is used, which simply implies that if a particle is set on the i-th boundary, then v_i^{t+1} is changed to −v_i^{t+1}. The goal of the velocity reflection is to explicitly allow particles to move back into the search space. In the Velocity Set to Zero strategy, if a particle is set on the i-th boundary, then the corresponding velocity component is set to zero, i.e., v_i^{t+1} = 0. For the shrink
approach, both Velocity Recomputed and Velocity Set to Zero strategies are used.
For PSO, a recently proposed hyperbolic [11] constraint-handling approach is also included in this study. This strategy operates by first calculating the velocity according to the standard mechanism (15), and in the case of a violation a linear normalization is performed on the velocity to restrict the solution from jumping out of the constrained boundary, as follows:

$$v_{i,t+1} = \frac{v_{i,t+1}}{1 + \dfrac{|v_{i,t+1}|}{\min\left(x_i^{(U)} - x_{i,t},\; x_{i,t} - x_i^{(L)}\right)}}. \qquad (17)$$
Essentially, the closer the particle gets to the boundary (e.g., x_{i,t} only slightly smaller than x_i^(U)), the more difficult it becomes to reach the boundary. In fact, the particle is never completely allowed to reach the boundary as the velocity tends to zero. We emphasize again that this strategy is only applicable to PSO. A standard PSO is employed in this study with a population size of 100. The results for all the above scenarios with PSO are presented in Tables 1, 2, 3 and 4.
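The velocity-handling variants described above amount to a small piece of bookkeeping after the repair step. The sketch below illustrates one way to organize it in C; the enum values and function signature are illustrative, and it assumes v already holds the velocity computed by the standard PSO update.

```c
/* Sketch of the velocity bookkeeping of Sect. 4.1. x_old is x_t, x_new is the
 * (possibly infeasible) x_{t+1}, and y is the repaired position returned by
 * any of the constraint-handling methods. */
enum vel_mode { VEL_UNCHANGED, VEL_RECOMPUTED, VEL_SET_TO_ZERO, VEL_REFLECTED };

void update_after_repair(int n, const double *x_old, const double *x_new,
                         const double *y, const double *xl, const double *xu,
                         double *x, double *v, enum vel_mode mode)
{
    for (int i = 0; i < n; i++) {
        int violated = (x_new[i] < xl[i]) || (x_new[i] > xu[i]);
        x[i] = y[i];                          /* particle moves to the repaired point */
        switch (mode) {
        case VEL_UNCHANGED:                   /* Eq. (15): velocity based on x_new */
            v[i] = x_new[i] - x_old[i];
            break;
        case VEL_RECOMPUTED:                  /* Eq. (16): velocity consistent with y */
            v[i] = y[i] - x_old[i];
            break;
        case VEL_SET_TO_ZERO:                 /* used with SetOnBoundary / Shrink */
            if (violated) v[i] = 0.0;
            break;
        case VEL_REFLECTED:                   /* flip the violating component */
            if (violated) v[i] = -v[i];
            break;
        }
    }
}
```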
The extensive simulation results are summarized using the following method. For each (say the j-th) of the 14 approaches, the corresponding number of successful applications (ν_j) is recorded. Here, an application is considered to be successful if more than 45 runs out of 50 are able to find the optimum within the specified accuracy.
Table 1 Results on Felp with PSO for the 10^-10 termination criterion
Strategy Velocity update Best Median Worst
Felp in [0,10]: on the boundary
IP spread Recomputed 39,900 47,000 67,000
IP conned Recomputed 47,900 (49) 88,600 140,800 Exp. spread Recomputed 3.25e01 5.02e01 1.08e+00
Exp. conned Recomputed 4600 5900 7500 Periodic Recomputed 3.94e+02 (DNC) 6.63e+02 1.17e+03 Periodic Unchanged 8.91e+02 (DNC) 1.03e+03 1.34e+03 Random Recomputed 1.97e+01 (DNC) 3.37e+01 8.10e+01 Random Unchanged 5.48e+02 (DNC) 6.69e+02 9.65e+02 SetOnBoundary Recomputed 900 (44) 1,300 5,100 SetOnBoundary Reected 242,100 387,100 811,400 SetOnBoundary Set to zero 1300 (48) 1900 4100 Shrink Recomputed 8200 (49) 10,900 14,300 Shrink Set to zero 33,000 40,700 53,900 Hyperbolic Modied (Eq. 17) 14,100 15,100 16,500 Felp in [10,10]: at the center
IP spread Recomputed 31,600 34,000 37,900 IP conned Recomputed 30,900 33,800 38,500 Exp. spread Recomputed 30,500 34,700 38,300 Exp. conned Recomputed 31,900 35,100 38,200 Periodic Recomputed 32,200 35,100 37,900 Periodic Unchanged 33,800 36,600 41,200 Random Recomputed 31,900 34,800 37,400 Random Unchanged 31,600 34,900 38,100 SetOnBoundary Recomputed 31,900 35,500 40,500 SetOnBoundary Reected 50,800 (38) 83,200 484,100 SetOnBoundary Set to Zero 31,600 35,000 37,200 Shrink Recomputed 32,000 34,400 48,200 Shrink Set to zero 31,400 34,000 37,700 Hyperbolic Modied (Eq. 17) 29,400 31,200 34,700 Felp in [1,10]: close to boundary
IP spread Recomputed 28,200 31,900 35,300IP conned Recomputed 28,300 32,900 44,600 Exp. spread Recomputed 28,300 30,700 33,200 Exp. conned Recomputed 29,500 33,000 44,700 Periodic Recomputed 4.86e+01 (DNC) 1.41e+02 4.28e+02 Periodic Unchanged 2.84e+02(DNC) 5.46e+02 8.28e+02 Random Recomputed 36,900 41,900 45,600
Table 1 continued
Strategy Velocity update Best Median Worst
Random Unchanged 1.13e+02(DNC) 2.26e+02 4.35e+02 SetOnBoundary Recomputed 1.80e+01 (DNC) 7.60e+01 3.00e+02 SetOnBoundary Reected 2.13e01 (DNC) 2.17e+01 1.06e+02
SetOnBoundary Set to Zero 31,700 (2) 31,700 32,600 Shrink Recomputed 29,500 (6) 36,100 42,300 Shrink Set to Zero 28,400 (36) 32,700 65,600 Hyperbolic Modied (Eq. 17) 25,900 29,200 31,000
Table 2 Results on Fsch with PSO for the 10^-10 termination criterion
Strategy Velocity Best Median Worst
Fsch in [0,10]: on the boundary
IP spread Recomputed 67,200 257,800 970,400
IP conned Recomputed 112,400 (6) 126,500 145,900 Exp. spread Recomputed 3.79e+00 (DNC) 8.37e+00 1.49e+01 Exp. conned Recomputed 4900 6100 13,500 Periodic Recomputed 4.85e+03 (DNC) 7.82e+03 1.34e+04 Periodic Unchanged 7.69e+03 (DNC) 1.11e+04 1.51e+04 Random Recomputed 2.61e+02 (DNC) 5.44e+02 1.05e+03 Random Unchanged 5.30e+03 (DNC) 7.60e+03 1.22e+04 SetOnBoundary Recomputed 800 (30) 1100 3900 SetOnBoundary Reected 171,500 241,700 434,200 SetOnBoundary Set to Zero 1000 (40) 1600 5300 Shrink Recomputed 6900 9100 11,600 Shrink Set to zero 17,900 31,900 49,800 Hyperbolic Modied (Eq. 17) 36,400 41,700 48,700 Fsch in [10,10]: at the center
IP spread Recomputed 106,700 127,500 144,300IP conned Recomputed 111,500 130,100 149,900Exp. spread Recomputed 112,300 131,400 149,000Exp. conned Recomputed 116,400 131,300 148,200 Periodic Recomputed 113,400 130,900 150,600 Periodic Unchanged 121,200 137,800 159,100 Random Recomputed 112,900 129,800 151,100 Random Unchanged 117,000 130,600 148,100 SetOnBoundary Recomputed 118,500 (49) 132,300 161,100 SetOnBoundary Reected 3.30e06 (DNC) 8.32e+01(DNC) 2.95e+02 (DNC)
SetOnBoundary Set to zero 111,900 132,200 149,700 Shrink. Recomputed 111,800 (49) 131,800 183,500 Shrink. Set to zero 108,400 125,100 143,600
Table 2 continued
Strategy Velocity Best Median Worst
Hyperbolic Modied (Eq. 17) 101,300 117,700 129,700 Fsch in [1,10]: close to boundary
IP spread Recomputed 107,200 130,400 272,400IP conned Recomputed 120,100 (44) 171,200 301,200Exp. spread Recomputed 92,800 109,200 126,400Exp. conned Recomputed 110,200 127,400 256,100 Periodic Recomputed 8.09e+02 (DNC) 2.01e+03 (DNC) 5.53e+03(DNC) Periodic Unchanged 2.16e+03 (DNC) 4.36e+03 (DNC) 6.87e+03 (DNC) Random Recomputed 123,300 165,600 280,000 Random Unchanged 8.17e+02 (DNC) 1.96e+03 (DNC) 2.68e+03 (DNC) SetOnBoundary Recomputed 2.50e+00 (DNC) 1.25e+01 (DNC) 5.75e+02 (DNC) SetOnBoundary Reected 1.86e+00 (DNC) 7.76e+00 (DNC) 5.18e+01 (DNC) SetOnBoundary Set to zero 1.00e+00 (DNC) 5.00e+00 (DNC) 4.21e+02 (DNC) Shrink Recomputed 5.00e-01 (DNC) 3.00e+00 (DNC) 1.60e+01 (DNC) Shrink Set to zero 108,300 (8) 130,300 143,000 Hyperbolic Modied (Eq. 17) 93,100 108,300 119,000
It is observed that IP-S is successful in 10 out of the 12 problem instances. The exponential confined approach (Exp-C) is successful in 9 cases. To investigate the required number of function evaluations (FE) needed to find the optimum by an approach (say j), we compute the average number of FE (denoted here FE̅_k) needed to solve a particular problem (k) and construct the following objective for the j-th approach:

$$\text{FE-ratio}_j = \frac{1}{\nu_j}\sum_{k=1}^{12} \frac{FE_{jk}}{\overline{FE}_k}, \qquad (18)$$

where FE_jk is the number of FEs needed by the j-th approach to solve the k-th problem and ν_j is the number of problem instances solved by the j-th approach. Figure 8 shows the performance of each (j-th) of the 14 approaches on the two-axis plot (ν_j and FE-ratio_j).
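A small helper that computes the summary metric of Eq. (18) from a matrix of per-problem function-evaluation counts might look as follows; the array layout, the use of a negative entry to mark an unsuccessful application, and the restriction of the sum to solved problems are assumptions made for illustration.

```c
/* Sketch of the summary metric of Eq. (18): fe[j][k] holds the average FEs
 * approach j needed on problem k (a negative entry marks an unsuccessful
 * application); fe_mean[k] is the per-problem average used for normalization. */
#define N_APPROACHES 14
#define N_PROBLEMS   12

double fe_ratio(const double fe[N_APPROACHES][N_PROBLEMS],
                const double fe_mean[N_PROBLEMS], int j, int *n_success)
{
    double sum = 0.0;
    int nu = 0;
    for (int k = 0; k < N_PROBLEMS; k++) {
        if (fe[j][k] < 0.0) continue;          /* approach j failed on problem k */
        sum += fe[j][k] / fe_mean[k];
        nu++;
    }
    *n_success = nu;                            /* nu_j, plotted on the other axis */
    return (nu > 0) ? sum / nu : 0.0;           /* Eq. (18) */
}
```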
The best approaches should have large ν_j values and small FE-ratio_j values. This results in a trade-off among the three best approaches, which are marked with filled circles. All other 11 approaches are dominated by these three approaches. The SetBound (SetOnBoundary) approach with velocity set to zero succeeds in only six out of 12 problem instances. Thus, we ignore this approach. There is a clear trade-off between the IP-S and Exp-C approaches. IP-S solves one problem more, but requires more FEs in comparison to Exp-C. Hence, we recommend the use of both of these methods vis-à-vis all other methods used in this study.
Other conclusions of this extensive study of PSO with different constraint-handling methods are summarized as follows:
Table 3 Results on Fack with PSO for the 10^-10 termination criterion
Strategy Best Median Worst
Fack in [0,10]: on the boundary
IP spread Recomputed 150,600 (49) 220,900 328,000
IP conned Recomputed 4.17e+00 (DNC) 6.53e+00 8.79e+00 Exp. spread Recomputed 2.76e01 (DNC) 9.62e01 2.50e+00
Exp. conned Recomputed 7800 9600 11,100 Periodic Recomputed 6.17e+00 (DNC) 6.89e+00 9.22e+00 Periodic Unchanged 8.23e+00 (DNC) 9.10e+00 9.68e+00 Random Recomputed 3.29e+00 (DNC) 3.40e+00 4.19e+00 Random Unchanged 6.70e+00 (DNC) 7.46e+00 8.57e+00 SetOnBoundary Recomputed 800 1100 2100 SetOnBoundary Reected 420,600 598,600 917,400 SetOnBoundary Set to zero 1100 1800 3100 Shrink. Recomputed 33,800 (5) 263,100 690,400 Shrink. Set zero 3.65e+00 (DNC) 6.28e+00 8.35e+00 Hyperbolic Modied (Eq. 17) 24,600 (25) 26,100 28,000 Fack in [10,10]: at the center
IP spread Recomputed 53,900 (46) 58,600 66,500 IP conned Recomputed 54,800 (49) 59,200 64,700 Exp. spread Recomputed 55,100 59,300 63,600 Exp. conned Recomputed 56,800 59,600 65,000 Periodic Recomputed 55,700 (48) 59,900 64,700 Periodic Unchanged 57,900 (49) 62,100 66,700 Random Recomputed 55,100 (47) 59,400 65,100 Random Unchanged 56,300 59,700 65,500 SetOnBoundary Recomputed 55,100 (49) 58,900 65,400 SetOnBoundary Reected 86,900 (4) 136,400 927,600 SetOnBoundary Set to zero 53,900 (49) 59,600 67,700 Shrink Recomputed 55,800 (47) 58,700 65,800 Shrink Set to zero 55,700 (49) 58,900 62,000 Hyperbolic Modied (Eq. 17) 52,900 (49) 56,200 64,400 Fack in [1,10]: close to boundary
IP spread Recomputed 54,600 (5) 55,100 56,600IP conned Recomputed 63,200 (1) 63,200 63,200 Exp. spread Recomputed 51,300 55,200 58,600 Exp. conned Recomputed 1.42e+00 (DNC) 2.17e+00 2.92e+00 Periodic Recomputed 2.88e+00 (DNC) 4.03e+00 5.40e+00 Periodic Unchanged 6.61e+00 (DNC) 7.46e+00 8.37e+00 Random Recomputed 60,300 (45) 66,200 72,200
Table 3 continued
Strategy Best Median Worst
Random Unchanged 4.21e+00 (DNC) 4.93e+00 6.11e+00 SetOnBoundary Recomputed 2.74e+00 (DNC) 3.16e+00 3.36e+00 SetOnBoundary Reected 824,700 (1) 824,700 824,700 SetOnBoundary Set to zero 1.70e+00 (DNC) 2.63e+00 3.26e+00 Shrink Recomputed 1.45e+00 (DNC) 2.34e+00 2.73e+00 Shrink Set to zero 2.01e+00 (DNC) 3.96e+00 6.76e+00 Hyperbolic Modied (Eq. 17) 50,000 (39) 53,500 58,100
Table 4 Results on Fros with PSO for the 10^-10 termination criterion
Strategy Best Median Worst
Fros in [1,10]: on the boundary
IP spread Recomputed 89,800 195,900 243,300
IP conned Recomputed 23,800 164,300 209,300 Exp. spread Recomputed 9.55e01 (DNC) 2.58e+00 7.64e+00
Exp. conned Recomputed 3700 128,100 344,400 Periodic Recomputed 1.24e+04 (DNC) 2.35e+04 4.24e+04 Periodic Unchanged 6.99e+04 (DNC) 1.01e+05 1.45e+05 Random Recomputed 6.00e+01 (DNC) 1.37e+02 4.42e+02 Random Unchanged 2.32e+04 (DNC) 3.90e+04 8.22e+04 SetOnBoundary Recomputed 900 (45) 1600 89,800 SetOnBoundary Reected 2.14e03 (DNC) 6.01e+02 5.10e+04
SetOnBoundary Set to zero 1400 (48) 3000 303,700 Shrink. Recomputed 3,900 (44) 5,100 406,000 Shrink. Set to zero 15,500 136,200 193400 Hyperbolic 177,400 (45) 714,300 987,500 Fros in [8,10]: near the center
IP spread Recomputed 302,300 (28) 774,900 995,000 IP conned Recomputed 296,600 (32) 729,000 955,000 Exp. spread Recomputed 208,800 (24) 754,700 985,200 Exp. conned Recomputed 301,100 (33) 801,400 961,800 Periodic Recomputed 26,200 (27) 705,100 986,200 Periodic Unchanged 247,300 (32) 776,800 994,900 Random Recomputed 311,200 (30) 809,300 990,800 Random Unchanged 380,100 (29) 793,300 968,300 SetOnBoundary Recomputed 248,700 (35) 795,600 973,900 SetOnBoundary Reected 661,900 (01) 661,900 661,900 IP spread Recomputed 184,600 (47) 442,200 767,500 IP conned Recomputed 229,900 (40) 457,600 899,200 Exp. spread Recomputed 19,400 (47) 378,200 537,300
Table 4 continued
Strategy Best Median Worst
SetOnBoundary Set to zero 117,400 (25) 858,400 995,400Shrink Recomputed 347,900 (33) 790,500 996,300Shrink Set to zero 353,300 (26) 788,700 986,800 Hyperbolic Modied (Eq. 17) 6.47e-08 (DNC) 1.27e-04 (DNC) 6.78e+00 (DNC) Fros in [1,10]: close to boundary
Exp. conned Recomputed 6.79e03 (DNC) 4.23e+00 6.73e+01
Periodic Recomputed 1.51e02 (DNC) 3.73e+00 5.17e+02
Periodic Unchanged 1.92e+04 (DNC) 2.86e+04 6.71e+04 Random Recomputed 103,800 432,200 527,200 Random Unchanged 2.33e+02 (DNC) 1.47e+03 4.23e+03 SetOnBoundary Recomputed 1.71e+01 (DNC) 1.87e+01 3.13e+02 SetOnBoundary Reected 6.88e+00 (DNC) 5.52e+02 2.14e+04 SetOnBoundary Set to zero 6.23e+00 (DNC) 1.80e+01 3.12e+02 Shrink Recomputed 350,300 (3) 350,900 458,400 Shrink Set to zero 163,700 (26) 418,000 531,900 Hyperbolic Modied (Eq. 17) 920,900 (1) 920,900 920,900
[Fig. 8: Performance comparison of the 14 approaches on two metrics (number of problems solved successfully and function evaluation ratio) with the PSO algorithm]
1. The constraint-handling methods show a large variation in the performance depending on the choice of test problem and location of the optimum in the allowable variable range.
2. When the optimum is on the variable boundary, periodic and random allocation methods perform poorly. This is expected intuitively.
3. When the optimum is on the variable boundary, methods that set infeasible solutions on the violated boundary (SetOnBoundary methods) work very well for obvious reasons, but these methods do not perform well for other cases.
4. When the optimum lies near the center of the allowable range, most constraint-handling approaches work almost equally well. This can be understood intuitively from the fact that the tendency of particles to fly out of the search space is small when the optimum is in the center of the allowable range. For example, the periodic approaches fail in all the cases, but are able to demonstrate some convergence characteristics for all test problems when the optimum is at the center. When the optimum is on the boundary or close to the boundary, the effect of the chosen constraint-handling method becomes critical.
5. The shrink method (with Velocity Recomputed and Velocity Set Zero strategies) succeeded in 10 of the 12 cases.
4.2 Parametric study of α
The proposed IP approaches involve a parameter α affecting the variance of the probability distribution for the mapped variable. In this section, we perform a parametric study of α to determine its effect on the performance of the IP-S approach.
The following values of α are chosen: 0.1, 1, 10, and 1,000. To get an idea of the effect of α, we plot the probability distribution of the mapped values in the allowable range ([1, 10]: On the Boundary) for d_v = 1.0 in Fig. 9. It can be seen that for α = 10 and 1000 the distribution is almost uniform.

[Fig. 9: Probability distribution function for different α values. The x-axis denotes the distance from the violated boundary; x = 1 corresponds to the violated boundary and the child is created at x = 0.]
Figure 10 shows the effect of α on the Felp problem. For the same termination criterion we find that α = 0.1 and 1.0 perform better compared to the other values. With larger values of α, the IP-S method does not even find the desired solution in all 50 runs.
[Fig. 10: Performance of the PSO algorithm with the IP-S approach for different α values on the Felp problem]
4.3 Results with differential evolution (DE)
Differential evolution, originally proposed in [24], has gained popularity as an efficient evolutionary optimization algorithm. The developers of DE proposed a total of ten different strategies [20]. In [2] it was shown that the performance of DE largely depends upon the choice of the constraint-handling mechanism. We use Strategy 1 (where the offspring is created around the population-best solution), which is most suited for solving unimodal problems [18]. A population size of 50 was chosen with parameter values CR = 0.5 and F = 0.7. Other parameters are set the same as before.
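As a rough illustration of where a repair method plugs into DE, the sketch below creates a trial vector around the population-best solution (in the spirit of Strategy 1) and then hands it to a generic repair routine. The binomial crossover and all names are assumptions made for illustration, not necessarily the exact variant of the DE code used in the study.

```c
/* Sketch of a DE/best/1 trial-vector creation followed by a repair call. */
#include <stdlib.h>

static double rand01(void) { return rand() / (double)RAND_MAX; }

void de_trial(int n, const double *best, const double *xr1, const double *xr2,
              const double *target, double F, double CR,
              const double *xl, const double *xu, double *trial,
              void (*repair)(int, const double *, double *,
                             const double *, const double *))
{
    int jrand = rand() % n;                        /* force at least one mutated gene */
    for (int j = 0; j < n; j++) {
        if (rand01() < CR || j == jrand)
            trial[j] = best[j] + F * (xr1[j] - xr2[j]);   /* DE/best/1 mutation */
        else
            trial[j] = target[j];
    }
    /* trial may violate the variable bounds; hand it to any repair method,
     * with the target vector playing the role of the feasible parent. */
    repair(n, target, trial, xl, xu);
}
```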
We use S = 10^-10 as our termination criterion. The results are tabulated in Tables 5, 6, 7 and 8. The following observations can be drawn:
1. For problems having optimum at one of the boundaries, SetOnBoundary approach performs the best. This is not a surprising result.
2. However, for problems having the optimum near the center of the allowable range, almost all eight algorithms perform in a similar manner.
3. For problems having their optimum close to one of the boundaries, the proposed IP and existing exponential approaches perform better than the rest of the approaches with DE.
Despite the differences, the somewhat similar performance of the different constraint-handling approaches with DE indicates that DE is an efficient optimization algorithm and that its performance is somewhat less dependent on the choice of the constraint-handling scheme compared to the PSO algorithm.
4.4 Results with real-parameter genetic algorithms (RGAs)
We have used two real-parameter GAs in our study here:
1. Standard-RGA using the simulated binary crossover (SBX) [9] operator and the polynomial mutation operator [8].
Table 5 Results on Felp with DE for the 10^-10 termination criterion
Strategy Best Median Worst
Felp in [0,10]: on the boundary
IP spread 25,600 26,850 27,650
IP conned 22,400 23,550 24,200 Exp. spread 38,350 39,800 41,500 Exp. conned 19,200 20,700 21,900 Periodic 42,400 43,700 45,050 Random 40,650 43,050 44,250 SetOnBoundary 2850 3350 3900 Shrink 4050 4900 5850 Felp in [10,10]: at the center
IP spread 29,950 31,200 32,500 IP conned 29,600 31,200 32,400 Exp. spread 29,950 31,300 32,400 Exp. conned 30,500 31,400 32,250 Periodic 29,650 31,300 32,400 Random 30,000 31,200 31,250 SetOnBoundary 29,850 31,200 32,700 Shrink 30,300 31,250 32,750 Felp in [1,10]: close to boundary
IP spread 28,550 29,600 30,550 IP conned 28,500 29,500 30,650 Exp. spread 28,050 28,900 29,850 Exp. conned 28,150 29,050 29,850 Periodic 29,850 30,850 32,100 Random 28,900 30,200 31,000 SetOnBoundary 28,650 29,600 30,500 Shrink 28,800 29,900 31,200
In this approach, variables are expressed as real numbers initialized within the allowable range of each variable. The SBX and polynomial mutation operators can create infeasible solutions. A violated boundary, if any, is handled using one of the approaches studied in this paper. Later we shall investigate a rigid-boundary implementation of these operators, which ensures the creation of feasible solutions in every recombination and mutation operation.

2. Elitist-RGA, in which the two newly created offspring are compared against the two parents, and the best two out of these four solutions are retained as parents (thereby introducing elitism). Here, the offspring solutions are created using non-rigid versions of the SBX and polynomial mutation operators. As before, we test eight different constraint-handling approaches and later explore a rigid-boundary implementation of the operators in the presence of elite preservation.
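The reason the RGAs need a boundary-handling step at all is visible from the (non-rigid) SBX recombination itself: the spread factor can exceed one, so a child can land outside the variable bounds. A minimal per-variable C sketch is given below; the guard on the random number and the function name are illustrative.

```c
/* Sketch of the non-rigid, per-variable SBX recombination used by the RGAs:
 * the spread factor beta follows the SBX polynomial distribution with index
 * eta_c, and the resulting children may fall outside [xl, xu], which is the
 * situation the constraint-handling methods have to repair. */
#include <math.h>
#include <stdlib.h>

/* uniform deviate in (0,1) to avoid a division by zero below */
static double rand01(void) { return (rand() + 0.5) / ((double)RAND_MAX + 1.0); }

void sbx_variable(double p1, double p2, double eta_c, double *c1, double *c2)
{
    double u = rand01();
    double beta = (u <= 0.5) ? pow(2.0 * u, 1.0 / (eta_c + 1.0))
                             : pow(1.0 / (2.0 * (1.0 - u)), 1.0 / (eta_c + 1.0));
    *c1 = 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2);
    *c2 = 0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2);
    /* note: no clipping here -- a child outside the variable bounds must be
     * handled by one of the repair strategies (or by the rigid-bound SBX). */
}
```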
Table 6 Results on Fsch with DE for the 10^-10 termination criterion
Strategy Best Median Worst
Fsch in [0,10]: on the boundary
IP spread 26,600 27,400 28,000
IP conned 22,450 23,350 24,300 Exp. spread 40,500 42,050 43,200 Exp. conned 19,650 20,350 22,050 Periodic 44,700 46,300 48,250 Random 43,850 45,150 47,000 SetOnBoundary 2100 3100 3750 Shrink 3450 4400 5100 Fsch in [10,10]: at the center
IP spread 258,750 281,650 296,300 IP conned 268,150 283,050 300,450 Exp. spread 266,850 283,950 304,500 Exp. conned 266,450 283,700 305,550 Periodic 269,700 284,100 310,100 Random 263,300 282,600 306,250 SetOnBoundary 267,750 284,550 298,850 Shrink 263,600 282,750 304,350 Fsch in [1,10]: close to boundary
IP spread 228,950 242,300 255,700 IP conned 232,200 243,900 263,400 Exp. spread 227,550 243,000 261,950 Exp. conned 228,750 243,800 262,500 Periodic 231,950 247,150 260,700 Random 228,550 244,850 261,900 SetOnBoundary 237,100 255,750 266,400 Shrink 234,000 253,250 275,550
Parameters for RGAs are chosen as follows: population size of 100, crossover probability pc = 0.9, mutation probability pm = 0.05, distribution index for crossover ndist.,c = 2, distribution index for mutation ndist.,m = 100.
The results for the Standard-RGA are shown in Tables 9, 10, 11 and 12 for the four different test problems. Tables 13, 14, 15 and 16 show the results using the Elitist-RGA. The following key observations can be made:
1. For all four test problems, Standard-RGA shows convergence only when the optimum is on the boundary.
2. Elitist-RGA shows good convergence on Felp when the optimum is on the boundary, and only some convergence is noted when the optimum is at the other locations. For the other three problems, convergence is only obtained when the optimum is on the boundary.
Table 7 Results on Fack with DE for the 10^-10 termination criterion
Strategy Best Median Worst
Fack in [0,10]: on the boundary
IP spread 43,400 44,950 45,950
IP conned 37,300 38,700 40,350 Exp. spread dist. 66,300 69,250 71,300 Exp. conned dist 32,750 34,600 36,200 Periodic 72,500 74,250 75,900 Random 70,650 73,000 74,750 SetOnBoundary 2550 3250 3950 Shrink 3500 4700 5300 Fack in [10,10]: at the center
IP spread 50,650 52,050 53,450 IP conned 51,050 52,200 53,800 Exp. spread 51,200 52,150 53,400 Exp. conned 51,100 52,300 53,850 Periodic 51,250 52,250 53,500 Random 50,950 52,200 53,450 SetOnBoundary 50,950 52,300 53,450 Shrink 50,450 52,300 53,550 Fack in [1,10]: close to boundary
IP spread 49,100 50,650 51,650 IP conned 48,650 50,400 52,100 Exp. spread 48,300 49,900 51,750 Exp. conned 48,900 50,000 51,250 Periodic 50,400 51,950 53,300 Random 50,250 51,200 52,150 SetOnBoundary 49,900 (33) 51,100 53,150 Shrink 50,200 51,400 52,750
3. Overall, the performance of Elitist-RGA is comparable to or slightly better than that of Standard-RGA.
The Did Not Converge cases can be explained by the fact that the SBX operator has the property of creating solutions around the parents if the parents are close to each other. This property is a likely cause of premature convergence as the population gets closer to the optimum. Furthermore, the results suggest that the elitism implemented in this study (parent-child comparison) is not very effective.
Although RGAs are able to locate the optima, they are unable to fine-tune them due to undesired properties of the generation scheme. This emphasizes the fact that the generation scheme is primarily responsible for creating good solutions, and the constraint-handling methods cannot act as a surrogate for generating efficient solutions.
Table 8 Results on Fros with DE for the 10^-10 termination criterion
Strategy Best Median Worst
Fros in [1,10]: on the boundary
IP spread 38,850 62,000 89,700
IP conned 24,850 45,700 73,400 Exp. spread 57,100 86,800 118,600 Exp. conned 16,600 21,400 79,550 Periodic 69,550 93,500 18,1150 Random 65,850 92,950 157,600 SetOnBoundary 2950 4700 30,450 Shrink 5450 8150 55,550 Fros in [8,10]: at the center
IP spread 133,350 (41) 887,250 995,700 IP conned 712,500 (44) 854,800 991,400 Exp. spread 390,700 (48) 866,150 998,950 Exp. conned 138,550 (40) 883,500 994,350 Periodic 764,650 (39) 874,700 999,650 Random 699,400 (49) 885,450 999,600 SetOnBoundary 743,600 (38) 865,450 995,500 Shrink 509,900 (40) 873,450 998,450 Fros in [0,10]: close to boundary
IP spread 36,850 78,700 949,700 IP conned 46,400 (46) 95,900 891,450 Exp. spread 49,550 (49) 85,900 968,200 Exp. conned 87,300 (43) 829,200 973,350 Periodic 38,750 62,200 94,750 Random 41,200 61,300 461,500 SetOnBoundary 8.23E+00 (DNC) 1.62E+01 1.89E+01 Shrink 252,650 (9) 837,700 985,750
Each step of the evolutionary search should be designed effectively in order to achieve overall success. On the other hand, one could argue that strategies such as increasing the mutation rate (in order to promote diversity and avoid premature convergence) should be tried; however, the creation of good and meaningful solutions in the generation scheme is an important and desired fundamental feature.
As expected, when the optimum is on the boundary, SetOnBoundary finds the optimum most efficiently, within a minimum number of function evaluations. As with PSO, the performance of Exp. confined is better than that of Exp. spread. Periodic and Random show comparable or slightly worse performance (these mechanisms do not have any preference for creating solutions close to the boundary and actually promote spread of the population).
Table 9 Results on Felp with Standard-RGA for the 10^-10 termination criterion
Strategy Best Median Worst
Felp in [0,10]
IP spread 9200 10,500 12,900
IP conned 7900 9400 10,900 Exp. spread 103,100 (6) 718,900 931,200 Exp. conned 4500 5700 7000 Periodic 15,200 (1) 15,200 15,200 Random 68,300 (12) 314,700 939,800 SetOnBoundary 1800 2400 2800 Shrink 3700 5100 6600 Felp in [10,10]
2.60e02 (DNC)
Felp in [1,10]1.02e02 (DNC)
Table 10 Results on Fsch with Standard-RGA for the 10^-10 termination criterion
Strategy Best Median Worst
Fsch in [0,10]
IP spread 6800 9800 11,800
IP conned 6400 8200 10,300 Exp. spread 21,200 (47) 180,000 772,200 Exp. conned 4300 5500 6300 Periodic 14,800 (26) 143,500 499,400 Random 8700 (43) 195,200 979,300 SetOnBoundary 1800 2300 2900 Shrink 3600 4600 5500 Fsch in [10,10]
1.20e01 (DNC)
Fsch in [1,10]8.54e02 (DNC)
4.4.1 RGAs with rigid boundary
We also tested the RGA (standard and its elitist version) with a rigid-bound consideration in its operators. In this case, the probability distributions of both the SBX and polynomial mutation operators are changed so as to always create a feasible solution. It is found that the Standard-RGA with rigid bounds shows convergence only when the optimum is on the boundary (Table 17). The performance of Elitist-RGA with rigid bounds is slightly better (Table 18). Overall, SBX operating within the rigid bounds is found to perform slightly better compared to the RGAs employing boundary-handling mechanisms. However, as mentioned earlier, in scenarios where the generation scheme cannot guarantee the creation of feasible-only solutions, there is a definite need for constraint-handling strategies.
Table 11 Results on Fack with Standard-RGA for the 10^-10 termination criterion
Strategy Best Median Worst
Fack in [0,10]
IP spread 12,100 22,600 43,400
IP conned 9800 13,200 16,400 Exp. spread 58,100 (29) 355,900 994,000 Exp. conned 6300 9100 11,900 Periodic 19,600 (46) 122,300 870,200 Random 35,700 (38) 229,200 989,500 SetOnBoundary 1800 2500 3100 Shrink 4200 5700 8600 Fack in [10,10]
7.76e02(DNC)
Fack in [1,10]4.00e02 (DNC)
Table 12 Results on Fros with Standard-RGA for the 10^-10 termination criterion
Strategy Best Median Worst
Fros in [1,10]: on the boundaryIP spread 12,400 (39) 15,800 20,000
IP conned 9400 (39) 11,800 13,600 Exp. spread 9.73e+00 (DNC) 1.83e+00 2.43e+01 Exp. conned 6000 6900 8200 Periodic 6.30E+01 (DNC) 4.92e+02 5.27e+04 Random 3.97e+02 (DNC) 9.28e+02 1.50e+03 SetOnBoundary 1900 2700 3400 Shrink 4100 5200 6500 Fros in [8,10]
3.64e+00 (DNC)
Fros in [1,10]: on the boundary1.04e+01(DNC)
5 Higher-dimensional problems
As the dimensionality of the search space increases, it becomes difficult for a search algorithm to locate the optimum, and constraint-handling methods play an even more critical role in such cases. So far in this study, DE has been found to be the best-performing algorithm. Next, we consider all four unimodal test problems with increasing problem size: n = 20, 50, 100, 200, 300, and 500. For all problems the variable bounds were chosen such that the optimum occurred near the center of the search space. No population scaling is used for DE. The DE parameters are chosen as F = 0.8 and CR = 0.9. With the IP distribution parameter set to 1.2 we were able to achieve a high degree of convergence, and the results are shown in Fig. 11.
Table 13 Results on Felp with Elite-RGA for termination criterion of 10^-10
Strategy Best Median Worst
Felp in [0,10]:
IP spread 6600 8000 9600
IP confined 6300 8100 9800
Exp. spread 4800 6900 8300
Exp. confined 4600 5800 6700
Periodic 6500 8800 11,500
Random 6400 7900 10,300
SetOnBoundary 2200 2600 3500
Shrink 4000 5200 6700
Felp in [-10,10]:
IP spread 980,200 (1) 980,200 980,200
IP confined 479,000 (1) 479,000 479,000
Exp. spread 2.06e-01 (DNC) 4.53e-01 4.86e-01
Exp. confined 954,400 (1) 954,400 954,400
Periodic 1.55e-01 (DNC) 2.48e-01 2.36e-01
Random 1.92e-01 (DNC) 2.00e-01 2.46e-01
SetOnBoundary 2.11e-01 (DNC) 2.95e-01 1.94e-01
Shrink 530,900 (3) 654,000 779,000
Felp in [-1,10]:
IP spread 803,400 (5) 886,100 947,600
IP confined 643,300 (2) 643,300 963,000
Exp. spread 593,300 (3) 628,900 863,500
Exp. confined 655,400 (3) 940,500 946,700
Periodic 653,800 (3) 842,900 843,100
Random 498,500 (2) 498,500 815,500
SetOnBoundary 593,800 (5) 870,500 993,500
Shrink 781,000 (2) 781,000 928,300
As seen from the figure, although the required number of function evaluations is expected to increase with the number of variables, the observed increase is sub-quadratic. Each case is run 20 times and the termination criterion is set to 10^-10. All 20 runs are found to be successful in each case, demonstrating the robustness of the method in terms of finding the optimum with high precision. Particularly for problems with a large number of variables, complex search spaces, and highly nonlinear constraints, such a methodology should be useful in real-world applications.
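The sub-quadratic slopes reported in Fig. 11 can be obtained by fitting a straight line to the scale-up data on a log-log scale: the slope of log(function evaluations) versus log(n) is the empirical growth exponent. A minimal sketch of this post-processing step is given below; the evaluation counts shown are placeholders to be replaced by the measured medians, not values from this study.

import numpy as np

# Problem sizes used in the scale-up study.
n_values = np.array([20, 50, 100, 200, 300, 500], dtype=float)

# Placeholder: replace with the median function evaluations measured for a
# given test problem at each n (these numbers are NOT from the paper).
func_evals = np.array([1.2e4, 4.0e4, 9.5e4, 2.6e5, 4.3e5, 8.9e5])

# Fit log(FE) = slope * log(n) + intercept; slope < 2 indicates sub-quadratic
# empirical growth, while slope close to 1 indicates near-linear growth.
slope, intercept = np.polyfit(np.log10(n_values), np.log10(func_evals), 1)
print(f"empirical growth exponent: {slope:.2f}")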
It is worth mentioning that the authors independently tested other scenarios for large-scale problems with the corresponding optimum on the boundary; to achieve convergence with the IP methods, the value of the distribution parameter had to be reduced significantly. Without lowering it, PSO in particular did not show any convergence.
Table 14 Results on Fsch with Elite-RGA for termination criterion of 10^-10
Strategy Best Median Worst
Fsch in [0,10]:
IP spread 5000 6500 7900
IP confined 4900 6500 7900
Exp. spread 4300 5800 7800
Exp. confined 4300 4900 5600
Periodic 5400 7000 11,300
Random 5300 6600 8500
SetOnBoundary 1600 2200 2600
Shrink 3100 4200 5400
Fsch in [-10,10]: 8.12e-05 (DNC)
Fsch in [-1,10]: 1.61e-01 (DNC)
Table 15 Results on Fack with Elite-RGA for termination criterion of 10^-10
Strategy Best Median Worst
Fack in [0,10]:
IP spread 6300 8700 46,500
IP confined 6800 9200 32,000
Exp. spread 5600 6800 8700
Exp. confined 5200 7800 9900
Periodic 6300 9300 12,200
Random 6200 8300 53,700
SetOnBoundary 1900 2500 4000
Shrink 3900 5100 7700
Fack in [-10,10]: 1.03e-01 (DNC)
Fack in [-1,10]: 1.15e+00 (DNC)
As expected, in larger dimensions the probability of sampling a point close to the boundary decreases, and hence a steeper distribution is needed. This highlights the usefulness of the single parameter in modifying the behavior of the IP methods so as to yield the desired performance.
6 General purpose constraint-handling
So far we have carried out simulations on problems where the constraints manifested as variable bounds. The IP methods proposed in this paper can be easily extended and employed for solving nonlinear constrained optimization problems (inclusive of variable bounds).3
3 By general-purpose constraint-handling we imply the tackling of all variable bounds, inequality constraints and equality constraints.
Table 16 Results on Fros with Elite-RGA for termination criterion of 10^-10
Strategy Best Median Worst
Fros in [0,10]:
IP spread 9900 (13) 12,500 14,000
IP confined 10,100 (12) 12,100 14,400
Exp. spread 8500 (10) 11,000 15,400
Exp. confined 6600 (30) 7800 8900
Periodic 9500 (10) 13,300 16,800
Random 14,000 (3) 15,300 16,100
SetOnBoundary 2300 (44) 3200 4500
Shrink 4500 (32) 6100 8100
Fros in [-8,10]: 1.27e+00 (DNC)
Fros in [1,10]: on the boundary 1.49e+00 (DNC)
Table 17 RGA with rigid boundary with termination criterion 10^-10
Strategy Best Median Worst
Optimum on the boundary:
Felp 8100 8500 8800
Fsch 7800 8100 8300
Fack 9500 10,100 10,800
Fros 10,100 (39) 10,900 143,600
Optimum in the center: 3.88e-02 (DNC)
Optimum close to the edge of the boundary: 9.44e-03 (DNC)
Table 18 Elitist-RGA with rigid boundary with termination criterion 10^-10
Strategy Best Median Worst
Optimum on the boundary:
Felp 7300 7900 8400
Fsch 6500 6900 7500
Fack 9400 10,400 12,200
Fros 11,000 (10) 12,700 16,400
Optimum in the center: 1.24e-01 (DNC)
Optimum close to the boundary edge:
Felp 579,800 (3) 885,900 908,600
Fsch 2.73e+00 (DNC) 6.18e+00 1.34e+00
Fack 1.75e-01 (DNC) 8.38e-01 2.93e+00
Fros 3.29e+00 (DNC) 4.91e+00 5.44e+00
Fig. 11 Scale-up study on the four test problems for the DE algorithm with the proposed IP-S approach, showing sub-quadratic performance on all problems (function evaluations versus problem size n = 20-500). The slope of a fitted polynomial line is marked in brackets for each problem: Sch (1.90), Ros (1.61), Elp (1.12), Ack (1.05); linear (slope = 1) and quadratic (slope = 2) limit lines are shown with dashed lines.
As an example, let us consider a generic inequality constraint function $g_j(\vec{x}) \ge 0$, the j-th constraint in a set of J inequality constraints. In an optimization algorithm, every created (offspring) solution $\vec{x}^c$ at an iteration must be checked for its feasibility. If $\vec{x}^c$ satisfies all J inequality constraints, the solution is feasible and the algorithm can proceed with the created solution. But if $\vec{x}^c$ does not satisfy one or more of the J constraints, the solution is infeasible and should be repaired before proceeding further.
Let us illustrate the procedure using the inverse parabolic (IP) approach described in Sect. 3, although the other constraint-handling methods discussed before may also be used. The IP approaches require us to locate two intersection points, $\vec{v}$ and $\vec{u}$: the bounds in the direction of $(\vec{x}^p - \vec{x}^c)$, where $\vec{x}^p$ is one of the parent solutions that created the offspring solution (see Fig. 12). The critical intersection point can be found by computing the roots along this direction with each constraint $g_j(\vec{x})$ and then choosing the smallest root. We define a parameter $\alpha$ as the extent of a point from $\vec{x}^c$, as follows:
$$\vec{x}(\alpha) = \vec{x}^c + \alpha \, \frac{\vec{x}^p - \vec{x}^c}{\|\vec{x}^p - \vec{x}^c\|}. \quad (19)$$
Substituting the above expression for $\vec{x}(\alpha)$^4 in the j-th constraint function, we have the following root-finding problem for the j-th constraint:
$$g_j(\vec{x}(\alpha)) = 0. \quad (20)$$
4 Note that the $\alpha$ used here for parameterizing points along the direction should not be confused with the $\alpha$ parameter introduced in the proposed constraint-handling methods.
Fig. 12 Extension of constraint-handling approaches to constrained optimization problems (the schematic shows a feasible region bounded by constraints g1 and g2, the infeasible offspring $\vec{x}^c$, its parent $\vec{x}^p$, the intersection points $\vec{v}$ and $\vec{u}$ along the direction from $\vec{x}^c$, and the resulting IP-S probability distribution)
Let us say the roots of the above equation are $\alpha_{jk}$ for $k = 1, 2, \ldots, K_j$. The above procedure can now be repeated for all J inequality constraints and the corresponding roots can be found one at a time. Since the extent of $\alpha$ needed to reach $\vec{x}^p$ (measured from $\vec{x}^c$) is given as
$$\alpha_p = \|\vec{x}^p - \vec{x}^c\|,$$
we are now ready to compute the two closest bounds (lower and upper) on $\alpha$ for our consideration, as follows:
$$\alpha_v = \max\{\alpha_{jk} \,|\, 0 \le \alpha_{jk} \le \alpha_p,\; \forall k, \forall j\}, \quad (21)$$
$$\alpha_u = \min\{\alpha_{jk} \,|\, \alpha_{jk} \ge \alpha_p,\; \forall k, \forall j\}. \quad (22)$$
The IP-S and IP-C approaches presented in Sect. 3 can now be used to map the violated variable value $x_i^c$ into the feasible region using $d_l = \alpha_v$ (the lower bound), $d_u = \alpha_u$ (the upper bound) and $d_p = \alpha_p$ (the location of the parent). It is clear that the only difficult aspect of this method is to find the multiple intersection points in the presence of nonlinear constraints.
For the sake of completeness, we show here that the two bounds on each variable, $x_i \ge x_i^{(L)}$ and $x_i \le x_i^{(U)}$, used in the previous sections can also be treated uniformly using the above-described approach. The two bounds can be written together as follows:
$$\left(x_i - x_i^{(L)}\right)\left(x_i^{(U)} - x_i\right) \ge 0. \quad (23)$$
Note that a simultaneously non-positive value of both bracketed terms is not possible; thus the only way to satisfy the above inequality is to make each bracketed term non-negative.
The above inequality can be considered as a single quadratic constraint function, instead of two independent variable bounds, and treated as one combined nonlinear constraint by finding both roots of the resulting quadratic root-finding equation.
Finally, the above procedure can also be extended to handle equality constraints $h_k(\vec{x}) = 0$ with a relaxation, as follows: $-\epsilon_k \le h_k(\vec{x}) \le \epsilon_k$. Again, the two inequalities can be combined together as follows:
$$\epsilon_k^2 - \left(h_k(\vec{x})\right)^2 \ge 0. \quad (24)$$
Alternatively, the above can also be written as $\epsilon_k - |h_k(\vec{x})| \ge 0$, which may be useful for non-gradient-based optimization methods, such as evolutionary algorithms. We now show the working of the above constraint-handling procedure on a number of constrained optimization problems.
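Before turning to those illustrations, note that the procedure above reduces the repair of an infeasible offspring to a one-dimensional root-finding task along the parent direction. The sketch below illustrates this reduction numerically: it brackets and refines the roots of each constraint along the direction toward the parent, computes the quantities of Eqs. (21)-(22), and then places the offspring back inside the feasible interval. For brevity the final placement is uniform rather than the IP-S distribution of Sect. 3, so the helper names (repair_along_parent, find_roots_1d), the search range, and that placement rule are illustrative assumptions, not the paper's exact implementation.

import numpy as np

def find_roots_1d(phi, alpha_max, samples=200):
    """Approximate all roots of a scalar function phi(alpha) on [0, alpha_max]
    by sampling for sign changes and refining each bracket with bisection."""
    alphas = np.linspace(0.0, alpha_max, samples)
    values = np.array([phi(a) for a in alphas])
    roots = []
    for a_lo, a_hi, v_lo, v_hi in zip(alphas[:-1], alphas[1:], values[:-1], values[1:]):
        if v_lo == 0.0:
            roots.append(a_lo)
            continue
        if v_lo * v_hi < 0.0:
            lo, hi = a_lo, a_hi
            for _ in range(60):  # plain bisection refinement
                mid = 0.5 * (lo + hi)
                if phi(lo) * phi(mid) <= 0.0:
                    hi = mid
                else:
                    lo = mid
            roots.append(0.5 * (lo + hi))
    return roots

def repair_along_parent(x_c, x_p, constraints, search_factor=3.0, rng=None):
    """Repair an infeasible offspring x_c using a feasible parent x_p.

    constraints: list of callables g_j with g_j(x) >= 0 meaning 'satisfied'.
    Roots of each g_j along x(alpha) = x_c + alpha*(x_p - x_c)/||x_p - x_c||
    give the feasibility interval [alpha_v, alpha_u] around the parent."""
    rng = np.random.default_rng() if rng is None else rng
    x_c, x_p = np.asarray(x_c, float), np.asarray(x_p, float)
    alpha_p = np.linalg.norm(x_p - x_c)          # extent needed to reach the parent
    unit = (x_p - x_c) / alpha_p
    x_of = lambda a: x_c + a * unit              # Eq. (19)

    roots = []
    for g in constraints:                        # Eq. (20) for every constraint
        roots += find_roots_1d(lambda a, g=g: g(x_of(a)), search_factor * alpha_p)

    lower = [a for a in roots if 0.0 <= a <= alpha_p]
    upper = [a for a in roots if a >= alpha_p]
    alpha_v = max(lower) if lower else 0.0                        # Eq. (21)
    alpha_u = min(upper) if upper else search_factor * alpha_p    # Eq. (22)

    # Placeholder mapping: uniform placement inside (alpha_v, alpha_u); the
    # paper's IP-S distribution would be used here instead.
    return x_of(rng.uniform(alpha_v, alpha_u))

# Example: repair a point violating the unit-hypersphere constraint of Eq. (25) with o = 0.
g_sphere = lambda x: 1.0 - float(np.sum(np.asarray(x) ** 2))
x_repaired = repair_along_parent(x_c=[1.5, 1.5], x_p=[0.2, 0.1], constraints=[g_sphere])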
6.1 Illustrations on nonlinear constrained optimization
First, we consider the three unconstrained problems used in the previous sections as $f(\vec{x})$, but now add an inequality constraint that forces the solutions to fall within a radius of one unit from a chosen point $\vec{o}$:
$$\text{Minimize } f(\vec{x}), \quad \text{subject to } \sum_{i=1}^{n} (x_i - o_i)^2 \le 1. \quad (25)$$
There are no explicit variable bounds in the above problem. By choosing different locations of the center of the hyper-sphere ($\vec{o}$), we can create different scenarios for the resulting constrained optimum. If the minimum of the objective function (without constraints) lies at the origin, then by setting $o_i = 0$ the unconstrained minimum is also the solution to the constrained problem; this case is similar to the 'Optimum at the Center' setting (but now in the context of constrained optimization). The optimal solution is at $x_i = 0$ with $f^* = 0$. DE with IP-S and the previous parameter settings is applied to this new constrained problem, and the results from 50 different runs for this case are shown in Table 19.
As the next case, we consider $o_i = 2$. The constrained minimum is now different from that of the unconstrained problem, as the original unconstrained minimum is no longer feasible. This case is equivalent to 'Optimum on the Constraint Boundary'. Since the optimum value is not zero as before, instead of choosing a termination criterion based on the S value, we allocate a maximum of one million function evaluations per run and record the obtained optimized solution. The best fitness values for $f(\vec{x})$ as $f_{elp}$, $f_{sch}$ and $f_{ack}$ are shown in Table 20.
Table 19 Results for the test functions with DE for $o_i = 0$ with S = 10^-10
Strategy Best Median Worst
Felp 22,800 23,750 24,950
Fsch 183,750 206,000 229,150
Fack 42,800 44,250 45,500
Table 20 Best fitness results for the test functions with DE for $o_i = 2$
Felp: 640.93 0.00
Fsch: 8871.06 0.39
Fack: 6.56 0.00
For each function, we verified that the obtained optimized solution satisfies the KKT optimality conditions [5,22], suggesting that a truly optimal solution has been found by this procedure.
Next, we consider two additional nonlinear constrained optimization problems (TP5 and TP8) from [7] and a well-studied structural design and mechanics problem (Weld) taken from [21]. Details on the mechanics of the welded structure and the beam deformation can be found in [23,25]. These problems have multiple nonlinear inequality constraints, and our goal is to demonstrate the performance of the proposed constraint-handling methods. We used DE with the IP-S method and the following parameter settings: NP = 50, F = 0.7, CR = 0.5, and a distribution parameter of 1.2 for all three problems. A maximum of 200,000 function evaluations was allowed, and a termination criterion of 10^-3 from the known optimum was chosen. The problem definitions of TP5, TP8 and Weld are as follows:
TP5:
$$\text{Minimize } f(\vec{x}) = (x_1 - 10)^2 + 5(x_2 - 12)^2 + x_3^4 + 3(x_4 - 11)^2 + 10 x_5^6 + 7 x_6^2 + x_7^4 - 4 x_6 x_7 - 10 x_6 - 8 x_7,$$
subject to
$$g_1(\vec{x}) \equiv 2 x_1^2 + 3 x_2^4 + x_3 + 4 x_4^2 + 5 x_5 \le 127,$$
$$g_2(\vec{x}) \equiv 7 x_1 + 3 x_2 + 10 x_3^2 + x_4 - x_5 \le 282,$$
$$g_3(\vec{x}) \equiv 23 x_1 + x_2^2 + 6 x_6^2 - 8 x_7 \le 196,$$
$$g_4(\vec{x}) \equiv 4 x_1^2 + x_2^2 - 3 x_1 x_2 + 2 x_3^2 + 5 x_6 - 11 x_7 \le 0,$$
$$-10 \le x_i \le 10, \quad i = 1, \ldots, 7. \quad (26)$$
TP8:
$$\text{Minimize } f(\vec{x}) = x_1^2 + x_2^2 + x_1 x_2 - 14 x_1 - 16 x_2 + 2(x_9 - 10)^2 + 2(x_6 - 1)^2 + 5 x_7^2 + 7(x_8 - 11)^2 + 45 + (x_{10} - 7)^2 + (x_3 - 10)^2 + 4(x_4 - 5)^2 + (x_5 - 3)^2,$$
subject to
$$g_1(\vec{x}) \equiv 4 x_1 + 5 x_2 - 3 x_7 + 9 x_8 \le 105,$$
$$g_2(\vec{x}) \equiv 10 x_1 - 8 x_2 - 17 x_7 + 2 x_8 \le 0,$$
$$g_3(\vec{x}) \equiv -8 x_1 + 2 x_2 + 5 x_9 - 2 x_{10} \le 12,$$
$$g_4(\vec{x}) \equiv 3(x_1 - 2)^2 + 4(x_2 - 3)^2 + 2 x_3^2 - 7 x_4 \le 120,$$
$$g_5(\vec{x}) \equiv 5 x_1^2 + 8 x_2 + (x_3 - 6)^2 - 2 x_4 \le 40,$$
$$g_6(\vec{x}) \equiv x_1^2 + 2(x_2 - 2)^2 - 2 x_1 x_2 + 14 x_5 - 6 x_6 \le 0,$$
$$g_7(\vec{x}) \equiv 0.5(x_1 - 8)^2 + 2(x_2 - 4)^2 + 3 x_5^2 - x_6 \le 30,$$
$$g_8(\vec{x}) \equiv -3 x_1 + 6 x_2 + 12(x_9 - 8)^2 - 7 x_{10} \le 0,$$
$$-10 \le x_i \le 10, \quad i = 1, \ldots, 10. \quad (27)$$
Weld:
$$\text{Minimize } f(\vec{x}) = 1.10471\, h^2 l + 0.04811\, t b (14.0 + l),$$
subject to
$$g_1(\vec{x}) \equiv \tau(\vec{x}) \le 13{,}600,$$
$$g_2(\vec{x}) \equiv \sigma(\vec{x}) \le 30{,}000,$$
$$g_3(\vec{x}) \equiv h - b \le 0,$$
$$g_4(\vec{x}) \equiv P_c(\vec{x}) \ge 6{,}000,$$
$$g_5(\vec{x}) \equiv \delta(\vec{x}) \le 0.25,$$
$$0.125 \le (h, b) \le 5, \quad 0.1 \le (l, t) \le 10, \quad (28)$$
where
$$\tau(\vec{x}) = \sqrt{(\tau')^2 + (\tau'')^2 + \frac{l\,\tau'\tau''}{\sqrt{0.25\left(l^2 + (h + t)^2\right)}}},$$
$$\tau' = \frac{6{,}000}{\sqrt{2}\, h l},$$
$$\tau'' = \frac{6{,}000\,(14 + 0.5 l)\,\sqrt{0.25\left(l^2 + (h + t)^2\right)}}{2\left[0.707\, h l \left(l^2/12 + 0.25 (h + t)^2\right)\right]},$$
$$\sigma(\vec{x}) = \frac{504{,}000}{t^2 b}, \qquad \delta(\vec{x}) = \frac{2.1952}{t^3 b},$$
$$P_c(\vec{x}) = 64{,}746.022\,(1 - 0.0282346\, t)\, t b^3.$$
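To make the Weld formulation concrete, the sketch below codes the objective and the constraint quantities as written above; the variable order (h, l, t, b) and the function names are illustrative, and a design is deemed feasible when every inequality of Eq. (28) holds.

import math

def weld_objective(h, l, t, b):
    """Fabrication cost of the welded beam, as in Eq. (28)."""
    return 1.10471 * h ** 2 * l + 0.04811 * t * b * (14.0 + l)

def weld_quantities(h, l, t, b):
    """Return (tau, sigma, delta, Pc) used by constraints g1-g5 of Eq. (28)."""
    tau_p = 6000.0 / (math.sqrt(2.0) * h * l)                      # tau'
    r = math.sqrt(0.25 * (l ** 2 + (h + t) ** 2))
    tau_pp = (6000.0 * (14.0 + 0.5 * l) * r) / (
        2.0 * (0.707 * h * l * (l ** 2 / 12.0 + 0.25 * (h + t) ** 2)))  # tau''
    tau = math.sqrt(tau_p ** 2 + tau_pp ** 2 + l * tau_p * tau_pp / r)
    sigma = 504000.0 / (t ** 2 * b)
    delta = 2.1952 / (t ** 3 * b)
    pc = 64746.022 * (1.0 - 0.0282346 * t) * t * b ** 3
    return tau, sigma, delta, pc

def weld_is_feasible(h, l, t, b):
    """Check g1-g5 plus the variable bounds of Eq. (28)."""
    tau, sigma, delta, pc = weld_quantities(h, l, t, b)
    return (tau <= 13600.0 and sigma <= 30000.0 and h - b <= 0.0
            and pc >= 6000.0 and delta <= 0.25
            and 0.125 <= h <= 5.0 and 0.125 <= b <= 5.0
            and 0.1 <= l <= 10.0 and 0.1 <= t <= 10.0)

# Reported optimum from Table 21: (h, l, t, b) = (0.244, 6.219, 8.291, 0.244);
# weld_objective(0.244, 6.219, 8.291, 0.244) evaluates to roughly 2.38.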
The feasible region in the above problems is quite complex, unlike the hypercubes in problems with only variable bounds. Since our methods require a feasible initial population, we first identified a single feasible solution (referred to as the seed solution) and then generated several other random solutions. Amongst the randomly generated solutions, the infeasible ones were brought into the feasible region using the IP-S method with the feasible seed solution as the reference. The optimum results for TP5, TP8 and Weld found in this manner are shown in Table 21.
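A minimal sketch of this seeding-and-repair initialization, reusing the repair routine sketched earlier; the sampling box and the helper name seeded_initial_population are illustrative assumptions.

import numpy as np

def seeded_initial_population(seed, constraints, pop_size, low, high, rng=None):
    """Build a feasible initial population around a known feasible seed solution.

    seed        : a feasible point (array-like)
    constraints : list of callables g_j with g_j(x) >= 0 meaning 'satisfied'
    low, high   : arrays defining the random sampling box
    """
    rng = np.random.default_rng() if rng is None else rng
    seed = np.asarray(seed, float)
    population = [seed.copy()]
    while len(population) < pop_size:
        x = rng.uniform(low, high)
        if any(g(x) < 0.0 for g in constraints):
            # Infeasible sample: pull it back into the feasible region along the
            # direction toward the feasible seed (see repair_along_parent above).
            x = repair_along_parent(x_c=x, x_p=seed, constraints=constraints)
        population.append(x)
    return np.array(population)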
To verify the optimality of the obtained solutions, we employed the MATLAB sequential quadratic programming (SQP) toolbox with a termination criterion of 10^-6 to solve each of the three problems. The solutions obtained from the SQP method match our reported solutions, indicating that the proposed constraint-handling procedure is successful in solving the above problems.
Finally, the convergence plots of a sample run on these test problems are shown in Figs. 13, 14, and 15, respectively. The graphs show that the proposed method converges to the final solution quite effectively.
Table 21 Results from TP5, TP8 and Weld problems (optimum value f*, corresponding solution vector x*, and active constraints)
TP5: f* = 680.63, x* = (2.330, 1.953, -0.473, 4.362, -0.628, 1.035, 1.591)^T, active constraints: g1, g4
TP8: f* = 24.33, x* = (2.160, 2.393, 8.777, 5.088, 0.999, 1.437, 1.298, 9.810, 8.209, 8.277)^T, active constraints: g1 to g6
Weld: f* = 2.38, x* = (0.244, 6.219, 8.291, 0.244)^T, active constraints: g1 to g4
For each problem, the obtained solution also satisfies the KKT conditions.
Fig. 13 Convergence plot for TP5 (best fitness versus generations)
Fig. 14 Convergence plot for TP8 (best fitness versus generations)
7 Conclusions
The existing constraint-handling strategies that repair solutions by bringing them back into the search space exhibit several inadequacies. This paper has addressed the task of studying existing, and proposing two new, explicit feasibility-preserving constraint-handling methods for real-parameter optimization using evolutionary algorithms. The proposed single-parameter inverse parabolic methods, with stochastic and adaptive components, overcome limitations of existing methods and perform effectively on several test problems. Empirical comparisons on four benchmark test problems (Felp, Fsch, Fack, and Fros) with three different settings of the optimum relative to the variable boundaries
Fig. 15 Convergence plot for the Weld problem (best fitness versus generations)
revealed key insights into the performance of PSO, GAs and DE in conjunction with different constraint-handling strategies. It was noted that, in addition to the critical role of the constraint-handling strategy (which is primarily responsible for bringing infeasible solutions back into the search space), the generation scheme (child-creation step) in an evolutionary algorithm must create effective solutions in order to proceed efficiently. The Exponential and Inverse Parabolic methods were the most robust and never failed to solve any problem. The other constraint-handling strategies were either too deterministic and/or operated without utilizing sufficient useful information from the solutions. The probabilistic methods were able to bring solutions back into the feasible part of the search space and showed consistent performance. In particular, scale-up studies on four problems with up to 500 variables demonstrated sub-quadratic empirical run-time complexity with the proposed IP-S method.
Finally, the success of the proposed IP-S scheme is demonstrated on generalized nonlinear constrained optimization problems. For such problems, the IP-S method requires finding the lower and upper bounds of the feasible region along the direction of search by solving a series of root-finding problems. To the best of our knowledge, the proposed constraint-handling approach is the first explicit feasibility-preserving method that has demonstrated success on optimization problems with both variable bounds and nonlinear constraints. The approach is arguably general and can be applied to complex real-parameter search spaces (non-convex, discontinuous, etc.), in addition to problems with multiple conflicting objectives, multi-modality, dynamic objective functions, and other complexities. We expect that the proposed constraint-handling scheme will find utility in solving complex real-world constrained optimization problems using evolutionary algorithms. An interesting direction for the future would be to carry out one-to-one comparisons between evolutionary methods employing constraint-handling strategies and classical constrained optimization algorithms. Such studies shall benefit practitioners in optimization to compare and
evaluate different algorithms in a unified framework. In particular, further development of a robust, parameter-less, and explicit feasibility-preserving constraint-handling procedure can be attempted. Other probability distribution functions, as well as the utilization of information from other solutions in the population, can also be explored.
Acknowledgments Nikhil Padhye acknowledges past discussions with Dr. C. K. Mohan on swarm intelligence. Professor Kalyanmoy Deb acknowledges the support provided by the J. C. Bose National Fellowship, generously provided by the Department of Science and Technology (DST), Government of India.
References
1. Alvarez-Benitez, J.E., Everson, R.M., Fieldsend, J.E.: A MOPSO algorithm based exclusively on Pareto dominance concepts. EMO 3410, 459–473 (2005)
2. Arabas, J., Szczepankiewicz, A., Wroniak, T.: Experimental comparison of methods to handle boundary constraints in differential evolution. In: Parallel Problem Solving from Nature. PPSN XI, vol. 6239 of Lecture Notes in Computer Science, pp. 411–420. Springer, Berlin (2010)
3. Chu, W., Gao, X., Sorooshian, S.: Handling boundary constraints for particle swarm optimization in high-dimensional search space. Inform. Sci. 181(20), 4569–4581 (2011)
4. Datta, R., Deb, K.: A bi-objective based hybrid evolutionary-classical algorithm for handling equality constraints. In: Takahashi, R.H.C., Deb, K., Wanner, E.F., Greco, S. (eds.) Evolutionary Multi-Criterion Optimization, vol. 6576 of Lecture Notes in Computer Science, pp. 313–327. Springer, Berlin (2011)
5. Deb, K.: Optimization for Engineering Design: Algorithms and Examples. Prentice-Hall, New Delhi (1995)
6. Deb, K.: Simulated binary crossover for continuous search space. Complex Syst. 9, 115–148 (1995)
7. Deb, K.: An efficient constraint handling method for genetic algorithms. Comput. Methods Appl. Mech. Eng. 186(2-4), 311–338 (2000)
8. Deb, K.: Multi-Objective Optimization Using Evolutionary Algorithms. Wiley, New York (2002)
9. Deb, K., Agrawal, R.B.: Simulated binary crossover for continuous search space. Complex Syst. 9, 134 (1994)
10. Gandomi, A.H., Yang, X.-S.: Evolutionary boundary constraint handling scheme. Neural Comput. Appl. 21(6), 1449–1462 (2012)
11. Helwig, S., Branke, J., Mostaghim, S.M.: Experimental analysis of bound handling techniques in particle swarm optimization. IEEE Trans. Evol. Comput. 17(2), 259–271 (2013)
12. Helwig, S., Wanka, R.: Particle swarm optimization in high-dimensional bounded search spaces. In: Proceedings of the 2007 IEEE Swarm Intelligence Symposium, pp. 198–205
13. Jensen, P.A., Bard, J.F.: Operations Research Models and Methods. Wiley, New York (2003)
14. Michalewicz, Z.: A survey of constraint handling techniques in evolutionary computation methods. Evol. Program. 4, 135–155 (1995)
15. Michalewicz, Z., Janikow, C.Z.: Genocop: a genetic algorithm for numerical optimization problems with linear constraints. Commun. ACM 39(12es), 175 (1996)
16. Padhye, N., Branke, J., Mostaghim, S.: Empirical comparison of MOPSO methods: guide selection and diversity preservation. In: Proceedings of IEEE CEC, pp. 2516–2523 (2009)
17. Padhye, N.: Development of efficient particle swarm optimizers and bound handling methods. Master's thesis, IIT Kanpur, India (2010)
18. Padhye, N., Bhardawaj, P., Deb, K.: Improving differential evolution through a unified approach. J. Global Optim. 55, 771–799 (2013)
19. Padhye, N., Deb, K., Mittal, P.: Boundary handling approaches in particle swarm optimization. In: Proceedings of Bio-Inspired Computing: Theories and Applications (BIC-TA 2012), pp. 287–298 (2013)
20. Price, K.V., Storn, R.M., Lampinen, J.A.: Differential Evolution: A Practical Approach to Global Optimization. Springer, Heidelberg (2005)
21. Ragsdell, K.M., Phillips, D.T.: Optimal design of a class of welded structures using geometric programming. J. Manuf. Sci. Eng. 98(3), 1021–1025 (1976)
22. Reklaitis, G.V., Ravindran, A., Ragsdell, K.M.: Engineering Optimization Methods and Applications. Wiley, New York (1983)
23. Shigley, J.E.: Engineering Design. McGraw-Hill, New York (1963)
24. Storn, R., Price, K.: Differential evolution: a simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 11(4), 341–359 (1997)
25. Stephen, P., Gere, J.M., Prager, W.: Theory of elastic stability. J. Appl. Mech. 29, 220 (1962)
26. Wang, Y., Cai, Z.: Combining multiobjective optimization with differential evolution to solve constrained optimization problems. IEEE Trans. Evol. Comput. 16(1), 117–134 (2012)
27. Wang, Y., Cai, Z.: A dynamic hybrid framework for constrained evolutionary optimization. IEEE Trans. Syst. Man Cybern. Part B 42(1), 203–217 (2012)
28. Zoutendijk, G.: Methods of Feasible Directions. Elsevier, Amsterdam (1960)