Academic Editor: Antonino Laudani
Key Lab of Structures Dynamic Behavior and Control of the Ministry of Education, Harbin Institute of Technology, Harbin 150090, China
Received 5 May 2016; Revised 22 August 2016; Accepted 4 September 2016
This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1. Introduction
Multiobjective optimization is widely used in many practical engineering problems. Instead of a single optimal solution, a multiobjective optimization problem (MOOP) with conflicting subobjectives yields a set of compromise solutions, known as the Pareto optimal set [1]. Within the Pareto optimal set, corresponding to the Pareto front, no solution can be considered better than any other: none can be improved in one objective without degrading another [2]. There are two main ways to obtain the Pareto optimal set. The first introduces weight coefficients that convert the multiple objectives into a single objective. Only one optimal solution, corresponding to the chosen weights, is obtained per run, so multiple runs with varying weights are needed to build the solution set. Furthermore, this method cannot find Pareto optimal solutions for problems with a nonconvex Pareto front [3]. The second way obtains the Pareto optimal set in a single run and has received much attention in recent years. As the basis of decision-making, the Pareto optimal set gives the decision maker insight into the characteristics of the problem before a final solution is chosen.
For most multiobjective optimization methods that obtain the Pareto optimal set in a single run, attention is focused on avoiding local optima or on designing individual sorting and fitness assignments. One commonly ignored issue, however, is that the influences of different parameters on the model can be highly disparate, so spending substantial computation on the secondary parameters may be uneconomical. A wiser approach is to give higher priority to the parameters with significant influence on the optimization objectives (i.e., "parameters with high sensitivity").
In this study, a new strategy which combines the methods of parameter sensitivity analysis and multiobjective optimization is proposed. In the process of optimization, parameter sensitivity is updated in real time with no extra analysis sample and then guides optimization by setting the parameter priority.
The rest of this paper consists of the following. Section 2 presents the drawbacks of conventional methods. Section 3 describes the multiobjective optimization method and the parameter sensitivity analysis method. Furthermore, some improvements are made for these basic methods. Then, Section 4 introduces the integrated algorithm. To validate the effectiveness of the proposed algorithm, Section 5 examines a mathematical problem, an airship envelope optimization, and a truss topology optimization. Finally, Section 6 provides the concluding remarks.
2. Problem Representation
A MOOP is typically characterized by many optimization variables. Consider a multiobjective optimization model with five parameters. If each parameter is encoded with 8 binary bits, the gene is 40 bits long and the sample space reaches 2^40 ≈ 10^12. Evaluating this directly would require astronomical computational effort [4]; even 0.01% of the space amounts to 100 million individuals. However, if sensitivity analysis reveals that only three of the parameters significantly influence the model while the other two have little influence, higher priority can be allocated to the first three, and considerable time can be saved by reducing the effort spent on the insensitive parameters. In the extreme case where three parameters each have a sensitivity of 33.33% and the other two have 0%, the sample space shrinks to 2^24 ≈ 1.7×10^7. This reduced sample space translates into a significant improvement in optimization efficiency.
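The arithmetic above can be checked directly; the exponents are simply the bits per parameter times the number of parameters:

```python
# Sample-space sizes for the 5-parameter example, 8 bits per parameter.
full_space = 2 ** (8 * 5)      # all five parameters vary: 2^40
reduced_space = 2 ** (8 * 3)   # only the three sensitive parameters vary: 2^24

print(full_space)                   # 1099511627776, i.e. about 1.1e12
print(reduced_space)                # 16777216, i.e. about 1.7e7
print(full_space // reduced_space)  # 65536: the space shrinks by a factor of 2^16
```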
To consider parameter sensitivity in optimization process, a traditional approach is to conduct a sensitivity analysis at the first step, then to ignore insensitive parameters and retain the sensitive parameters for optimization. However, this approach has two obvious defects:
(1) Complicated and Duplicated Calculations. A large number of samples is needed in global sensitivity analysis to obtain accurate parameter sensitivities. Analyzing these samples is time-consuming, yet they cannot be reused in the subsequent optimization: once the sensitivity analysis is complete, the samples are discarded. If the samples used for sensitivity analysis could be shared with the optimization process, considerable computing time would be saved.
(2) Single Choice. In the traditional approach, the answer to whether a parameter should be treated as an optimization variable is either "yes" or "no." For parameters with low but nonnegligible sensitivity, there is no intermediate answer. Another situation arises in multiobjective models when some parameters are very sensitive to objective function A but insensitive to objective function B. To reduce the computational effort, the two objectives would then have to be analyzed with separate, objective-specific models. This is a poor choice, not only because it is complex, but also because the Pareto optimal solutions spanning all objectives cannot be obtained. Creating two or more models during optimization should be avoided, since the modifications may unknowingly introduce fatal differences.
In order to eliminate the above defects, a new strategy which integrates sensitivity analysis and optimization is proposed. In this strategy, the sample resources are shared between the sensitivity analysis and the optimization. Meanwhile, results of sensitivity analysis can be directly used to guide the optimization process.
3. Basic Analysis Methods
3.1. Optimization Method
3.1.1. Strength Pareto Evolutionary Algorithm (SPEA)
Evolutionary algorithms (EA), which are stochastic search optimization algorithms based on the idea of biological evolution, are widely used and well suited to finding the global optimum of a MOOP [5-7]. The genetic algorithm (GA) [8], which simulates natural selection and survival of the fittest, is a typical EA. Many algorithms have been proposed on the basis of the GA, such as the Vector Evaluated Genetic Algorithm (VEGA) [9], Multiobjective Genetic Algorithm (MOGA) [10], Nondominated Sorting Genetic Algorithm (NSGA) [11], Niched Pareto Genetic Algorithm (NPGA) [12], Strength Pareto Evolutionary Algorithm (SPEA) [13], Nondominated Sorting Genetic Algorithm II (NSGA-II) [14], Pareto Envelope-based Selection Algorithm (PESA) [15], Pareto Archived Evolution Strategy (PAES) [16], and Micro-Genetic Algorithm (Micro-GA) [17]. The core of these methods is the GA; they differ primarily in the selection mechanism and fitness evaluation [18]. Apart from EA, swarm intelligence [19], derived from the concept of social and collective behavior, has emerged recently and mainly includes Ant Colony Optimization (ACO) [20, 21] and Particle Swarm Optimization (PSO) [22-24].
In this paper, the SPEA is employed; it has been recommended as one of the most efficient multiobjective evolutionary algorithms [25, 26]. The characteristic feature of this algorithm is an external population P′ that stores the nondominated solutions among all solutions considered so far. The nondominated solutions over the entire search space constitute the Pareto optimal set. The basic steps of the algorithm are shown schematically in Figure 1 and can be summarized as follows: initialization; updating the external set; fitness assignment; selection; crossover and mutation; termination [27].
Figure 1: Flowchart of SPEA.
[figure omitted; refer to PDF]
The first key technique of the SPEA is the clustering calculation performed while updating the external population. The nondominated solutions of each generation are stored in the external population, whose size must be limited to keep it from growing without bound during the iteration. When the size of the external population exceeds the limit capacity N, an elimination strategy based on fuzzy cluster analysis is initiated: first the distances between clusters in the external nondominated set are computed, then the two closest clusters are merged into one. This is repeated until the number of clusters falls below the limit capacity N. Each time two clusters are merged, the distances to the remaining clusters are recalculated. The computational complexity is O(mn²), where n is the actual number of individuals in the external solution set and m is the number of individuals above the limit capacity.
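The clustering-based truncation described above can be sketched as follows. This is a minimal illustration rather than the authors' implementation: clusters are merged by average linkage until the archive fits the capacity N (`capacity` below), and the member nearest each cluster centroid is kept as its representative.

```python
import itertools
import math

def avg_distance(c1, c2):
    """Average objective-space distance between two clusters of points."""
    return sum(math.dist(a, b) for a in c1 for b in c2) / (len(c1) * len(c2))

def truncate_by_clustering(archive, capacity):
    """Reduce `archive` (a list of objective vectors) to `capacity` points by
    repeatedly merging the closest pair of clusters (average linkage), then
    keeping one representative per cluster: the member closest to its centroid."""
    clusters = [[p] for p in archive]
    while len(clusters) > capacity:
        # find the pair of clusters with minimum average inter-cluster distance
        i, j = min(itertools.combinations(range(len(clusters)), 2),
                   key=lambda ij: avg_distance(clusters[ij[0]], clusters[ij[1]]))
        clusters[i].extend(clusters[j])
        del clusters[j]
    reps = []
    for c in clusters:
        centroid = [sum(xs) / len(c) for xs in zip(*c)]
        reps.append(min(c, key=lambda p: math.dist(p, centroid)))
    return reps

# Two crowded regions, capacity 2: one representative survives from each.
print(truncate_by_clustering([(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)], 2))
```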
The other key issue, the calculation of individual fitness, is handled as follows. First, the fitness of an external nondominated individual is defined as the fraction of population individuals it covers [3], expressed as a real value in (0,1) (also called strength) [13]. The fitness of every other individual in the population P is then defined as the sum of the strengths of all external Pareto solutions that cover it. This fitness assignment maintains diversity in both the dominated and the nondominated set [3].
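A minimal sketch of this strength/fitness assignment for a minimization problem; the normalization by n + 1 follows the original SPEA paper, and the function names are illustrative, not the authors' code:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def spea_fitness(external, population):
    """Strength of each external member: fraction of the population it covers.
    Fitness of each population member: sum of the strengths of the external
    members that cover it (lower is better for minimization)."""
    n = len(population)
    strength = [sum(dominates(e, p) for p in population) / (n + 1)
                for e in external]
    fitness = [sum(s for e, s in zip(external, strength) if dominates(e, p))
               for p in population]
    return strength, fitness

# The single archive point covers both population members: strength 2/3,
# and each covered member inherits that strength as its fitness.
strength, fitness = spea_fitness([(0.0, 0.0)], [(1.0, 1.0), (2.0, 2.0)])
print(strength, fitness)
```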
A tournament selection mechanism chooses the individuals that take part in the evolution from both the external population P′ and the evolutionary population P. Crossover and mutation are then performed to generate the next generation, or the optimization exits if the generation counter exceeds the defined value.
3.1.2. Improvement of SPEA
The elimination strategy based on fuzzy cluster analysis is initiated when the number of clusters exceeds the limit capacity N. In this process, a removed individual cannot be recycled into the external nondominated set, yet some optimal individuals generated in later generations may be worse than those removed in earlier generations. As a result, the Pareto front can retreat [18]. A simple example of Pareto retreat is shown in Figure 2. In an earlier generation, individual B is removed because it is too close to individual A. In the next generation, a new individual B′ joins the nondominated set, although B′ is dominated by B; if B still existed, B′ would never have had the chance to become an optimal solution. In this way the Pareto front retreats; that is, it can no longer be guaranteed that the individuals of the external nondominated set are optimal solutions. This violates the concept of optimality and is more pronounced in linear models with a small external population capacity.
Figure 2: Recession of the Pareto front.
[figure omitted; refer to PDF]
To prevent this phenomenon, the SPEA is modified slightly: (1) the external nondominated set is kept sorted by one of the objective functions, maintained by binary search (dichotomy); (2) a mapped collection of cluster distances is used to avoid duplicate distance calculations; (3) a backup collection of external nondominated solutions recycles the individuals removed because of overcrowding. This is similar to the ε-Pareto front selection strategy proposed by Laumanns et al. [28]. With these improvements, removed individuals have the chance to return to the external nondominated set, guaranteeing that the Pareto front does not retreat. Furthermore, the time complexity decreases to O(n log n), where n is the size of the external nondominated set.
3.2. Sensitivity Analysis Method
3.2.1. Evaluation of Traditional Methods
Sensitivity analysis is used to qualitatively, or quantitatively, evaluate the influence of parameters on the output variables [29]. Multiparameter sensitivity analysis methods include local sensitivity analysis and global sensitivity analysis. Local sensitivity analysis obtains the influence of a parameter on the output variable by changing the parameter while other parameters remain unchanged. The essence of this method is single-parameter sensitivity analysis, which does not consider the correlation between parameters. What is more, the result is unstable for a nonlinear model. For the global sensitivity analysis, all the input parameters are varied to obtain the output variable; that is to say, correlation between parameters is considered. The traditional global sensitivity analysis methods include multivariate regression [30], Morris's method [31], Sobol's method [32], Fourier Amplitude Sensitivity Analysis [33], and Extended Fourier Amplitude Sensitivity Analysis [34].
However, these existing global sensitivity analysis methods cannot be directly embedded in optimization process. The main reasons are as follows:
(1) Requirement for samples: randomness and unbiasedness are two basic properties of the samples used for traditional sensitivity analysis methods. But the optimization process, based on the GA, can only provide biased samples which are tending towards the optimal set.
(2) Requirement for parameters: the analysis parameters of the traditional methods must follow certain rules. Assuming the sensitivities of parameters x1, x2, x3 to a function f are analyzed by Sobol's method, the combined parameter sets (x1, x2, x3); (x1′, x2, x3); (x1, x2′, x3); and (x1, x2, x3′) must be extracted from two sample groups, namely (x1, x2, x3) and (x1′, x2′, x3′). In the optimization process, however, the parameters (optimization variables) of each generation cannot satisfy this structure.
(3) Time-consumption: take Sobol's method as an example; two sample groups would include thousands of parameters, which implies thousands of combined parameters are needed to determine parameter sensitivity.
In mathematical statistics, parameter sensitivity can be considered to reflect the correlation between the input parameters and output variables. Therefore, the correlation concept is applied in sensitivity analysis to overcome the above disadvantages.
3.2.2. Rank Correlation Coefficient
The correlation coefficient can be used for sensitivity analysis, as noted in the literature [35, 36]. Related measures include the linear correlation coefficient, which operates on the raw variable values, and the rank correlation coefficient, which operates on the variable order (also called the sequential correlation coefficient).
In the present investigation, the Spearman Rank Correlation Coefficient (SRCC) is used. The concept of the SRCC is inherited from the Pearson product-moment correlation coefficient (PMCC) [37]. In statistics, both are frequently used to analyze the correlation between an input variable X and an output variable Y. For the PMCC, the X-Y pairs must follow a normal distribution, an assumption that does not hold for each generation of optimization variables. The SRCC instead computes the correlation coefficient from the parameter ranks rather than from the raw values as the PMCC does. This operation, called rank transformation, linearizes monotonic nonlinear relationships between variables and reduces the effect of extreme values. It converts the sensitivity measure from one of linearity to one of monotonicity and is widely used in parameter screening and sensitivity analysis of model output [35]. Furthermore, the sample distribution has no influence on the SRCC result [36], which makes the SRCC feasible to compute from optimization variables.
Invariance under monotonically increasing transformations and robustness are two important characteristics of the rank correlation coefficient [38]. The invariance means that the value of the rank correlation coefficient is unchanged by any transformation, linear or nonlinear, as long as it is monotonically increasing. This differs from the linear correlation coefficient, whose value is stable only under increasing linear transformations. Assume two variables x, y on [0,1] that are uniformly distributed and satisfy x = y; both the linear and the rank correlation coefficients are 1.0. If samples are extracted and transformed according to x1 = x and y1 = y³, the linear correlation coefficient changes to 0.8985 while the rank correlation coefficient remains 1.0. Robustness means strong resistance to outliers: the method reproduces its predictions (e.g., the order of importance of the input parameters) when the analysis is repeated on different samples of the same population [36].
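This invariance is easy to reproduce numerically. The sketch below uses a deterministic grid rather than the random samples behind the 0.8985 figure quoted above, so the linear coefficient differs slightly, but the qualitative picture is the same (pure-Python helpers, no tied values assumed):

```python
def pearson(x, y):
    """Linear (Pearson) correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def ranks(v):
    """Ascending ranks, 1-based; assumes no ties."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    for rank, i in enumerate(order, 1):
        r[i] = float(rank)
    return r

def spearman(x, y):
    """Rank correlation: Pearson coefficient applied to the ranks."""
    return pearson(ranks(x), ranks(y))

x = [i / 100 for i in range(1, 101)]   # uniform grid on (0, 1]
y = [v ** 3 for v in x]                # monotonically increasing, nonlinear

print(pearson(x, y))   # strictly less than 1: the cubic bends the relationship
print(spearman(x, y))  # 1.0 (up to rounding): the ranks are untouched
```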
Rank is defined as the ascending (or descending) sort position of the raw values. If two values tie, the average of their positions is adopted. Table 1 gives a simple example of ranking.
Table 1: An example of ranking.
Raw Xi | Rank | Final rank xi |
9.1 | 1 | 1 |
7.0 | 2 | ( 2 + 3 ) / 2 = 2.5 |
2.6 | 4 | 4 |
7.0 | 3 | ( 2 + 3 ) / 2 = 2.5 |
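The average-rank rule of Table 1 can be written as a short routine (a hypothetical helper, shown for illustration):

```python
def average_ranks(values, descending=True):
    """Rank values (1 = first after sorting); tied values share the average
    of the positions they would otherwise occupy, as in Table 1."""
    order = sorted(range(len(values)), key=lambda i: values[i],
                   reverse=descending)
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + 1 + j + 1) / 2          # average of positions i+1 .. j+1
        for k in order[i:j + 1]:
            ranks[k] = avg
        i = j + 1
    return ranks

print(average_ranks([9.1, 7.0, 2.6, 7.0]))  # [1.0, 2.5, 4.0, 2.5]
```

The output reproduces the "Final rank" column of Table 1: the two tied values 7.0 would occupy positions 2 and 3, so both receive (2 + 3)/2 = 2.5.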
Assume random variables X = {X1, X2, X3, ..., Xn} and Y = {Y1, Y2, Y3, ..., Yn}. The SRCC, denoted ρs, is then given by
ρs = ∑i=1..n (xi − x̄)(yi − ȳ) / [∑i=1..n (xi − x̄)² · ∑i=1..n (yi − ȳ)²]^(1/2),
where xi and yi are the ranks of the original values Xi and Yi, and x̄ = (1/n)∑i=1..n xi, ȳ = (1/n)∑i=1..n yi.
3.2.3. Evaluation of SRCC
Under the average-rank assignment strategy, the SRCC must be recalculated whenever a new individual is generated, which becomes a heavy burden for tens of thousands of analyzed individuals. An alternative is to place each new individual into the sorted rank list by dichotomy (binary search) instead of applying the average-rank strategy to the duplicated variables, reducing the time complexity from O(n) to O(log n), where n is the number of analyzed individuals.
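One way to realize this dichotomy idea (an interpretation, since the paper does not give details) is to keep the already-analyzed values sorted and place each new individual by binary search; here ties are resolved by insertion position rather than by the average rank:

```python
import bisect

def insertion_rank(sorted_values, new_value):
    """Rank a new individual against an already-sorted sample using binary
    search (O(log n) comparisons) instead of re-ranking the whole set."""
    pos = bisect.bisect_left(sorted_values, new_value)
    bisect.insort(sorted_values, new_value)  # keep the sample sorted
    return pos + 1                           # 1-based ascending rank

sample = [2.6, 7.0, 9.1]
print(insertion_rank(sample, 8.0))  # 3: 8.0 ranks third in ascending order
print(sample)                       # [2.6, 7.0, 8.0, 9.1]
```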
The value of SRCC ranges from -1.0 to 1.0, as shown in Figure 3. Greater absolute value of SRCC means greater influence of x on y.
Figure 3: Relationship of SRCC value versus variable distribution.
[figure omitted; refer to PDF]
Another characteristic of SRCC is that all the straight lines with different slopes have the same SRCC value, which is 1.0 or -1.0, as shown in Figure 4.
Figure 4: SRCC value of straight lines.
[figure omitted; refer to PDF]
According to this SRCC characteristic, the two straight lines with different slopes in Figure 5 have the same sensitivity. In fact, when x1 and x2 change by the same amount, the Δy of the two curves differs, so by the principle of single-parameter sensitivity analysis the influences of x1 and x2 on y are different. On the surface, then, the SRCC cannot identify the true sensitivity. Fortunately, this situation occurs only in models with a single input parameter. For bivariate or multivariate models, the relationship between an insensitive input parameter x and the output variable y is strongly perturbed by the other, sensitive parameters. In other words, the relationship between x and y looks like Figure 6, rather than a straight line of slope 0 corresponding to SRCC = 1.0.
Figure 5: Curves with different slope.
[figure omitted; refer to PDF]
Figure 6: Output y versus insensitive parameter x.
[figure omitted; refer to PDF]
Sensitivity based on the SRCC is determined as follows: first, an input parameter matrix is generated by random sampling; second, the input parameters are varied simultaneously and the corresponding output variables are obtained; third, the influence of the input parameters on the output variables is analyzed statistically. The essence of this method is therefore global, not local, sensitivity analysis.
4. The Integrated Algorithm
Crossover probability (pc) and mutation probability (pm) are the two basic evolutionary parameters in an EA. Smaller values of these parameters imply a smaller probability of generating new genes [39, 40]. In the integrated algorithm SRCC-SPEA, the results of the sensitivity analysis guide the activity of the variables in the optimization process by modifying pc and pm.
For an optimization model, which has four variables (A,B,C,D) and three objectives (f1 ,f2 ,f3 ), the modified pc and pm are obtained as follows:
(1) Extract the normalized sensitivities, such that the variable sensitivities for each objective sum to 1.0, as listed in Table 2.
(2) Sum the sensitivities of each variable over the objectives, so that the influence of a variable on all the objectives is considered.
(3) It is well known that the evolutionary parameters should not be too large, to avoid nonconvergence of the optimization process [39]. The correction coefficients are therefore normalized with respect to the most sensitive variable, so that the evolutionary parameters never exceed their defined values.
Table 2: A brief example of modifying evolutionary parameters.
Variables | [...]Sensitivity | Correction coefficient | Evolutionary parameters | ||||
f 1 | f 2 | f 3 | Sum | p c | p m | ||
A | 0.56 | 0.52 | 0.36 | 1.44 | 1.00 | 0.400 | 0.020 |
B | 0.13 | 0.30 | 0.32 | 0.75 | 0.52 | 0.208 | 0.010 |
C | 0.25 | 0.09 | 0.23 | 0.57 | 0.40 | 0.160 | 0.008 |
D | 0.06 | 0.09 | 0.09 | 0.24 | 0.17 | 0.068 | 0.003 |
Assume the original global pc and pm are 0.400 and 0.020, respectively. The actual probability for each variable is obtained by multiplying pc and pm by its correction coefficient, as reported in Table 2. The most sensitive variable is A, whose correction coefficient is 1.0, so its evolutionary parameters keep their defined values. Variables with low sensitivity are assigned low evolutionary parameters and hence smaller opportunities in the subsequent genetic evolution. More of the optimization computation is thereby allocated to the variables with higher sensitivity, which effectively accelerates the optimization process.
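The three steps and Table 2 can be reproduced with a few lines. Note that the correction coefficients are rounded to two decimals before scaling pc and pm, which is what makes the resulting values match the table; the function name is illustrative:

```python
def evolutionary_params(sensitivities, pc=0.400, pm=0.020):
    """Per-variable crossover/mutation probabilities from normalized
    sensitivities (one tuple of per-objective values per variable): sum over
    objectives, normalize by the most sensitive variable, scale pc and pm."""
    sums = {v: sum(s) for v, s in sensitivities.items()}
    s_max = max(sums.values())
    out = {}
    for v, s in sums.items():
        coeff = round(s / s_max, 2)  # correction coefficient, as in Table 2
        out[v] = (coeff, round(pc * coeff, 3), round(pm * coeff, 3))
    return out

# Sensitivities of Table 2: variables A-D against objectives f1-f3.
table2 = {"A": (0.56, 0.52, 0.36), "B": (0.13, 0.30, 0.32),
          "C": (0.25, 0.09, 0.23), "D": (0.06, 0.09, 0.09)}
# Each variable maps to (correction coefficient, pc, pm), reproducing Table 2.
print(evolutionary_params(table2))
```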
In the improved algorithm SRCC-SPEA, the individuals produced by the optimization process serve as the source samples for sensitivity analysis, and the SRCC results set the optimization priority of the variables. When the sensitivity analysis sample is small, the SRCC can be unstable and deviate far from the true values [36], so it must stabilize before being allowed to guide the optimization. The SRCC can safely be considered stable when the deviation over 5 consecutive generations is less than 5%. Figure 7 depicts the process of the SRCC-SPEA.
Figure 7: Process flowchart for integrated algorithm.
[figure omitted; refer to PDF]
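The stability test above can be sketched as follows. The paper does not state how the "deviation" is measured; here it is interpreted, as an assumption, as the relative spread of the SRCC estimate over the last five generations:

```python
def srcc_stable(history, window=5, tol=0.05):
    """True once the SRCC estimate has settled: the relative spread over the
    last `window` consecutive generations stays below `tol` (default 5%)."""
    if len(history) < window:
        return False
    recent = history[-window:]
    lo, hi = min(recent), max(recent)
    return hi != 0 and (hi - lo) / abs(hi) < tol

# Early generations fluctuate; the last five vary by about 4% of the maximum.
print(srcc_stable([0.90, 0.60, 0.50, 0.49, 0.50, 0.51, 0.50]))  # True
print(srcc_stable([0.1, 0.5, 0.9, 0.6, 0.3]))                   # False
```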
5. Case Studies
Three cases are carried out to verify the practical applicability and superiority of the integrated algorithm SRCC-SPEA.
5.1. Mathematical Problem
In this section, a mathematical problem is used to verify the accuracy of optimization method SRCC-SPEA and sensitivity analysis method SRCC. The mathematical problem is defined as [figure omitted; refer to PDF]
All the evolutionary parameters are listed in Table 3.
Table 3: The evolutionary parameters.
Number of each generation | Number of external population | Max generation | Cross probability | Mutation probability |
50 | 50 | 50 | 0.40 | 0.02 |
The theoretical solution set of this model in the first quadrant is f1² + f2² = 1, f1, f2 ∈ (0, 1). As Figure 8 shows, the Pareto optimal set obtained by SRCC-SPEA agrees well with the theoretical result.
Figure 8: Comparison of Pareto optimal set between SRCC-SPEA and theory.
[figure omitted; refer to PDF]
Figure 9 compares the parameter sensitivities obtained by the SRCC and by Sobol's method. The values given by the two methods are close and the sensitivity ranking agrees, demonstrating the effectiveness of the SRCC as the sensitivity analysis used to guide the optimization. The final data show that x is more sensitive than y: the correction coefficients of x and y are 1.0 and 0.19, so a higher evolution priority is allocated to x.
Figure 9: Comparison of parameter sensitivity between SRCC and Sobol's.
[figure omitted; refer to PDF]
5.2. Optimization of an Airship Envelope
A multiobjective optimization model of an airship envelope, whose geometry is shown in Figure 10, is proposed in this part.
Figure 10: Profile of a biaxial ellipsoid envelope.
[figure omitted; refer to PDF]
5.2.1. Variables and Constraints
The optimization variables are listed in Table 4, in which t, P, and E represent the material thickness, differential pressure, and elastic modulus, respectively. In addition to the bounds on the optimization variables, the envelope volume, which is directly related to the payload capacity and the maximum operating altitude [41], is fixed at 12.6 × 10^4 m³. The value of b is calculated from the values of a1 and a2.
Table 4: The optimization variables.
Optimization variables | Reference value | Constraints |
a 1 /m | 59.5 | 55.0-64.0 |
a 2 /m | 84.0 | 80.0-88.0 |
P /Pa | 400 | 350-450 |
t /mm | 0.20 | 0.10-0.30 |
E /GPa | 12.0 | 11.0-13.0 |
5.2.2. Objectives
The objective function is expressed as min[...](f1 ,f2 ,f3 ), where f1 , f2 , and f3 represent volume of the material, structural strain energy, and maximum envelope stress, respectively. With the fixed volume density, which is 550 kg/m3 , volume of the material reflects the envelope self-weight. Structural strain energy and maximum envelope stress are defined to indicate the stiffness and ultimate strength of the envelope. The value of evolutionary parameters refers to Table 3.
5.2.3. Results
The three-dimensional Pareto optimal set of the fiftieth generation is shown in Figure 11. To further clarify the advantage of SRCC-SPEA, Figure 12 shows the Pareto optimal set in the (f1-f3) plane and Figure 13 shows the optimal parameters (t-E) corresponding to the Pareto optimal set. The Pareto optimal set of SRCC-SPEA is clearly more uniform than that of SPEA after the 50th iteration, indicating that SPEA needs more iteration steps to reach a comparable result.
Figure 11: Pareto optimal set of SRCC-SPEA and SPEA.
[figure omitted; refer to PDF]
Figure 12: Pareto optimal set (f1 -f3 ) of SRCC-SPEA and SPEA.
[figure omitted; refer to PDF]
Figure 13: Optimal parameters (t-E) of SRCC-SPEA and SPEA.
[figure omitted; refer to PDF]
The time consumption, which depends on the computer capacity, is about 5 hours for both SPEA and SRCC-SPEA. However, SRCC-SPEA additionally yields the sensitivities of the optimization variables, whose correction coefficients are listed in Table 5. Theoretically, the variables P and E have no influence on the volume of material f1; their sensitivities obtained by the SRCC are 0.90% and 0.81%, which are close to zero as expected. The most sensitive variable is the material thickness t, so the optimization computation concentrates on the evolution of t and less genetic opportunity is given to the other variables.
Table 5: Sensitivities and correction coefficients of the optimization variables.
| f 1 | f 2 | f 3 | Correction coefficient |
a 1 | 7.47% | 8.48% | 7.08% | 0.10 |
a 2 | 2.94% | 0.59% | 0.89% | 0.02 |
P | 0.90% | 17.36% | 9.17% | 0.12 |
t | 87.87% | 63.51% | 72.89% | 1.00 |
E | 0.81% | 10.05% | 9.96% | 0.09 |
5.3. Truss Topology Optimization
In this section, the proposed algorithm is demonstrated on a truss topology optimization with discrete variables. The truss, simply supported at both ends (see Figure 14), has a length of 3 m and a regular-triangle cross section with a side length of 0.3 m. The cross-sectional area of each rod is 0.006 m². A concentrated force of 900 N is applied at the middle of the top chord, and the material density is set as 1700 kg/m³.
Figure 14: Schematic diagram of the truss.
[figure omitted; refer to PDF]
The optimization variables are the truss segment number N, the truss type TYPE, and the nominal length of each segment chord. N takes the values 6, 7, 8, and 9. TYPE, whose values are listed in Table 6, describes the difference in segment number between the three chords (A, B, C). The nominal lengths of the segment chords are stored in three arrays A(i), B(i), and C(i), i = 1, 2, 3, ..., 9; the actual length of each chord segment is determined by the proportional weight of its nominal length. The total number of optimization variables is 29.
Table 6: The value of optimization variable TYPE.
TYPE value | 0 | 1 | 2 | 3 |
Segments in chord A | N | N + 1 | N | N |
Segments in chord B | N | N | N + 1 | N |
Segments in chord C | N | N | N | N + 1 |
Two optimal objectives are minimizing deformation (f1 ) and minimizing weight (f2 ). The evolutionary parameters are listed in Table 7.
Table 7: The evolutionary parameters.
Number of each generation | Number of external population | Max generation | Cross probability | Mutation probability |
60 | 60 | 1000 | 0.40 | 0.02 |
Figure 15 displays the optimization processes of SRCC-SPEA and SPEA. For SRCC-SPEA, the Pareto optimal set changes little after the 500th iteration, whereas for SPEA a deviation still exists between the 750th and the 1000th iteration. The termination generation of SRCC-SPEA can therefore be set to 500, saving considerable time.
Figure 15: Optimization process of SRCC-SPEA and SPEA.
[figure omitted; refer to PDF]
Sensitivities of the 29 optimization variables to both objectives are also obtained, as listed in Table 8. The two variables most sensitive to the weight f2 are N and TYPE, and the truss segment number N has the strongest influence on the deformation f1. The nominal segment lengths along chord A influence the deformation more than those of chords B and C. After the modification, more of the optimization computation focuses on the evolution of N.
Table 8: Sensitivities and correction coefficients of the optimization variables.
Variables | f 1 | f 2 | Correction coefficient |
N | 13.63% | 18.95% | 1.00 |
TYPE | 5.03% | 10.13% | 0.47 |
A 1 | 0.19% | 2.00% | 0.07 |
A 2 | 1.28% | 5.14% | 0.20 |
A 3 | 11.52% | 6.66% | 0.56 |
A 4 | 3.23% | 0.95% | 0.13 |
A 5 | 9.89% | 3.78% | 0.42 |
A 6 | 12.58% | 0.19% | 0.39 |
A 7 | 6.51% | 1.32% | 0.24 |
A 8 | 3.44% | 1.06% | 0.14 |
A 9 | 3.76% | 2.20% | 0.18 |
B 1 | 0.30% | 4.05% | 0.13 |
B 2 | 0.59% | 4.08% | 0.14 |
B 3 | 2.56% | 2.03% | 0.14 |
B 4 | 3.04% | 2.64% | 0.17 |
B 5 | 0.08% | 1.18% | 0.04 |
B 6 | 1.09% | 1.21% | 0.07 |
B 7 | 3.17% | 0.18% | 0.10 |
B 8 | 0.81% | 3.14% | 0.12 |
B 9 | 3.09% | 2.01% | 0.16 |
C 1 | 2.30% | 2.79% | 0.16 |
C 2 | 2.20% | 3.88% | 0.19 |
C 3 | 2.49% | 5.85% | 0.26 |
C 4 | 1.08% | 1.10% | 0.07 |
C 5 | 0.61% | 0.67% | 0.04 |
C 6 | 1.99% | 0.63% | 0.08 |
C 7 | 1.83% | 7.69% | 0.29 |
C 8 | 1.29% | 0.73% | 0.06 |
C 9 | 0.43% | 3.76% | 0.13 |
A higher number of optimization variables implies a greater benefit from reallocating the optimization computation, and hence a more obvious advantage for the integrated algorithm SRCC-SPEA. That is why SRCC-SPEA obtains an evenly distributed Pareto optimal set much more quickly than SPEA on the truss topology problem with 29 variables. Moreover, conducting a separate sensitivity analysis of 29 variables with a traditional global method would require considerable computational effort, whereas the time consumption of SRCC-SPEA was almost equal to that of SPEA, so substantial computing time is saved.
6. Conclusions
In this paper, a novel integrated algorithm, SRCC-SPEA, was proposed based on improvements to the optimization method SPEA and the sensitivity analysis method SRCC. The fuzzy-cluster-analysis elimination strategy of SPEA is improved to avoid retreat of the Pareto front and to reduce the time complexity, and a dichotomy (binary-search insertion) replaces the average-rank strategy for SRCC rank assignment, further reducing the time complexity.
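The dichotomy-based rank assignment can be sketched as follows; the function name and the tie-handling convention (ties take the rank of the first equal element) are illustrative simplifications, not the paper's exact procedure.

```python
from bisect import bisect_left, insort

def dichotomy_ranks(values):
    """Assign ranks by binary-search insertion into a sorted list.
    The binary search keeps the comparison count at O(n log n),
    in contrast to an O(n^2) pairwise comparison scheme (note that
    Python list insertion still shifts elements, costing O(n) moves
    per insert, so this sketch only saves comparisons)."""
    ordered = []
    for v in values:
        insort(ordered, v)
    # Rank = 1 + number of strictly smaller elements.
    return [bisect_left(ordered, v) + 1 for v in values]
```

For example, `dichotomy_ranks([3.0, 1.0, 2.0])` yields `[3, 1, 2]`.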
In contrast with the traditional evolutionary algorithm SPEA, the characteristics of SRCC-SPEA can be summarized as follows: (1) based on the results of the sensitivity analysis, SRCC-SPEA effectively improves the survival competitiveness of sensitive variables by adjusting the evolutionary parameters; (2) SRCC-SPEA obtains the parameter sensitivities and the Pareto optimal set simultaneously in a single run without extra samples, because the individuals of the optimization serve as samples for the sensitivity analysis; great computational cost is thus saved compared with conducting sensitivity analysis and optimization separately; (3) SRCC-SPEA obtains an evenly distributed Pareto optimal set more quickly than SPEA, and this advantage is more pronounced for optimization models with more variables.
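Point (1), guiding the evolution via the evolutionary parameters, could for instance mean scaling each variable's mutation probability by its correction coefficient, so that sensitive variables are perturbed more often. The sketch below shows one such hypothetical scheme; the paper's exact parameter mapping may differ.

```python
import random

def sensitivity_scaled_mutation(individual, bounds, base_rate, correction):
    """Per-variable Gaussian mutation: variable i mutates with probability
    base_rate * correction[i], so variables with correction 1.00 (e.g., N in
    Table 8) evolve most actively while insensitive ones are rarely touched.
    bounds is a list of (lo, hi) pairs; mutated values are clipped to bounds."""
    child = list(individual)
    for i, (lo, hi) in enumerate(bounds):
        if random.random() < base_rate * correction[i]:
            step = random.gauss(0.0, 0.1 * (hi - lo))  # scale step to the range
            child[i] = min(hi, max(lo, child[i] + step))
    return child
```

With this design the total mutation budget is automatically redistributed toward sensitive variables, which is one plausible mechanism behind the faster convergence reported for SRCC-SPEA.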
Acknowledgments
The work has been supported by the National Natural Science Foundation of China (Grant no. 51678192).
[1] V. Pareto, Manuale di Economia Politica, Società Editrice Libraria, Milano, Italy, 1906.
[2] C. V. Lücken, B. Barán, C. Brizuela, "A survey on multi-objective evolutionary algorithms for many-objective problems," Computational Optimization & Applications , vol. 58, no. 3, pp. 707-756, 2014.
[3] M. A. Abido, "Multiobjective evolutionary algorithms for electric power dispatch problem," IEEE Transactions on Evolutionary Computation , vol. 10, no. 3, pp. 315-329, 2006.
[4] K. Deb, S. Agrawal, A. Pratap, T. Meyarivan, "A fast elitist non-dominated sorting genetic algorithm for multi-objective optimization: NSGA-II," Lecture Notes in Computer Science , vol. 1917, pp. 849-858, 2000.
[5] H. Zhu, J. Yang, W. Lu, J. Li, "Research on preference polyhedron model based evolutionary multiobjective optimization method for multilink transmission mechanism conceptual design," Mathematical Problems in Engineering , vol. 2016, 2016.
[6] K. Deb, Multi-Objective Optimization Using Evolutionary Algorithms, John Wiley & Sons, New York, NY, USA, 2001.
[7] C. A. C. Coello, G. B. Lamont, D. A. Van Veldhuizen, Evolutionary Algorithms for Solving Multi-Objective Problems, Kluwer Academic, Norwell, Mass, USA, 2002.
[8] J. H. Holland, Adaptation in Natural and Artificial Systems, University of Michigan Press, Ann Arbor, Mich, USA, 1975.
[9] J. D. Schaffer, "Multiple objective optimization with vector evaluated genetic algorithms," in Proceedings of the 1st International Conference on Genetic Algorithms, pp. 93-100, Pittsburgh, Pa, USA, 1985.
[10] C. M. Fonseca, P. J. Fleming, "Genetic algorithms for multiobjective optimization: formulationdiscussion and generalization," in Proceedings of the 5th International Conference on Genetic Algorithms, pp. 416-423, Morgan Kaufmann, 1993.
[11] N. Srinivas, K. Deb, "Multi-Objective function optimization using non-dominated sorting genetic algorithms," Evolutionary Computation , vol. 3, no. 2, pp. 221-248, 1994.
[12] J. Horn, N. Nafpliotis, D. E. Goldberg, "Niched Pareto genetic algorithm for multiobjective optimization," in Proceedings of the 1st IEEE Conference on Evolutionary Computation, pp. 82-87, Orlando, Fla, USA, June 1994.
[13] E. Zitzler, L. Thiele, "Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach," IEEE Transactions on Evolutionary Computation , vol. 3, no. 4, pp. 257-271, 1999.
[14] K. Deb, A. Pratap, S. Agarwal, T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II," IEEE Transactions on Evolutionary Computation , vol. 6, no. 2, pp. 182-197, 2002.
[15] D. W. Corne, J. D. Knowles, M. J. Oates, "The pareto envelope-based selection algorithm for multiobjective optimization," Proceedings of the 6th International Conference on Parallel Problem Solving from Nature (PPSN '00), Paris, France, September 2000 , vol. 1917, pp. 839-848, Springer, 2000.
[16] J. Knowles, D. Corne, "The Pareto archived evolution strategy: a new baseline algorithm for Pareto multiobjective optimisation," in Proceedings of the IEEE Congress on Evolutionary Computation (CEC '99), vol. 1, pp. 98-105, Washington, DC, USA, July 1999.
[17] C. A. C. C. Coello, G. T. Pulido, "A micro-genetic algorithm for multiobjective optimization," in Proceedings of the 1st International Conference on Evolutionary Multi-Criterion Optimization (EMO '01), vol. 1993, of Lecture Notes in Computer Science, pp. 126-140, Zurich, Switzerland, March 2001.
[18] C. Xunxue, Multiobjective Evolutionary Algorithms and Their Applications, National Defence Industry Press, Beijing, China, 2008.
[19] E. Bonabeau, M. Dorigo, G. Theraulaz, Swarm Intelligence: From Natural to Artificial Systems, Oxford University Press, Oxford, UK, 1999.
[20] M. Dorigo, V. Maniezzo, A. Colorni, "Ant system: optimization by a colony of cooperating agents," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics , vol. 26, no. 1, pp. 29-41, 1996.
[21] G. S. Pavani, R. I. Tinini, "Distributed meta-scheduling in lambda grids by means of Ant Colony Optimization," Future Generation Computer Systems , vol. 63, pp. 15-24, 2016.
[22] J. Kennedy, R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, vol. 4, no. 8, pp. 1942-1948, Perth, Western Australia, 1995.
[23] M. Pontani, B. A. Conway, "Particle swarm optimization applied to space trajectories," Journal of Guidance, Control, & Dynamics , vol. 33, no. 5, pp. 1429-1441, 2010.
[24] A. Laudani, F. R. Fulginei, A. Salvini, M. Schmid, S. Conforto, "CFSO3 : a new supervised swarm-based optimization algorithm," Mathematical Problems in Engineering , vol. 2013, 2013.
[25] D. A. Van Veldhuizen, G. B. Lamont, "Multiobjective evolutionary algorithms: analyzing the state-of-the-art," Evolutionary Computation , vol. 8, no. 2, pp. 125-147, 2000.
[26] J. Havel, H. Link, M. Hofinger, E. Franco-Lara, D. Weuster-Botz, "Comparison of genetic algorithms for experimental multi-objective optimization on the example of medium design for cyanobacteria," Biotechnology Journal , vol. 1, no. 5, pp. 549-555, 2006.
[27] M. A. Abido, J. M. Bakhashwain, "A novel multiobjective evolutionary algorithm for optimal reactive power dispatch problem," in Proceedings of the 10th IEEE International Conference on Electronics, Circuits and Systems (ICECS '03), vol. 3, pp. 1054-1057, IEEE, Sharjah, UAE, December 2003.
[28] M. Laumanns, E. Zitzler, L. Thiele, "On the effects of archiving, elitism, and density based selection in evolutionary multi-objective optimization," Lecture Notes in Computer Science , vol. 1993, no. 1993, pp. 181-196, 2001.
[29] M. Crosetto, S. Tarantola, "Uncertainty and sensitivity analysis: tools for GIS-based model implementation," International Journal of Geographical Information Science , vol. 15, no. 5, pp. 415-437, 2001.
[30] G. Manache, C. S. Melching, "Identification of reliable regression- and correlation-based sensitivity measures for importance ranking of water-quality model parameters," Environmental Modelling & Software , vol. 23, no. 5, pp. 549-562, 2008.
[31] M. D. Morris, "Factorial sampling plans for preliminary computational experiments," Technometrics , vol. 33, no. 2, pp. 161-174, 1991.
[32] I. Sobol', "On sensitivity estimation for nonlinear mathematical models," Keldysh Applied Mathematics Institute , vol. 1, no. 2, pp. 112-118, 1990.
[33] Y. Lu, S. Mohanty, "Sensitivity analysis of a complex, proposed geologic waste disposal system using the Fourier Amplitude Sensitivity Test method," Reliability Engineering and System Safety , vol. 72, no. 3, pp. 275-291, 2001.
[34] A. Saltelli, S. Tarantola, K. P.-S. Chan, "A quantitative model-independent method for global sensitivity analysis of model output," Technometrics , vol. 41, no. 1, pp. 39-56, 1999.
[35] D. M. Hamby, "A review of techniques for parameter sensitivity analysis of environmental models," Environmental Monitoring and Assessment , vol. 32, no. 2, pp. 135-154, 1994.
[36] A. Saltelli, I. M. Sobol', "About the use of rank transformation in sensitivity analysis of model output," Reliability Engineering and System Safety , vol. 50, no. 3, pp. 225-239, 1995.
[37] J. Wang, X. Liang, Nonparametric Statistical Analysis, Higher Education Press, Beijing, China, 2006.
[38] W. Zx, Copula and Portfolio Risk Analysis [Ph.D. thesis], University of Science and Technology of China, Hefei, China, 2004 (in Chinese).
[39] Z. Ming, S. Shudong, Genetic Algorithms: Theory and Applications, National Defence Industry Press, Beijing, China, 1999 (in Chinese).
[40] X.-Z. Zhang, L. Chen, T. Quan, "Study of ascent trajectory for stratospheric airship in the jet stream," Electronic Design Engineering , vol. 16, no. 22, pp. 11-17, 2014.
[41] Q.-B. Wang, J.-A. Chen, G.-Y. Fu, D.-P. Duan, "An approach for shape optimization of stratosphere airships based on multidisciplinary design optimization," Journal of Zhejiang University: Science A , vol. 10, no. 11, pp. 1609-1616, 2009.
Copyright © 2016 Tiane Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract
For multiobjective optimization problems, different optimization variables have different influences on the objectives, which implies that attention should be paid to the variables according to their sensitivity. However, previous optimization studies have either not considered variable sensitivity or conducted sensitivity analysis independently of the optimization. In this paper, an integrated algorithm is proposed that combines the optimization method SPEA (Strength Pareto Evolutionary Algorithm) with the sensitivity analysis method SRCC (Spearman Rank Correlation Coefficient). In the proposed algorithm, the optimization individuals serve as samples for the sensitivity analysis, and the resulting sensitivities are used to guide the optimization process by adjusting the evolutionary parameters. Three cases, comprising a mathematical problem, an airship envelope optimization, and a truss topology optimization, are used to demonstrate the computational efficiency of the integrated algorithm. The results show that the algorithm simultaneously achieves the parameter sensitivities and a well-distributed Pareto optimal set without greatly increasing the computational time in comparison with SPEA.