1. Introduction
Spoken language has developed from the stage of mimicry to the formation of conventional language symbols, from vague information to the clarity of consensus, and from narrow tribal communication to social communication with both depth and breadth. Oral communication plays a key role in the interaction between individuals and society: it not only cultivates individual consciousness but also participates in shaping social development. The effectiveness of spoken language lies not only in simple information exchange but also in knowledge transmission, cultural production, and emotional maintenance. When Socrates sat and discussed the Dao, he sowed the seeds of wisdom among his audience in the form of oral communication, and the inheritance of knowledge depended on the listeners' self-realization. The recitation and poetry culture generated by spoken language makes up for the impoverishment of individual spiritual culture. Oral language integrates with visual symbols in the form of presence to realize the experience of emotional integration.
1.1. The Balance Theory Crisis of Oral Communication under the Mapping of Internet
The emergence of written language shifted language from the oral tradition to secular power, resulting in an emphasis on spatial relations over temporal relations [1]. In the past, the interactive communication of spoken language between individuals could not be preserved. The recording of the human voice originated from the development of modern science and technology: the invention of the phonograph made recording the voice no longer a myth, recording sound became a popular behavior, and individuals came to enjoy the sensory experience of hearing. Oral communication can now be preserved through Internet technology, and the vertical transmission of oral language can be realized in the dimension of time. McLuhan put forward the concept of the global village, and the distance of oral communication is no longer a problem: with the help of Internet technology, oral communication has broken through the bondage of distance and realized a horizontal leap in the spatial dimension. Oral communication has thus broken through the limitations of both time and space. Under the dominant discourse system of information and knowledge sharing, the monopolistic pattern of knowledge inheritance once caused by oral communication has tended to solidify, yet the popularization of knowledge has become normal: the Internet aggregates all kinds of knowledge, and individuals' on-demand search for information is an everyday practice. The monopoly of knowledge inheritance constructed by oral communication has therefore been questioned in the new context. At the same time, the effect of oral communication in spatial diffusion is no less than that of paper, which played a key role in the process of civilization because of its portability.
1.2. The Real and Virtual Distinction in the Field of Oral Interactive Communication
Traditionally, expression has relied on written means (rather than the eyes and ears) and on visual arts such as architecture, sculpture, and painting (rather than on time and space) [2]. Under the influence of these media, the supposed decay of orality seems impossible to verify, especially in the visual era of printed text and images. In fact, oral communication has not died out. Oral communication and the other media on which information transmission depends have created a shared space of discourse. The existence of new media is not premised on sacrificing spoken language; rather, the two combine to form a harmonious situation. Spoken language is one of the modes of multimodal expression, and its practical utility still plays an important role in individuals' daily lives. The perceived happiness of communication is not limited to acquiring the latest information but also lies in the emotional comfort of communicating with each other. The ways of information interaction and dissemination are diversified, and the ideographic nature of the visual standard is highlighted in the presentation of diversified information. Real spoken interaction is quite different from virtual spoken interaction, and the difference between presence and nonpresence is as follows: in real oral interaction, people participate in the same field of communication, and the subjects and objects of communication can perceive each other's subtle details and psychological changes, whereas spoken interaction in virtual space presents a surreal simulation of the communication field. In the local framing of the horizon, the individual cannot perceive the changes of objects outside the horizon. In the real oral communication environment, by contrast, the environment is flexible, and the intervention of information transmission can adapt to unexpected situations. From the perspective of relationship composition, it is not difficult to construct the relationship between subject and object in real field communication, while the relationship construction of oral communication in a virtual environment has a distinct directivity.
1.3. The Synergy and Information Heterogeneity of Nonverbal Symbols Are Reduced
“Nonverbal communication” refers to the process in which people exchange information through “nonverbal” behavior, consciously or unconsciously, in a specific environment [3]. As a medium, oral communication is not a single ideographic process but a joint description of the same object together with nonverbal symbols. These nonverbal signs include individual actions and expressions, and their main function is to help clarify meaning. The coordination of oral communication and nonverbal signs builds a complete system. The subjects of information transmission often differ in how they explain something, and may even blur the fact itself, which affects the effect of interactive oral communication. Individual differences appear even on the basis of a familiar semantic code, and one's own grasp of the semantic code cannot by itself explain the fact. When familiar semantic codes fall into heterogeneous areas, nonverbal symbols play a key role with their unique advantages. Under the Internet media ecology, nonverbal symbols show rich characteristics. In the past, information circulated through multiple levels, and in the process of circulation, audiences at different levels interpreted the information in their own ways; in this process, many misreadings and misinterpretations arose. This phenomenon is a key factor causing the deviation between oral communication and actual intention.
At present, a large number of works in the literature have promoted, improved, and applied multitask evolutionary computing and achieved good results. Research on multitask evolutionary computing can be divided into four categories: extending the multitask framework to a wider range of evolutionary algorithms, applying multitask evolutionary computing to multiobjective problems, applying multitask evolutionary computing to practical optimization problems, and proposing improved multitask evolutionary algorithms based on MFEA. For the first category, Wen and Ting [4] extended the concept of multitask evolutionary computation to genetic programming (GP) and proposed multifactorial genetic programming (MFGP). Feng et al. [5] proposed the multifactorial particle swarm optimization (MFPSO) and multifactorial differential evolution (MFDE) algorithms based on PSO and DE. Yokoya et al. [6] proposed the multifactorial artificial bee colony algorithm (MFABC) and applied it to the optimization of automobile structure design. Regarding the research on multitask evolutionary computation for multiobjective problems, Fogel et al. [7] first proposed the multiobjective multifactorial evolutionary algorithm (MO-MFEA) and verified its effectiveness on real-world multiobjective optimization problems. Gupta et al. [8] modeled the operation index optimization problem in the beneficiation process as a multiobjective multitask problem and solved it with an improved MO-MFEA. For the third category, multitask evolutionary computation has obtained good results on the symbolic regression problem [9], the biological network module identification problem [10], the shortest path tree problem [11], and combinatorial optimization problems [12]. The fourth category concerns remedying the defects of MFEA and focuses on two major problems: (i) how to adjust gene migration adaptively according to the similarity between tasks. Reference [13] proposed using denoising autoencoders to automatically construct mappings between tasks and to complete gene migration through these mappings; reference [14] proposed an MFEA based on a decomposition method and a resource allocation mechanism, which can dynamically adjust gene migration according to the similarity between tasks. (ii) How to make gene migration in MFEA effective when the optimal solutions of the tasks differ greatly. Bali et al. [15] proposed an adaptive strategy to solve this problem: assuming that the algorithm simultaneously processes two optimization tasks of different difficulties, the strategy maps the optimization space of the easier task into the optimization space of the harder task, and after the mapping, the similarity between the two tasks becomes higher, which amplifies the effect of gene migration. Gustafson and Burke [16] proposed a decision variable transformation strategy, whose basic idea is to map individuals of different tasks to the same position in the normalized search interval before gene transfer.
This paper systematically introduces the multifactor evolutionary algorithm (MFEA). Section 2 gives the basic properties of MFEA in the multitask environment and systematically analyzes the entire algorithm flow of MFEA. Section 3 introduces the benchmark problems used in multitask optimization in detail and analyzes the performance of MFEA by comparing it with SOMA on these benchmarks. Through this analysis, we found that MFEA cannot solve test problems whose subfunctions have different dimensions well. For such heterodimensional multitask optimization problems, we propose an improved version of MFEA and apply it to the prediction of chaotic time series.
2. Multifactor Evolutionary Algorithm
This section focuses on multitask evolutionary computing, a new direction of evolutionary computation that has attracted much attention in recent years. When there is similarity between tasks, an evolutionary algorithm can optimize multiple tasks simultaneously and, by sharing information between tasks through gene transfer, achieve better results than single-task algorithms. This section mainly introduces the first multitask evolutionary algorithm, the multifactorial evolutionary algorithm. First, the mathematical definition of multitask optimization and the special properties of evolutionary algorithms in the multitask environment are given. Then, the whole algorithm flow of MFEA is analyzed in detail, and the assortative mating and selective imitation operators of MFEA are introduced. MFEA is verified experimentally in Section 3, where the benchmark tests used in multitask optimization are introduced and the results of MFEA on the benchmark functions are analyzed.
2.1. Algorithm Analysis
The multifactor evolutionary algorithm (MFEA) is the first multitask evolutionary algorithm; it can simultaneously optimize multiple problems with a single population. In this section, we first introduce the basic definitions underlying MFEA and then analyze the whole process of MFEA in detail.
2.1.1. Basic Definition
Here, we first define multitask optimization: consider $K$ optimization tasks $T_1, T_2, \ldots, T_K$, where task $T_j$ has search space $X_j$ and objective function $f_j : X_j \rightarrow \mathbb{R}$. Multitask optimization aims to find, in a single run, the set of solutions $\{x_1^*, \ldots, x_K^*\} = \{\arg\min_{x \in X_1} f_1(x), \ldots, \arg\min_{x \in X_K} f_K(x)\}$.
Definition 1 (factorial rank).
The adaptive values of all individuals in population P on task $T_j$ form an array; sorting this array in ascending order, the factorial rank $r_{ij}$ of individual $p_i$ on task $T_j$ is defined as the position of $p_i$ in the sorted list.
According to the definition, the smaller $r_{ij}$ is, the better individual $p_i$ performs on task $T_j$; the best individual on $T_j$ has factorial rank 1.
Definition 2 (standard adaptive value, scalar fitness).
The standard adaptive value (scalar fitness) is the evaluation standard for the quality of an individual in a multitask environment. The standard adaptive value of individual $p_i$ is defined as $\varphi_i = 1 / \min_{j \in \{1, \ldots, K\}} r_{ij}$.
From the definition, we can see that $\varphi_i \in (0, 1]$ and that $\varphi_i = 1$ if and only if $p_i$ ranks first on at least one task.
Definition 3 (optimal factor, skill factor).
The optimal factor (skill factor) of individual $p_i$ is defined as $\tau_i = \arg\min_{j \in \{1, \ldots, K\}} r_{ij}$, i.e., the index of the task on which $p_i$ performs best.
The standard adaptive value measures the overall merit of individual $p_i$ in the multitask environment, but we still need to know on which task $p_i$ is most effective; the optimal factor records exactly this information.
From the above three definitions, we can see that, different from a single-objective optimization problem, multitask optimization does not require an individual to obtain the optimal solution on all tasks: as long as an individual obtains the optimal solution on one task, it is regarded as optimal in the multitask environment. The multifactor evolutionary algorithm introduces the concept of multitask optimality to describe this situation.
Definition 4 (multitask optimal).
Let the adaptive values of individual $p^*$ on the $K$ tasks be $\{f_1^*, f_2^*, \ldots, f_K^*\}$. Individual $p^*$ is multitask optimal if and only if there exists a task $T_j$ such that $f_j^* \leq f_j(x)$ for every feasible solution $x \in X_j$, i.e., $p^*$ attains the global optimum of at least one task.
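To make Definitions 1-3 concrete, the following short Python sketch (an illustration added here, not part of the original paper) computes factorial ranks, standard adaptive values, and optimal factors from a matrix of objective values, assuming every task is a minimization problem.

```python
import numpy as np

def multifactorial_attributes(F):
    """F[i, j] = objective value of individual i on task j (lower is better).
    Returns factorial ranks (Definition 1), standard adaptive values
    (Definition 2), and optimal factors (Definition 3)."""
    # factorial rank: 1 + position of each individual when the population
    # is sorted in ascending order of the objective value on each task
    ranks = np.argsort(np.argsort(F, axis=0), axis=0) + 1
    scalar_fitness = 1.0 / ranks.min(axis=1)        # phi_i = 1 / min_j r_ij
    skill_factor = ranks.argmin(axis=1)             # tau_i (0-based task index)
    return ranks, scalar_fitness, skill_factor

# toy example: 4 individuals evaluated on 2 tasks
F = np.array([[3.0, 0.5],
              [1.0, 2.0],
              [2.0, 1.0],
              [4.0, 3.0]])
ranks, phi, tau = multifactorial_attributes(F)
print(ranks)   # [[3 1] [1 3] [2 2] [4 4]]
print(phi)     # [1.   1.   0.5  0.25]
print(tau)     # [1 0 0 0]: individual 0 is best on task 2, individual 1 on task 1
```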
To verify the effectiveness of the above improvements, five classical test functions are chosen for the experiments: Sphere, Rosenbrock, Griewank, Ackley, and Rastrigin. For a $D$-dimensional decision vector $\mathbf{x} = (x_1, \ldots, x_D)$, their definitions are as follows:
(i) Sphere: $f(\mathbf{x}) = \sum_{i=1}^{D} x_i^2$
(ii) Rosenbrock: $f(\mathbf{x}) = \sum_{i=1}^{D-1} \left[ 100\left(x_{i+1} - x_i^2\right)^2 + \left(1 - x_i\right)^2 \right]$
(iii) Griewank: $f(\mathbf{x}) = 1 + \frac{1}{4000} \sum_{i=1}^{D} x_i^2 - \prod_{i=1}^{D} \cos\left(\frac{x_i}{\sqrt{i}}\right)$
(iv) Ackley: $f(\mathbf{x}) = -20 \exp\left(-0.2 \sqrt{\frac{1}{D} \sum_{i=1}^{D} x_i^2}\right) - \exp\left(\frac{1}{D} \sum_{i=1}^{D} \cos(2\pi x_i)\right) + 20 + e$
(v) Rastrigin: $f(\mathbf{x}) = \sum_{i=1}^{D} \left[ x_i^2 - 10\cos(2\pi x_i) + 10 \right]$
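For reference, minimal NumPy versions of the five test functions listed above are sketched below. They follow the standard textbook definitions; the search ranges used in the original experiments are not reproduced here.

```python
import numpy as np

def sphere(x):
    return np.sum(x ** 2)

def rosenbrock(x):
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2)

def griewank(x):
    i = np.arange(1, x.size + 1)
    return 1.0 + np.sum(x ** 2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i)))

def ackley(x):
    d = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / d))
            - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / d) + 20.0 + np.e)

def rastrigin(x):
    return np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

# the global minimum of each function is 0 (at the origin, or at x = 1 for Rosenbrock)
print(sphere(np.zeros(10)), rosenbrock(np.ones(10)), rastrigin(np.zeros(10)))
```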
2.1.2. Algorithm Flow
The traditional single-objective evolutionary algorithm (SOEA) uses real or binary encoding. The main process of the algorithm is to generate offspring through crossover and mutation and then select excellent individuals for the next generation through the selection operation. MFEA takes the traditional genetic algorithm as its prototype and extends both the encoding scheme and the way offspring are generated.
(1) Encoding-Decoding Method. An evolutionary algorithm represents each individual in the population as a vector; the elements of the vector are called "genes," and the vector itself is called a "chromosome." MFEA uses a single population to optimize all tasks at the same time, but different tasks may have search spaces of different dimensions, so a new coding method is needed to map one individual to multiple tasks. The multifactor evolutionary algorithm adopts a unified encoding-decoding method to solve this problem. In the coding stage, the search space of every task is linearly mapped to a uniform interval Y; that is, in each dimension, the search range of each task is linearly compressed to [0, 1]. If the dimensions of the tasks differ, the highest dimension among all tasks, $D_{\max} = \max_j D_j$, is taken as the dimension of the unified space $Y = [0, 1]^{D_{\max}}$; when decoding an individual for task $T_j$, only the first $D_j$ genes are used and are linearly mapped back to the original search range of $T_j$.
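A minimal sketch of this encode-decode scheme is shown below; the helper names and the example search range are illustrative assumptions rather than part of the original algorithm description.

```python
import numpy as np

def encode(x, lower, upper, d_max):
    """Map a task-specific solution x in [lower, upper]^d into the unified space [0, 1]^d_max."""
    y = np.zeros(d_max)
    y[:x.size] = (x - lower) / (upper - lower)
    return y

def decode(y, lower, upper, d):
    """Take the first d genes of a unified-space chromosome and map them back
    to the task's own search range [lower, upper]^d."""
    return lower + (upper - lower) * y[:d]

# e.g. a 30-dimensional task with range [-5.12, 5.12] inside a 50-dimensional unified space
y = np.random.default_rng(0).random(50)
x = decode(y, lower=-5.12, upper=5.12, d=30)
```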
(2) Offspring Generation. The traditional single-objective genetic algorithm generates offspring through crossover and mutation operators, but offspring are generated differently in the multifactor evolutionary algorithm. The core of the multifactor evolutionary algorithm is to exploit the similarity between tasks and to carry out implicit gene transfer between different tasks through the crossover operator, thereby speeding up convergence on every task. Assortative mating is used to generate offspring in the multifactor evolutionary algorithm. The pseudocode of the assortative mating algorithm is shown in Table 1.
Table 1
Pseudocode of the selective crossover (assortative mating) algorithm.
Algorithm 1: selective crossover (assortative mating)
Input: two parents $p_a$ and $p_b$ with optimal factors $\tau_a$ and $\tau_b$; random mating probability rmp
(1) if $\tau_a = \tau_b$ or rand(0, 1) < rmp then
 Parents $p_a$ and $p_b$ are crossed to generate children C1 and C2
(2) else
 Parent $p_a$ is mutated to generate child C1
 Parent $p_b$ is mutated to generate child C2
(3) end if
Output: children C1 and C2
The selective crossover algorithm is the core of multitask evolutionary computing: it uses the crossover operator to transfer genes between different tasks. Let the optimal factor of an individual $p$ be $t$, indicating that $p$ performs best on task $T_t$. All individuals in population P whose optimal factor is $t$ together constitute the candidate solutions of task $T_t$. The purpose of the selective crossover algorithm is to exchange information with other tasks on the premise that the distribution of the candidate solutions of task $T_t$ does not change too much. In the selective crossover algorithm, if the two parents have the same optimal factor, or a random number is smaller than the random mating probability rmp, the two parents are crossed and genes may be transferred between tasks; otherwise, each parent is only mutated, and no cross-task transfer takes place.
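The following Python sketch mirrors the logic of Algorithm 1. Uniform crossover and Gaussian mutation are used as simple stand-ins for the SBX and polynomial mutation operators used in the experiments, so the operators, the default step size, and the helper name are assumptions for illustration only.

```python
import numpy as np

def assortative_mating(p_a, p_b, tau_a, tau_b, rmp, rng):
    """Sketch of the selective crossover (assortative mating) step on
    unified-space parents in [0, 1]; returns two children and a flag telling
    whether the crossover branch (possible gene transfer) was taken."""
    if tau_a == tau_b or rng.random() < rmp:
        mask = rng.random(p_a.size) < 0.5           # uniform crossover
        c1 = np.where(mask, p_a, p_b)
        c2 = np.where(mask, p_b, p_a)
        crossed = True
    else:                                           # mutate each parent separately
        c1 = np.clip(p_a + rng.normal(0.0, 0.1, p_a.size), 0.0, 1.0)
        c2 = np.clip(p_b + rng.normal(0.0, 0.1, p_b.size), 0.0, 1.0)
        crossed = False
    return c1, c2, crossed

rng = np.random.default_rng(0)
c1, c2, crossed = assortative_mating(rng.random(50), rng.random(50), 0, 1, rmp=0.3, rng=rng)
```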
Table 2
Pseudocode of the selective imitation algorithm.
Algorithm 2: selective imitation algorithm
Input: child individual C together with its parent(s)
(1) if C is generated by two parents $p_a$ and $p_b$ then
 (i) if rand(0, 1) ≤ 0.5 then
  C inherits the optimal factor of parent $p_a$
 (ii) else
  C inherits the optimal factor of parent $p_b$
 (iii) end if
(2) else
 C directly inherits the optimal factor of its single parent
(3) end if
Output: child C with an assigned optimal factor
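A compact sketch of the selective imitation step is given below; it assumes children are produced by a mating routine such as the one sketched after Algorithm 1, and the function name and interface are illustrative.

```python
import numpy as np

def selective_imitation(crossed, tau_a, tau_b, rng):
    """Sketch of Algorithm 2: assign optimal factors to the two children of one
    mating; crossed=True means both parents contributed genes (crossover branch)."""
    if crossed:
        # each child imitates the optimal factor of either parent with equal probability
        return (tau_a if rng.random() <= 0.5 else tau_b,
                tau_a if rng.random() <= 0.5 else tau_b)
    # mutation-only children inherit the optimal factor of their own parent
    return tau_a, tau_b

rng = np.random.default_rng(1)
print(selective_imitation(True, 0, 1, rng))   # one of (0, 0), (0, 1), (1, 0), (1, 1)
```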
Other operations of the multifactor evolutionary algorithm are the same as those of the traditional genetic algorithm, and the pseudocode of the overall algorithm is shown in Table 3. Firstly, the population is randomly generated, and the adaptive value of every individual is evaluated on all tasks to obtain its factorial rank and optimal factor; these are the initialization steps. After initialization, the algorithm iterates. Firstly, children are generated by the selective crossover algorithm: with probability rmp, two parents with different optimal factors complete gene transfer between tasks through the crossover operator. Then, the selective imitation operation is carried out on the generated offspring to determine the optimal factor of each offspring according to its parents. After the optimal factor of an offspring is determined, its adaptive value is calculated only on the task indicated by that optimal factor. The parents and offspring are then combined to form the intermediate generation, whose adaptive values are now all known, and the factorial rank, optimal factor, and standard adaptive value of each individual in the intermediate generation are updated. Taking the standard adaptive value as the measure, the best individuals are selected from the intermediate generation to enter the next cycle. The algorithm iterates in this way until the termination condition is met.
Table 3
Pseudocode of multifactor evolutionary algorithm.
Algorithm 3: multifactor evolutionary algorithm (MFEA)
(1) Initialize population P and calculate individual fitness on all tasks
(2) Calculate the factorial rank and optimal factor of every individual
(3) While (termination condition not met) do
 (i) Generate offspring population C by selective crossover (Algorithm 1)
 (ii) Run the selective imitation algorithm on offspring population C (Algorithm 2)
 (iii) Calculate the adaptive value of each offspring in C only on the task given by its optimal factor
 (iv) Generate the intermediate population R = P ∪ C
 (v) Update the factorial rank, optimal factor, and standard adaptive value of the individuals in the intermediate population R
 (vi) According to the standard adaptive values, select the best individuals from the intermediate population R to form the next generation population P
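Putting the pieces together, the sketch below follows the overall loop of Algorithm 3 under simplifying assumptions: each task is a callable that accepts a unified-space vector of its own dimension (decoding to the native range is assumed to happen inside the callable), uniform arithmetic crossover with Gaussian mutation stands in for SBX and polynomial mutation, and offspring are evaluated only on their optimal-factor task, with the other objectives treated as infinity when ranking.

```python
import numpy as np

def mfea(tasks, dims, pop_size=100, gens=1000, rmp=0.3, seed=0):
    """Minimal MFEA sketch. tasks[j] maps a vector in [0, 1]^dims[j] to a scalar
    objective (lower is better); dims[j] is the dimension of task j."""
    rng = np.random.default_rng(seed)
    K, D = len(tasks), max(dims)
    evaluate = lambda x, j: tasks[j](x[:dims[j]])

    def attributes(F):
        ranks = np.argsort(np.argsort(F, axis=0), axis=0) + 1   # factorial ranks
        return ranks.argmin(axis=1), 1.0 / ranks.min(axis=1)    # skill factor, scalar fitness

    P = rng.random((pop_size, D))                                  # unified-space population
    F = np.array([[evaluate(x, j) for j in range(K)] for x in P])  # full initial evaluation
    tau, phi = attributes(F)

    for _ in range(gens):
        C, tau_c = [], []
        for _ in range(pop_size // 2):                          # assortative mating
            i, k = rng.integers(pop_size, size=2)
            if tau[i] == tau[k] or rng.random() < rmp:          # crossover branch
                a = rng.random(D)
                c1, c2 = a * P[i] + (1 - a) * P[k], a * P[k] + (1 - a) * P[i]
                t1, t2 = rng.choice([tau[i], tau[k]], size=2)   # selective imitation
            else:                                               # mutation-only branch
                c1 = np.clip(P[i] + rng.normal(0, 0.1, D), 0, 1)
                c2 = np.clip(P[k] + rng.normal(0, 0.1, D), 0, 1)
                t1, t2 = tau[i], tau[k]
            C += [c1, c2]; tau_c += [t1, t2]
        C, tau_c = np.array(C), np.array(tau_c)

        F_c = np.full((len(C), K), np.inf)                      # evaluate on skill-factor task only
        for n, (c, t) in enumerate(zip(C, tau_c)):
            F_c[n, t] = evaluate(c, t)

        P_all, F_all = np.vstack([P, C]), np.vstack([F, F_c])   # intermediate generation R
        tau_all, phi_all = attributes(F_all)
        keep = np.argsort(-phi_all)[:pop_size]                  # elitist selection by scalar fitness
        P, F, tau, phi = P_all[keep], F_all[keep], tau_all[keep], phi_all[keep]

    return [F[:, j].min() for j in range(K)]                    # best objective found per task
```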
3. Experimental Results
In this section, the performance of the multifactor evolutionary algorithm is verified experimentally. Firstly, the benchmark problems used in the experiments are introduced, and then, the performance of the multifactor evolutionary algorithm on these problems is analyzed.
3.1. Benchmark Problems
Different from existing single-task and multiobjective optimization problems, multitask optimization requires newly designed test problems. Reference [13] pointed out that the degree of overlap of the global optima between tasks and the correlation between tasks have the greatest influence on a multitask problem. According to the degree of overlap of the global optima between tasks, multitask optimization problems can be divided into complete intersection (CI), partial intersection (PI), and no intersection (NI). According to the correlation between tasks, they can be divided into three types: high similarity (HS), medium similarity (MS), and low similarity (LS). The single-objective multitask benchmark problems are shown in Table 4.
Table 4
Single-objective multitask benchmarking problems.
Problem category | Task 1 (dimension) | Task 2 (dimension) | Degree of overlap of global optima | Similarity
CI + HS | Griewank (50) | Rastrigin (50) | Complete overlap | 1
CI + MS | Ackley (50) | Rastrigin (50) | Complete overlap | 0.2261
CI + LS | Ackley (50) | Schwefel (50) | Complete overlap | 0.0002
PI + HS | Rastrigin (50) | Sphere (50) | Partial overlap | 0.867
PI + MS | Ackley (50) | Rosenbrock (50) | Partial overlap | 0.2154
PI + LS | Ackley (50) | Weierstrass (25) | Partial overlap | 0.0725
NI + HS | Rosenbrock (50) | Rastrigin (50) | No overlap | 0.9434
NI + MS | Griewank (50) | Weierstrass (50) | No overlap | 0.3669
NI + LS | Rastrigin (50) | Schwefel (50) | No overlap | 0.0016
Spearman’s rank correlation coefficient is used as the measure of similarity between tasks. Assume that an individual X in the unified space is decoded as candidate solutions $x_1$ and $x_2$ for tasks $T_1$ and $T_2$, respectively, and evaluated on both tasks. For a large set of randomly sampled individuals, the Spearman rank correlation coefficient between the two resulting sequences of objective values is taken as the similarity $R_s$ between the two tasks.
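Under this interpretation, the similarity R_s can be estimated as sketched below; the sample size and the function interface (tasks taking unified-space vectors and decoding internally) are assumptions, and SciPy's spearmanr is used for the rank correlation.

```python
import numpy as np
from scipy.stats import spearmanr

def task_similarity(f1, f2, d, n_samples=100_000, seed=0):
    """Estimate R_s as the Spearman rank correlation between the objective values
    of the two tasks over random samples drawn from the unified space [0, 1]^d."""
    rng = np.random.default_rng(seed)
    X = rng.random((n_samples, d))
    y1 = np.apply_along_axis(f1, 1, X)
    y2 = np.apply_along_axis(f2, 1, X)
    rho, _ = spearmanr(y1, y2)
    return rho

# e.g. similarity of a Sphere-like and a Rastrigin-like task sharing a 50-dimensional unified space
r_s = task_similarity(lambda y: np.sum(y ** 2),
                      lambda y: np.sum(y ** 2 - 10 * np.cos(2 * np.pi * y) + 10),
                      d=50, n_samples=10_000)
```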
According to the degree of overlap of the global optima and the similarity $R_s$, the nine benchmark problems in Table 4 are obtained by combining the classical test functions in pairs.
3.2. Result Analysis
In this section, the results of the multifactor evolutionary algorithm (MFEA) and the single-objective genetic algorithm (SOMA) are compared. The population size of MFEA is 100, the number of generations is 1000, and the random mating probability is rmp = 0.3. The simulated binary crossover operator (SBX) is used for crossover, and the polynomial mutation operator is used for mutation. Since MFEA optimizes two tasks simultaneously while SOMA optimizes only one task, the maximum number of iterations of SOMA is set to 500 to ensure a fair comparison, and the other parameters are consistent with MFEA. Each algorithm is run 50 times independently to eliminate the randomness of the results, and the final results are shown in Table 5. The convergence curves of MFEA and SOMA on the problems whose global optima coincide completely (CI) are shown in Figure 1.
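For completeness, hedged sketches of the SBX and polynomial mutation operators on unified-space vectors in [0, 1] are given below; the distribution indices and mutation probability shown are common defaults chosen for illustration, not necessarily the values used in the experiments.

```python
import numpy as np

def sbx_crossover(p1, p2, eta=2.0, rng=None):
    """Simulated binary crossover (SBX) for parents encoded in [0, 1]."""
    rng = rng or np.random.default_rng()
    u = rng.random(p1.size)
    beta = np.where(u <= 0.5,
                    (2.0 * u) ** (1.0 / (eta + 1.0)),
                    (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0)))
    c1 = np.clip(0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2), 0.0, 1.0)
    c2 = np.clip(0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2), 0.0, 1.0)
    return c1, c2

def polynomial_mutation(x, eta_m=5.0, pm=None, rng=None):
    """Polynomial mutation (simplified form without boundary-dependent scaling)."""
    rng = rng or np.random.default_rng()
    pm = 1.0 / x.size if pm is None else pm
    u = rng.random(x.size)
    delta = np.where(u < 0.5,
                     (2.0 * u) ** (1.0 / (eta_m + 1.0)) - 1.0,
                     1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta_m + 1.0)))
    mask = rng.random(x.size) < pm
    return np.clip(np.where(mask, x + delta, x), 0.0, 1.0)
```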
Table 5
Results of MFEA and SOMA on the multitask benchmark problems (mean objective value over 50 runs, standard deviation in parentheses).
Problem category | MFEA task 1 | MFEA task 2 | SOMA task 1 | SOMA task 2
CI + HS | 0.3493 (0.0480) | 189.5901 (39.2992) | 0.9014 (0.05675) | 419.7629 (61.8293) |
CI + MS | 4.6468 (0.5185) | 229.8366 (49.5841) | 5.4119 (1.7629) | 424.9846 (56.8671) |
CI + LS | 20.1471 (0.0528) | 3884.6405 (427.5915) | 21.1944 (0.0934) | 4240.0025 (517.8067) |
PI + HS | 557.7668 (73.8933) | 8.7799 (1.4670) | 425.6852 (51.1415) | 86.7612 (21.0913) |
PI + MS | 3.5587 (0.4635) | 704.5293 (261.5528) | 5.0311 (0.6787) | 29158.8343 (14301.4714) |
PI + LS | 20.0767 (0.0646) | 20.5621 (3.0864) | 5.0346 (0.8623) | 12.2019 (2.3042) |
NI + HS | 755.7619 (316.9677) | 233.2365 (70.0560) | 25339.6592 (11111.8147) | 434.1805 (54.4270) |
NI + MS | 0.4018 (0.0452) | 25.9959 (3.2959) | 0.9162 (0.0521) | 38.2774 (3.6352) |
NI + LS | 670.0172 (169.6736) | 3858.2066 (470.5808) | 435.0968 (51.5959) | 4364.407 (611.3337) |
[Figure 1 omitted: convergence curves of MFEA and SOMA on the complete intersection (CI) problems.]
As can be seen from Table 5 and the convergence curves, MFEA performs excellently on six problems (CI + HS, CI + MS, CI + LS, PI + MS, NI + HS, and NI + MS) and is superior to SOMA in terms of both accuracy and convergence speed. On PI + LS and NI + LS, MFEA does not perform well. Reference [13] states that when the similarity between tasks becomes low, the effect of MFEA deteriorates. As can be seen from Figure 1, MFEA is only slightly worse than SOMA on the NI + LS problem but far worse than SOMA on the PI + LS problem. However, the similarity $R_s$ of the PI + LS problem is greater than that of the NI + LS problem, so the performance of MFEA is not determined by the similarity between tasks alone. It can be seen from Table 4 that the dimensions of the two tasks in the PI + LS problem are different, and MFEA performs poorly on multitask optimization problems with different dimensions. The convergence curves of MFEA and SOMA on the problems whose global optima partially coincide (PI) are shown in Figure 2.
[Figure 2 omitted: convergence curves of MFEA and SOMA on the partial intersection (PI) problems.]
The improved ant colony algorithm with 2-opt (IVRS + 2-opt), the ant colony algorithm combined with 2-opt (ACO + 2-opt), and the ant colony algorithm combined with the artificial bee colony algorithm (ACO + ABC), all of which have achieved excellent results in recent years, were selected for comparison to verify the effectiveness of the discrete lion swarm algorithm. Table 4 shows the comparison results on six TSP problems, where "__" indicates that no data are reported in the original literature and boldface indicates the best performance among the four algorithms. As can be seen from Table 4, the discrete lion swarm algorithm is superior to the other three algorithms in both the optimal and the average solutions. Compared with the ACO + ABC algorithm, the discrete lion swarm algorithm improves the quality of the solution by 1% to 5%. Compared with the ACO + 2-opt algorithm, it improves the quality of the solutions of the Eil51, KroA100, and D198 problems; for the KroA100 problem in particular, it improves the accuracy by about 10%. Compared with the IVRS + 2-opt algorithm, it improves the solution accuracy by about 0.5% on the Eil51, KroA100, and D198 problems.
4. Conclusion
This paper focused on the multifactor evolutionary algorithm (MFEA) and introduced an improved version of MFEA, called HD-MFEA. The basic properties of MFEA in the multitask environment were given, and the entire algorithm flow of MFEA was analyzed systematically. Moreover, the benchmark problems used in multitask optimization were introduced in detail, and MFEA was compared with SOMA to evaluate their performance on these benchmarks. As a result, MFEA was found to be unable to efficiently solve test problems with different subfunction dimensions. To this end, an improved version of MFEA was proposed for such heterodimensional multitask optimization problems and was finally applied to the prediction of chaotic time series.
Acknowledgments
This study was supported by (1) Research Achievements of “Li Zhiqiang Technical Skills Master Studio” of Anhui Provincial School of Learning (Project no.: 2019dsgzs32) and (2) 2019 Linkage Report Research on Traditional Media and New Media under the Background of Media Integration on Humanities and Social Sciences in Anhui University (Project no.: SK2019A0966).
[1] A. E. Eiben, J. Smith, "From evolutionary computation to the evolution of things," Nature, vol. 521 no. 7553, pp. 476-482, DOI: 10.1038/nature14544, 2015.
[2] H.-G. Beyer, H.-P. Schwefel, "Evolution strategies—a comprehensive introduction," Natural Computing, vol. 1 no. 1, 2002.
[3] D. Wierstra, T. Schaul, T. Glasmachers, Y. Sun, J. Peters, J. Schmidhuber, "Natural evolution strategies," Journal of Machine Learning Research, vol. 15 no. 1, pp. 949-980, 2014.
[4] Y.-W. Wen, C.-K. Ting, "Learning ensemble of decision trees through multifactorial genetic programming," Proceedings of the IEEE Congress on Evolutionary Computation (CEC), DOI: 10.1109/cec.2016.7748363, 2016.
[5] L. Feng, W. Zhou, L. Zhou, S. W. Jiang, J. H. Zhong, B. S. Da, Z. X. Zhu, Y. Wang, "An empirical study of multifactorial PSO and multifactorial DE," Proceedings of the IEEE Congress on Evolutionary Computation (CEC), DOI: 10.1109/cec.2017.7969407, 2017.
[6] G. Yokoya, H. Xiao, T. Hatanaka, "Multifactorial optimization using artificial bee colony and its application to car structure design optimization," Proceedings of the IEEE Congress on Evolutionary Computation (CEC), DOI: 10.1109/cec.2019.8789940, 2019.
[7] L. J. Fogel, A. J. Owens, M. J. Walsh, "Intelligent decision making through a simulation of evolution," Behavioral Science, vol. 11 no. 4, pp. 253-272, DOI: 10.1002/bs.3830110403, 1966.
[8] A. Gupta, Y.-S. Ong, L. Feng, K. C. Tan, "Multiobjective multifactorial optimization in evolutionary multitasking," IEEE Transactions on Cybernetics, vol. 47 no. 7, pp. 1652-1665, DOI: 10.1109/tcyb.2016.2554622, 2017.
[9] J. Kennedy, R. Eberhart, "Particle swarm optimization," Proceedings of ICNN'95—International Conference on Neural Networks, 1995.
[10] J. H. Holland, "Genetic algorithms," Scientific American, vol. 267 no. 1, pp. 66-72, DOI: 10.1038/scientificamerican0792-66, 1992.
[11] E. Mininno, F. Neri, F. Cupertino, D. Naso, "Compact differential evolution," IEEE Transactions on Evolutionary Computation, vol. 15 no. 1, pp. 32-54, DOI: 10.1109/tevc.2010.2058120, 2011.
[12] M. Dorigo, G. Di Caro, "Ant colony optimization: a new meta-heuristic," Proceedings of the 1999 Congress on Evolutionary Computation (CEC99), 1999.
[13] K. Deb, A. Pratap, S. Agarwal, T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II," IEEE Transactions on Evolutionary Computation, vol. 6 no. 2, pp. 182-197, DOI: 10.1109/4235.996017, 2002.
[14] A. Gupta, Y.-S. Ong, L. Feng, "Multifactorial evolution: toward evolutionary multitasking," IEEE Transactions on Evolutionary Computation, vol. 20 no. 3, pp. 343-357, 2015.
[15] K. K. Bali, Y.-S. Ong, A. Gupta, P. S. Tan, "Multifactorial evolutionary algorithm with online transfer parameter estimation: MFEA-II," IEEE Transactions on Evolutionary Computation, vol. 24 no. 1, pp. 69-83, DOI: 10.1109/tevc.2019.2906927, 2020.
[16] S. Gustafson, E. K. Burke, "The speciating island model: an alternative parallel evolutionary algorithm," Journal of Parallel and Distributed Computing, vol. 66 no. 8, pp. 1025-1036, DOI: 10.1016/j.jpdc.2006.04.017, 2006.
Copyright © 2022 Zhiqiang Li. This is an open access article distributed under the Creative Commons Attribution License (the “License”), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. https://creativecommons.org/licenses/by/4.0/
Abstract
Visual orientation seems to indicate the decline of oral communication, but oral communication has its own living space in the new media ecology. Research has found that, in the digital media era, voice communication manifests as a single-level feature that simulates present interaction and information exchange. Although voice communication is a lie constructed by individuals and the gap between the subject's discourse and the actual field of interaction creates emotional distance, the situation remains harmonious and inclusive, and voice communication combined with new media technologies is still trustworthy. Aiming at the multifactor evolutionary algorithm (MFEA), the most classical algorithm in multitask evolutionary computation, we theoretically analyze the inherent defects of MFEA in dealing with multitask optimization problems with different subfunction dimensions and propose an improved version of the multifactor evolutionary algorithm, called HD-MFEA. In HD-MFEA, we propose a heterodimensional selective crossover and an adaptive elite replacement strategy, enabling HD-MFEA to better carry out gene migration in a heterodimensional multitask environment. At the same time, we propose a benchmark test problem of multitask optimization with different dimensions, on which HD-MFEA is superior to MFEA and other improved algorithms. Secondly, we extend the application scope of multitask evolutionary computation: for the first time, the training of neural networks with different structures is formulated as a multitask optimization problem with different dimensions. According to the hierarchical characteristics of neural networks, a heterodimensional multifactor neuro-evolution algorithm, HD-MFEA Neuro-Evolution, is proposed to train multiple neural networks simultaneously. Through experiments on chaotic time series data sets, we find that the HD-MFEA Neuro-Evolution algorithm is far superior to other evolutionary algorithms, and its convergence speed and accuracy are better than the gradient algorithms commonly used in neural network training.