1. Introduction
Epidemiological models play a vital role in understanding the spread and severity of a pandemic (or epidemic) of infectious diseases [1]. During an outbreak of an infectious disease, it is crucial to simulate the potential outbreak growth for planning the outbreak control measures, in order to provide useful insights into measurable outcomes of existing interventions, predictions of subsequent growth, risk estimations, and guidance for alternative interventions [2,3,4]. Epidemiological constraints, such as delays in symptom appearance (due to the incubation period) and positive test confirmation (due to limited testing and detection resources), may limit the real-time use of epidemiological models [5,6]. To overcome such constraints, mathematical modeling of infectious diseases has been employed in epidemiology, as recognized by the WHO [7], and has proven to be effective [8,9]. Compartmental modeling, as a class of mathematical modeling, has been widely applied to infectious disease modeling [10]. The Susceptible-Infectious-Removed (SIR) model is the first compartmental modeling approach for simulating the probable outbreak trajectory [11]. Besides the standard model, various extensions of SIR have been developed in the recent past, mostly by including additional compartments, such as the Susceptible-Exposed-Infectious-Removed (SEIR) model. During the current COVID-19 global pandemic crisis, many studies (e.g., [12,13,14,15,16,17,18]) applied the SIR model and its extensions to analyze the dynamics of the disease. A recent review of the SIR family of models used for studying, predicting, and managing COVID-19 is presented in [19].
1.1. The Susceptible-Infectious-Removed (SIR) Model
Despite the development of various extensions of the SIR model, the standard SIR model remains the preferred first approach for analyzing the spreading of an infectious disease (especially in the beginning or first phase), and it is reasonably predictive [11]. The SIR model splits a given population N into three compartments (non-intersecting classes) at any given time t (measured in days), namely, (i) susceptible (not yet infectious and disease-free individuals at t), denoted S_t, (ii) infectious (confirmed or isolated individuals), denoted I_t, and (iii) removed (no longer infectious or dead), denoted R_t. The number of individuals in each compartment varies over time. In general, the dynamics are described with a large number of susceptible individuals at the beginning, since the entire population that is not infected is considered to be susceptible, while the number of infectious individuals remains low at the beginning of a pandemic. At subsequent times, the number of infectious individuals will increase, the number of removed individuals will begin to gradually increase, and the number of susceptible individuals will decrease. Finally, towards the end of a pandemic, the number of removed individuals will increase, the number of infectious individuals will decrease gradually, while the number of susceptible individuals will remain the lowest. The disease dynamics according to the SIR model can be visualized as in Figure 1. The variation rate over time in each compartment is modeled using a system of non-linear ordinary differential equations (ODEs),
dS/dt = −βSI/N,   dI/dt = βSI/N − γI,   dR/dt = γI.
The primary assumption of the SIR model is that the population is closed and fixed, and it is the sum of the individuals in all three compartments, i.e., N = S + I + R for all t. The epidemiological parameters of interest are (i) the transmission rate β and (ii) the recovery rate γ. Accordingly, the average transmission (from an infectious individual to a susceptible individual) period is 1/β days and the average infectious period is 1/γ days. Estimation of the SIR parameters is a critical task, since there is no closed-form analytical solution to the SIR model. Numerical approximation methods, such as the Runge–Kutta methods, are often employed in order to solve the SIR model with the estimated parameters. Thus, the quality of the simulation heavily relies on the estimates of the epidemiological parameters. Apart from that, a good estimate of the parameters is crucial in assessing the transmissibility of an infectious disease in real time through the effective reproduction number R_t. One of the approaches for inferring R_t is by using compartmental epidemiological models, in which R_t is treated as a deflation of the basic reproduction number R_0 [20], which can be estimated using the relationship among the epidemiological parameters [21] that is given by:
R_0 ≈ β/γ.
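For illustration, a minimal sketch of solving the SIR system numerically is given below. The sketch is in Python with SciPy's default Runge–Kutta solver (the experiments reported later in this paper were coded in MATLAB), and the values of β, γ, N, and the initial state are purely illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

def sir_rhs(t, y, beta, gamma, N):
    """Right-hand side of the SIR ODE system."""
    S, I, R = y
    dS = -beta * S * I / N
    dI = beta * S * I / N - gamma * I
    dR = gamma * I
    return [dS, dI, dR]

N = 1_000_000                      # hypothetical closed population
I0, R0_init = 1, 0                 # one initial case, no removals yet
S0 = N - I0 - R0_init              # everyone else is susceptible
beta, gamma = 0.42, 0.09           # assumed transmission and recovery rates
days = 220

sol = solve_ivp(sir_rhs, (0, days), [S0, I0, R0_init],
                args=(beta, gamma, N), t_eval=np.arange(days + 1))
S, I, R = sol.y                    # daily trajectories of the three compartments
print("basic reproduction number R0 ~", beta / gamma)
```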
1.2. Determining the SIR Parameters
The surging interest in infectious disease modeling due to the COVID-19 pandemic has led to a shift in compartmental modeling from the usual mathematical modeling strategy to a statistical modeling strategy [22]. In the latter strategy, the model parameters are estimated instead of being specified by or adapted from certain subjective prior information as in the former strategy (usually from previous studies, a previous pandemic, or values estimated by the WHO). The estimation of the SIR model parameters is essentially an optimization problem that attempts to find a model that best fits the data. The estimated parameters are then used along with a numerical approximation method to obtain the simulated trajectories of the SIR compartments. A least squares loss function in terms of the simulated and observed trajectories, such as the Sum of Squared Errors (SSE), is usually applied to quantify the discrepancy that arises from the simulation. Hence, the objective of the optimization is to minimize the loss function by estimating the parameters that lead to the best-fit curve through any standard optimization technique.
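As a concrete (hypothetical) illustration of such a loss, the following sketch computes the SSE between observed and simulated cumulative infectious cases; `simulate_daily_infectious` is an assumed helper, e.g., built on the ODE sketch above, that returns the simulated daily infectious counts for given β and γ.

```python
import numpy as np

def sse_loss(params, cum_observed, simulate_daily_infectious):
    """SSE between observed and simulated cumulative infectious cases."""
    beta, gamma = params
    daily = simulate_daily_infectious(beta, gamma, len(cum_observed))
    cum_simulated = np.cumsum(np.round(daily))          # rounded, then accumulated
    return float(np.sum((np.asarray(cum_observed) - cum_simulated) ** 2))
```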
The Bayesian approach has become a popular optimization and estimation tool, as evidenced by studies concerning the COVID-19 pandemic. This approach is commonly employed by calibrating the available data using the Markov Chain Monte Carlo (MCMC) method with Metropolis–Hastings (MH) sampling, as implemented in [6,12,13,14,18,23], to obtain posterior estimates and credible intervals of the epidemiological parameters. Although the estimation of the model parameters is an obvious optimization problem, the metaheuristics family of optimization techniques has received very little attention in epidemiological modeling. The first metaheuristic algorithm used for estimating the parameters in ODEs in general, and infectious diseases specifically, is the Genetic Algorithm (GA) (e.g., [24,25,26]). Recently, the Particle Swarm Optimization (PSO) algorithm was implemented to estimate the parameters in the ODEs governing SIR model variants, as presented in [27]. As for the COVID-19 pandemic, very few studies applied metaheuristic algorithms for estimating the epidemiological model parameters, such as GA in [28], PSO in [29,30], Stochastic Fractal Search in [31], the Marine Predators Algorithm in [32], and the Flower Pollination Algorithm with the Salp Swarm Algorithm in [33]. In this regard, we are interested in applying a metaheuristic algorithm, namely the Harmony Search and its variants, to the optimization problem of estimating the SIR model parameters.
This paper is organized, as follows. Section 2 details the Harmony Search algorithms that were used in this study. Section 3 presents the experimental setup of estimating the epidemiological parameters of SIR model while using Harmony Search algorithms. Section 4 provides the simulation results with detailed discussions. Finally, Section 5 provides the conclusion and possible future works of this study.
2. Harmony Search Algorithm and Selected Variants
The Harmony Search (HS) algorithm [34] is a well-known population-based metaheuristic algorithm. The optimization process in HS mimics the underpinning principles of a jazz music orchestra, where musicians attain a pleasant harmony through several improvisation steps. HS has been successfully applied to a wide variety of real-world optimization problems, such as system reliability, robot path planning, renewable energy systems, hyper-parameter tuning of deep neural networks, intelligent manufacturing, and credit scoring (see [35,36]); university timetabling, structural design, water distribution, and supply chain management (see [37,38]); and music composition, Sudoku puzzle solving, tour planning, web page clustering, vehicle routing, dam scheduling, groundwater modeling, soil stability analysis, ecological conservation, heat exchanger design, transportation energy modeling, satellite heat pipe design, medical physics, medical imaging, RNA structure prediction, and image segmentation (see [39], among others). Besides that, the implementation of HS in various parameter estimation studies indicates the potential of HS as an effective parameter estimation tool. Some of the notable parameter estimation problems that applied HS include parameter estimation of the nonlinear Muskingum model [40,41], parameter estimation in vapor-liquid equilibrium modeling [42], parameter estimation in an electrochemical lithium-ion battery model [43], parameter identification of synthetic gene networks [44], and design storm estimation from probability distribution models [45]. In addition, HS has also been successfully employed in human activity pattern modeling, such as disease spread and disaster response [46]. Hereinafter, the term HS represents the family of HS variants, while the standard HS is denoted SHS. The five primary steps in SHS, as outlined in [47], are as follows:
- Initialization of HS parameters and the objective function. The control parameters are the Harmony Memory Size HMS, the Harmony Memory Considering Rate HMCR, the Pitch-Adjusting Rate PAR, the bandwidth (now known as fret width) BW, and the maximum number of improvisations MaxImp. If f(·) is the objective function with n decision variables x = (x_1, ..., x_n) in the range (LB_i, UB_i), then the continuous optimization problem can be formulated as follows:
minimize f(x)   s.t.   x_i ∈ (LB_i, UB_i).
- Initialization of the Harmony Memory (HM). HM is a HMS × n dimensional matrix that consists of randomly generated harmonies (candidate solutions) from the uniform distribution U(0,1) within the ranges of the decision variables. In general, it is more convenient to represent HM as an augmented matrix of order HMS × (n + 1), as follows:
HM = [ x_1^1    ⋯  x_n^1    | f(x^1)
       ⋮        ⋱  ⋮        | ⋮
       x_1^HMS  ⋯  x_n^HMS  | f(x^HMS) ],
where each row of the HM represents the solution vector x^j, (j = 1, 2, ..., HMS), in the first n columns, followed by the fitness generated from the solution vector, f(x^j). It is also common to keep HM sorted in ascending order of the fitness value (the last column of HM).
- Improvisation. Improvisation is performed to generate a new harmony by exploring and exploiting the search space. A new harmony is randomly selected from the HM with a probability of HMCR, or it is randomly generated outside of HM with probability 1 − HMCR. If a new harmony is obtained from HM, then the harmony may be improvised by adjusting it with neighborhood values based on BW with probability PAR, or it remains as is with probability 1 − PAR. Note that HMCR is inversely proportional to the explorative power over different search spaces, while PAR is directly proportional to the exploitation power within the local search space.
- Update HM. New harmony from Step 3 is evaluated with the objective function to obtain the new fitness. If the new fitness is lower than the worst fitness, then the worst solution in HM will be replaced with the new harmony.
- Termination. Repeat Steps 3 and 4 until MaxImp has been reached or other termination criteria are satisfied.
The complete details of SHS are provided in Algorithm 1. SHS is designed with few mathematical operations and is relatively easy to code, yet it is applicable to a wide variety of optimization problems. The advantages of HS are discussed in [37]. Over the two decades since the inception of HS, many HS variants have been developed. A majority of the variants modify the improvisation procedure, either by internal modification or by hybridization with other heuristics. Some recent comprehensive reviews of HS variants are presented in [37,39,47,48,49]. In this study, only the internally modified HS variants were considered, selected on the basis of a minimal parameter setting requirement.
Algorithm 1: Standard Harmony Search
1: Set HMS, HMCR, PAR, BW, and MaxImp
2: x_i^j = LB_i + U(0,1) × (UB_i − LB_i), ∀ i = 1, 2, ..., n and ∀ j = 1, 2, ..., HMS   % generate HM
3: Compute f(x^j), ∀ j = 1, 2, ..., HMS   % compute fitness
4: while (t ≤ MaxImp) or (any stopping criterion) do
5: for each i ∈ [1, n] do
6: if U(0,1) ≤ HMCR then
7: x_i′ = x_i^j where j ∼ U(1, HMS)   % memory consideration
8: if U(0,1) ≤ PAR then
9: x_i′ = x_i′ + (2 × U(0,1) − 1) × BW   % pitch adjustment
10: end if
11: else
12: x_i′ = LB_i + U(0,1) × (UB_i − LB_i)   % random generation
13: end if
14: end for
15: if f(x′) < f(x_worst) then
16: replace x_worst in HM with x′
17: end if
18: end while
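A compact Python sketch of Algorithm 1 is given below for concreteness (the study's own implementation was in MATLAB and is not reproduced here). The default parameter values simply mirror the common settings reported later in Table 2, and the clipping after pitch adjustment is an added safeguard that Algorithm 1 does not spell out.

```python
import numpy as np

def standard_hs(f, lb, ub, HMS=30, HMCR=0.95, PAR=0.3, BW=0.01,
                max_imp=50_000, rng=None):
    """Minimal standard Harmony Search for continuous minimization."""
    rng = np.random.default_rng(rng)
    lb, ub = np.asarray(lb, dtype=float), np.asarray(ub, dtype=float)
    n = lb.size
    HM = lb + rng.random((HMS, n)) * (ub - lb)       # Step 2: random harmony memory
    fit = np.array([f(x) for x in HM])               # fitness of each harmony
    for _ in range(max_imp):                         # Steps 3-5: improvise and update
        x_new = np.empty(n)
        for i in range(n):
            if rng.random() <= HMCR:                 # memory consideration
                x_new[i] = HM[rng.integers(HMS), i]
                if rng.random() <= PAR:              # pitch adjustment
                    x_new[i] += (2 * rng.random() - 1) * BW
            else:                                    # random generation
                x_new[i] = lb[i] + rng.random() * (ub[i] - lb[i])
        x_new = np.clip(x_new, lb, ub)               # keep the harmony feasible
        f_new = f(x_new)
        worst = np.argmax(fit)
        if f_new < fit[worst]:                       # replace the worst harmony
            HM[worst], fit[worst] = x_new, f_new
    best = np.argmin(fit)
    return HM[best], fit[best]
```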
2.1. Improved Harmony Search (IHS)
The Improved Harmony Search (IHS) [50] is the prototypical HS variant. While the HMCR parameter still requires fine-tuning, the parameters PAR and BW are made dynamic in this variant through the introduction of the PAR_max (BW_max) and PAR_min (BW_min) iterative parameters. Although IHS has shown better performance than SHS, this variant increases the burden of setting suitable values for four parameters instead of just two in SHS [51]. Nevertheless, IHS is considered to be a breakthrough that paved the way for the development of various HS variants to date. PAR and BW are adjusted at each iteration using:
PAR_t = PAR_min + ((PAR_max − PAR_min)/MaxImp) × t,
BW_t = BW_max × (BW_min/BW_max)^(t/MaxImp).
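For reference, a small Python sketch of these two schedules could be written as follows; the default PAR_min and PAR_max values simply echo the settings listed later in Table 2.

```python
def ihs_par(t, max_imp, par_min=0.01, par_max=0.99):
    # PAR increases linearly with the iteration counter t
    return par_min + (par_max - par_min) / max_imp * t

def ihs_bw(t, max_imp, bw_min, bw_max):
    # BW decreases exponentially from bw_max towards bw_min
    return bw_max * (bw_min / bw_max) ** (t / max_imp)
```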
The computational procedure of IHS is provided in Algorithm 2.
Algorithm 2: Improved Harmony Search
1: Set HMS, HMCR, PAR using (5), BW using (6), and MaxImp
2: x_i^j = LB_i + U(0,1) × (UB_i − LB_i), ∀ i = 1, 2, ..., n and ∀ j = 1, 2, ..., HMS
3: Compute f(x^j), ∀ j = 1, 2, ..., HMS
4: while (t ≤ MaxImp) do
5: for each i ∈ [1, n] do
6: if U(0,1) ≤ HMCR then
7: x_i′ = x_i^j where j ∼ U(1, HMS)
8: if U(0,1) ≤ PAR_t then
9: x_i′ = x_i′ + (2 × U(0,1) − 1) × BW_t
10: end if
11: else
12: x_i′ = LB_i + U(0,1) × (UB_i − LB_i)
13: end if
14: end for
15: if f(x′) < f(x_worst) then
16: replace x_worst in HM with x′
17: end if
18: end while
A year later, a new variant inspired by the swarm intelligence concept of PSO was introduced by the principal developer of IHS. This variant is known as the Global Harmony Search (GHS) [52] and aims to mimic the best harmony in the HM. The parameter PAR is adapted from IHS, while BW is removed. Thus, the pitch adjustment step in SHS is replaced with a random selection of the best harmony of any decision variable from the HM, as follows:
x_i′ = x_k^best,   i ∈ [1, n],   k ∼ U(1, n).
In general, GHS is claimed to perform better than SHS and IHS, especially in high-dimensional optimization problems. However, [53] asserted that GHS has flaws that will cause premature convergence and the name of this variant is also said to be misleading. The most serious flaw, as noted by [51], is the frequent generation of infeasible new harmonies, whenever the upper and lower bounds of each decision variable are not identical in the given optimization problem. Hence, GHS is not considered in this paper.
2.2. Novel Global Harmony Search (NGHS)
The Novel Global Harmony Search (NGHS) [54] adapts the swarm intelligence concept of the PSO algorithm in the improvisation step of SHS. This approach enables the new harmony to mimic the global-best harmony in the HM. Thus, the HMCR and PAR parameters are removed and the improvisation depends only on the best and worst harmonies in HM. The random generation of a harmony is, in fact, analogous to SHS; the only difference is that, instead of randomly selecting a harmony with probability 1 − HMCR, NGHS performs genetic mutation with probability P_m, based on ideas from evolutionary algorithms. NGHS is presented in Algorithm 3.
Algorithm 3: Novel Global Harmony Search
1: Set HMS, P_m, and MaxImp
2: x_i^j = LB_i + U(0,1) × (UB_i − LB_i), ∀ i = 1, 2, ..., n and ∀ j = 1, 2, ..., HMS
3: Compute f(x^j), ∀ j = 1, 2, ..., HMS
4: while (t ≤ MaxImp) do
5: for each i ∈ [1, n] do
6: x_R = 2 × x_i^best − x_i^worst
7: x_R = min(max(x_R, LB_i), UB_i)
8: x_i′ = x_i^worst + U(0,1) × (x_R − x_i^worst)   % position updating
9: if (U(0,1) ≤ P_m) then   % genetic mutation
10: x_i′ = LB_i + U(0,1) × (UB_i − LB_i)
11: end if
12: end for
13: if f(x′) < f(x_worst) then
14: replace x_worst in HM with x′
15: end if
16: end while
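A Python sketch of the NGHS improvisation step is given below (an illustration, not the authors' implementation); `HM` is the harmony memory matrix and `fit` the corresponding fitness vector.

```python
import numpy as np

def nghs_improvise(HM, fit, lb, ub, p_m=0.005, rng=None):
    """One NGHS improvisation: move from the worst harmony towards (and beyond)
    the best one, with a small genetic-mutation probability p_m."""
    rng = np.random.default_rng(rng)
    best, worst = HM[np.argmin(fit)], HM[np.argmax(fit)]
    x_new = np.empty_like(best)
    for i in range(best.size):
        x_r = 2.0 * best[i] - worst[i]                        # mirrored target point
        x_r = min(max(x_r, lb[i]), ub[i])                     # keep the target feasible
        x_new[i] = worst[i] + rng.random() * (x_r - worst[i]) # position updating
        if rng.random() <= p_m:                               # genetic mutation
            x_new[i] = lb[i] + rng.random() * (ub[i] - lb[i])
    return x_new
```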
2.3. Self-Adaptive Global Best Harmony Search (SGHS)
The Self-Adaptive Global Best Harmony Search (SGHS) [55] aims to improve the GHS [52] in terms of avoiding getting trapped at local optima. In this approach, HMCR and PAR are dynamically adjusted to a suitable range after a number of iterations by tracking the previous values that allowed the replacement of a new harmony in HM. Further, HMCR and PAR are assumed to be normally distributed, with HMCR ∼ N(HMCR_m, 0.01), HMCR ∈ [0.9, 1.0] and PAR ∼ N(PAR_m, 0.05), PAR ∈ [0.0, 1.0]. The initial values of HMCR_m and PAR_m are set at 0.98 and 0.9, respectively. Subsequently, SGHS begins with HMCR and PAR values generated from the Normal distribution. During each iteration, the values of HMCR and PAR that correspond to a replacement of a new harmony in HM are recorded, until a number of solutions have been generated within the specified learning period LP. Once LP is reached, the HMCR and PAR values recorded in the previous iterations are averaged to obtain new HMCR_m and PAR_m to be used in the upcoming iterations. This process is repeated until the termination criterion is satisfied. As for BW, the values are dynamically adapted, as follows:
BW_t = BW_max − ((BW_max − BW_min)/MaxImp) × t   for t < MaxImp/2,   BW_t = BW_min   for t ≥ MaxImp/2.
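The per-iteration drawing of HMCR and PAR described above can be sketched as follows (an illustrative reading that treats the stated 0.01 and 0.05 as standard deviations; not the original implementation).

```python
import numpy as np

def sghs_sample_params(hmcr_m, par_m, rng=None):
    """Draw the per-iteration HMCR and PAR around their running means,
    clipped to the admissible ranges used by SGHS."""
    rng = np.random.default_rng(rng)
    hmcr = float(np.clip(rng.normal(hmcr_m, 0.01), 0.9, 1.0))
    par = float(np.clip(rng.normal(par_m, 0.05), 0.0, 1.0))
    return hmcr, par
```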
SGHS is outlined in Algorithm 4.
Algorithm 4: Self-Adaptive Global Best Harmony Search
1: Set HMS, HMCR_m, PAR_m, BW using (8), LP, and MaxImp
2: x_i^j = LB_i + U(0,1) × (UB_i − LB_i), ∀ i = 1, 2, ..., n and ∀ j = 1, 2, ..., HMS
3: Compute f(x^j), ∀ j = 1, 2, ..., HMS
4: Initialize solution counter lp = 1
5: Generate HMCR and PAR based on HMCR_m and PAR_m
6: while (t ≤ MaxImp) do
7: for each i ∈ [1, n] do
8: if U(0,1) ≤ HMCR then
9: x_i′ = x_i^j ± U(0,1) × BW_t where j ∼ U(1, HMS)
10: if U(0,1) ≤ PAR then
11: x_i′ = x_i^best
12: end if
13: else
14: x_i′ = LB_i + U(0,1) × (UB_i − LB_i)
15: end if
16: end for
17: if f(x′) < f(x_worst) then
18: replace x_worst in HM with x′
19: record the values of HMCR and PAR
20: end if
21: if lp = LP then
22: recompute HMCR_m and PAR_m by averaging the recorded values of HMCR and PAR
23: reset lp = 1
24: else
25: lp = lp + 1
26: end if
27: end while
2.4. Intelligent Tuned Harmony Search (ITHS)
Based on the idea of a sub-population approach to optimization (such as [56]) and decision-making theory, the Intelligent Tuned Harmony Search (ITHS) [57] attempts to intelligently control the exploration and exploitation in HS based on consciousness or previous experience. This approach begins by assigning x^best as the leader of the whole population. The leader divides the HM into two groups (sub-populations), say Group I and Group II, in order to achieve a good balance between exploration and exploitation. Group I consists of the harmonies with fitness less than or equal to the average fitness, and Group II vice-versa. In that sense, Group I will undergo both exploration and exploitation stages, while Group II will only undergo the exploration stage. ITHS uses the same adaptation of dynamic PAR as the Self-Adaptive Harmony Search (SAHS) [53], which is given by:
PAR_t = PAR_max − (PAR_max − PAR_min) × t/MaxImp.
Algorithm 5 presents the computational steps of ITHS.
2.5. Novel Self-Adaptive Harmony Search (NSHS)
The Novel Self-Adaptive Harmony Search (NSHS) is an HS variant developed by [51], inspired by the defects its creator found in SHS and other variants, namely IHS [50], GHS [52], SAHS [53], the Dynamic Local Harmony Search (DLHS) [56], and SGHS [55]. In NSHS, the HMCR parameter is constructed based on the dimension of the optimization problem to be solved,
HMCR = 1 − 1/(n + 1).
With reference to Equation (10), HMCR is set to be directly proportional to n, in order to use the HM more frequently, and it lies in the interval (0.5, 1). The parameter PAR is removed. Furthermore, a dynamic fine-tuned BW is introduced, which depends on the standard deviation S of the objective function, f_std = S(f(x^j)), ∀ j = 1, 2, ..., HMS. BW diminishes in stages according to the iteration number t, while increasing with a larger range of decision variables. The improvisation step in NSHS generates a new harmony within the narrow range of [x_i^worst, x_i^best] based on conditions involving HMCR and f_std. Algorithm 6 outlines the computational procedure of NSHS.
Algorithm 5: Intelligent Tuned Harmony Search
1: Set HMS, HMCR, PAR using (9), and MaxImp
2: x_i^j = LB_i + U(0,1) × (UB_i − LB_i), ∀ i = 1, 2, ..., n and ∀ j = 1, 2, ..., HMS
3: Compute f(x^j), ∀ j = 1, 2, ..., HMS
4: while (t ≤ MaxImp) do
5: for each i ∈ [1, n] do
6: if U(0,1) ≤ HMCR then
7: x_i′ = x_i^j where j ∼ U(1, HMS)
8: if U(0,1) ≤ PAR_t then
9: f_mean = mean(f(x^j))
10: if (f(x_i′) ≤ f_mean) then   % Group 1
11: if (U(0,1) ≤ 0.5) then
12: x_i′ = x_i^best − (x_i^best − x_i′) × U(0,1)
13: else   % Group 2
14: x_i′ = x_i^best + (x_i^worst − x_i′) × U(0,1)
15: end if
16: else
17: m = integer(1 + (n − 1) × U(0,1))
18: x_m^best = x_m^best × UB_i/UB_m
19: x_i′ = x_i′ + (x_m^best − x_i′) × U(0,1)
20: end if
21: x_i′ = min(max(x_i′, LB_i), UB_i)
22: end if
23: else
24: x_i′ = LB_i + U(0,1) × (UB_i − LB_i)
25: end if
26: end for
27: if f(x′) < f(x_worst) then
28: replace x_worst in HM with x′
29: end if
30: end while
Algorithm 6: Novel Self-Adaptive Harmony Search
1: Set HMS, HMCR using (10), and MaxImp
2: x_i^j = LB_i + U(0,1) × (UB_i − LB_i), ∀ i = 1, 2, ..., n and ∀ j = 1, 2, ..., HMS
3: Compute f(x^j), ∀ j = 1, 2, ..., HMS and the corresponding f_std
4: while (t ≤ MaxImp) do
5: for each i ∈ [1, n] do
6: if U(0,1) ≤ HMCR then
7: x_i′ = x_i^j where j ∼ U(1, HMS)
8: else
9: if (f_std > 0.0001) then
10: x_i′ = LB_i + U(0,1) × (UB_i − LB_i)
11: else
12: x_i′ = x_i^best + U(0,1) × (x_i^worst − x_i^best)
13: end if
14: end if
15: if (f_std > 0.0001) then
16: x_i′ = x_i′ + (UB_i − LB_i)/100 × (1 − t/MaxImp) × U(−1,1)
17: else
18: x_i′ = x_i′ + 0.0001 × U(−1,1)
19: end if
20: end for
21: if f(x′) < f(x_worst) then
22: replace x_worst in HM with x′
23: end if
24: end while
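The two NSHS ingredients described above, the dimension-driven HMCR and the f_std-conditioned fine-tuning of a freshly improvised component, can be sketched in Python as follows (an illustration, not the original implementation).

```python
import numpy as np

def nshs_hmcr(n):
    # Equation-(10)-style HMCR: grows with the problem dimension n, stays in (0.5, 1)
    return 1.0 - 1.0 / (n + 1)

def nshs_finetune(x_i, lb_i, ub_i, f_std, t, max_imp, rng=None):
    """Fine-tune one component as in Algorithm 6: a wide, shrinking perturbation
    while the memory is still diverse (large f_std), a tiny one otherwise."""
    rng = np.random.default_rng(rng)
    if f_std > 1e-4:
        return x_i + (ub_i - lb_i) / 100 * (1 - t / max_imp) * rng.uniform(-1, 1)
    return x_i + 1e-4 * rng.uniform(-1, 1)
```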
2.6. Global Dynamic Harmony Search (GDHS)
Based on IHS [50], the Global Dynamic Harmony Search (GDHS) [58] further improves the improvisation step of HS with dynamic parameters, as well as dynamic upper and lower bounds of the decision variables. The iterative values of HMCR and PAR are made to first increase and then decrease over the course of the search for the global optimum, as given by:
HMCR_t = 0.9 + 0.2 × ((t − 1)/(MaxImp − 1)) × (1 − (t − 1)/(MaxImp − 1)),
PAR_t = 0.85 + 0.3 × ((t − 1)/(MaxImp − 1)) × (1 − (t − 1)/(MaxImp − 1)).
For BW, the dynamic adjustment is adapted from IHS, but with a few modifications, as follows:
BW_den = 20 × |1 + log10(UB_i − LB_i)|,   BW_max = (UB_i − LB_i)/BW_den,   BW_min = 0.001 × BW_max,
and, based on Equation (6) from IHS, the equation reduces to,
BW_t = (0.001)^(t/MaxImp).
Next, a correction coefficient, coef, is introduced at each iteration by:
coef_t = (1 + (HMS − j)) × (1 − (t − 1)/(MaxImp − 1)),
where j is the index of the selected harmony in the memory consideration step.
Finally, the dynamic lower and upper bounds are obtained for the random selection step. Algorithm 7 provides the computational steps of GDHS.
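For reference, the per-iteration GDHS quantities above can be collected into a single helper, as in the following sketch (illustrative only); `j` is the memory-consideration row index mentioned in the text.

```python
def gdhs_schedules(t, max_imp, hms, j):
    """Per-iteration HMCR, PAR, BW, and correction coefficient of GDHS."""
    u = (t - 1) / (max_imp - 1)          # normalized progress of the run
    hmcr = 0.9 + 0.2 * u * (1 - u)
    par = 0.85 + 0.3 * u * (1 - u)
    bw = 0.001 ** (t / max_imp)
    coef = (1 + (hms - j)) * (1 - u)
    return hmcr, par, bw, coef
```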
Algorithm 7: Global Dynamic Harmony Search
1: Set HMS, HMCR using (11), PAR using (12), BW using (14), and MaxImp
2: x_i^j = LB_i + U(0,1) × (UB_i − LB_i), ∀ i = 1, 2, ..., n and ∀ j = 1, 2, ..., HMS
3: Compute f(x^j), ∀ j = 1, 2, ..., HMS
4: while (t ≤ MaxImp) do
5: for each i ∈ [1, n] do
6: if U(0,1) ≤ HMCR_t then
7: x_i′ = x_i^j where j ∼ U(1, HMS)
8: if U(0,1) ≤ PAR_t then
9: compute coef_t using (15)
10: x_i′ = x_i′ ± BW_t × coef_t
11: if (x_i′ > UB_i or x_i′ < LB_i) then
12: x_i′ = x_i′ ∓ BW_t × coef_t
13: end if
14: end if
15: else
16: UB_i^HM = x_i^worst and LB_i^HM = x_i^best
17: UB_i′ = UB_i^HM + BW_max and LB_i′ = LB_i^HM − BW_max
18: x_i′ = LB_i′ + U(0,1) × (UB_i′ − LB_i′)
19: end if
20: end for
21: if f(x′) < f(x_worst) then
22: replace x_worst in HM with x′
23: end if
24: end while
2.7. Parameter Adaptive Harmony Search (PAHS)
The Parameter Adaptive Harmony Search (PAHS) [59] focused on the modification of the improvisation step of IHS [50]. Dynamic values of HMCR, PAR, and BW are generated during each iteration to ensure that the global optimum is reached. The authors explored four different combinations of dynamic HMCR and PAR iterative values, i.e., (i) linear HMCR and PAR; (ii) exponential HMCR and linear PAR; (iii) linear HMCR and exponential PAR; and (iv) exponential HMCR and PAR. Through computational experiments, it was concluded that linear HMCR and exponential PAR yields the best performance. In this way, HMCR gradually increases, while PAR exponentially decreases with respect to the iterations. HMCR and PAR at each iteration are computed using:
HMCR_t = HMCR_min + (HMCR_max − HMCR_min) × t/MaxImp,
PAR_t = PAR_max × (PAR_min/PAR_max)^(t/MaxImp),
whereas BW is adapted from IHS as is. PAHS further aggravates the difficulty of finding suitable parameter values, as there are now six parameters to be set rather than only four in IHS. PAHS is detailed in Algorithm 8.
Algorithm 8: Parameter Adaptive Harmony Search
1: Set HMS, HMCR using (16), PAR using (17), BW using (6), and MaxImp
2: x_i^j = LB_i + U(0,1) × (UB_i − LB_i), ∀ i = 1, 2, ..., n and ∀ j = 1, 2, ..., HMS
3: Compute f(x^j), ∀ j = 1, 2, ..., HMS
4: while (t ≤ MaxImp) do
5: for each i ∈ [1, n] do
6: if U(0,1) ≤ HMCR_t then
7: x_i′ = x_i^j where j ∼ U(1, HMS)
8: if U(0,1) ≤ PAR_t then
9: x_i′ = x_i′ + (2 × U(0,1) − 1) × BW_t
10: end if
11: else
12: x_i′ = LB_i + U(0,1) × (UB_i − LB_i)
13: end if
14: end for
15: if f(x′) < f(x_worst) then
16: replace x_worst in HM with x′
17: end if
18: end while
2.8. Enhanced Self-Adaptive Global Best Harmony Search (ESHS)
The Enhanced Self-Adaptive Global Best Harmony Search (ESHS) [60] is one of the recent HS variants that retains the simplicity and distinctive framework of SHS. In order to eliminate the troublesome parameter fine-tuning process in SHS, a new parameter-setting-free strategy is proposed that requires no extra statistics or external archive. HMCR is dynamically obtained at each iteration as a random normal number,
HMCR_t = N(n/(1 + n), 1/(1 + n)),
while PAR is given by:
PAR_t = 1 − (t − 1)/MaxImp,
and BW is defined by:
BW_t = |x_i^h − x_i′|   if x_i^h ≠ x_i′, h ∼ U(1, HMS),
BW_t = (UB_i − LB_i) × e^((LB_i − UB_i) × t/MaxImp)   otherwise.
ESHS employs the Gaussian mutation technique, in contrast to the uniform randomization in SHS. Gaussian mutation is claimed to be more efficient in exploring the global optimum solution as compared to uniform randomization. Thus, the random selection in ESHS is performed using:
x_i′ = N(μ_i, σ),   μ_i = x_i^best,   σ = 1 − ((t − 1)/MaxImp)²,
with probability 1 − HMCR. ESHS is detailed in Algorithm 9.
Algorithm 9: Enhanced Self-Adaptive Global Best Harmony Search
1: Set HMS, HMCR using (18), PAR using (19), BW using (20), and MaxImp
2: x_i^j = LB_i + U(0,1) × (UB_i − LB_i), ∀ i = 1, 2, ..., n and ∀ j = 1, 2, ..., HMS
3: Compute f(x^j), ∀ j = 1, 2, ..., HMS
4: while (t ≤ MaxImp) do
5: for each i ∈ [1, n] do
6: if U(0,1) < HMCR_t then
7: x_i′ = x_i^j where j ∼ U(1, HMS)
8: if U(0,1) < PAR_t then
9: x_i′ = x_i′ + U(−1,1) × BW_t
10: end if
11: else
12: perform Gaussian mutation using (21)
13: end if
14: if (x_i′ > UB_i or x_i′ < LB_i) then
15: x_i′ = LB_i + U(0,1) × (UB_i − LB_i)
16: end if
17: end for
18: if f(x′) < f(x_worst) then
19: replace x_worst in HM with x′
20: end if
21: end while
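The two ESHS ingredients that differ most from SHS, the per-iteration Gaussian draw of HMCR and the Gaussian mutation that replaces uniform random selection, can be sketched as follows (an illustrative reading that treats the stated spread parameters as standard deviations; not the original implementation).

```python
import numpy as np

def eshs_hmcr(n, rng=None):
    # HMCR drawn afresh each iteration around n/(1+n)
    rng = np.random.default_rng(rng)
    return float(rng.normal(n / (1 + n), 1 / (1 + n)))

def eshs_gaussian_mutation(x_best_i, t, max_imp, rng=None):
    # Gaussian draw centred on the best harmony; the spread shrinks over the run
    rng = np.random.default_rng(rng)
    sigma = 1 - ((t - 1) / max_imp) ** 2
    return float(rng.normal(x_best_i, sigma))
```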
2.9. Improved Binary Global Harmony Search (IBGHS)
The Improved Binary Global Harmony Search (IBGHS) [61] is a binary variant of NGHS [54] that aims to address two limitations of NGHS, namely the local optima trap and slow convergence. The improvisation step is modified with the introduction of a control parameter P_c in place of HMCR, together with a linear combination of the best and worst harmonies, in order to improve the global search ability and convergence speed of NGHS. Algorithm 10 provides the computational procedure of IBGHS.
Algorithm 10: Improved Binary Global Harmony Search
1: Set HMS, P_c, P_m, PAR, BW, and MaxImp
2: x_i^j = LB_i + U(0,1) × (UB_i − LB_i), ∀ i = 1, 2, ..., n and ∀ j = 1, 2, ..., HMS
3: Compute f(x^j), ∀ j = 1, 2, ..., HMS
4: while (t ≤ MaxImp) do
5: for each i ∈ [1, n] do
6: if (U(0,1) ≤ P_c) then   % control
7: x_R = 2 × x_i^best − x_i^worst
8: x_R = min(max(x_R, LB_i), UB_i)
9: x_i′ = x_i^worst + U(0,1) × (x_R − x_i^worst)   % position updating
10: if (U(0,1) ≤ P_m) then   % genetic mutation
11: x_i′ = LB_i + U(0,1) × (UB_i − LB_i)
12: else
13: x_i′ = 0.7 × x_i^best + 0.3 × x_i^worst
14: if (U(0,1) ≤ PAR) then
15: x_i′ = x_i′ + U(0,1) × BW
16: x_i′ = min(max(x_i′, LB_i), UB_i)
17: end if
18: end if
19: end if
20: end for
21: if f(x′) < f(x_worst) then
22: replace x_worst in HM with x′
23: end if
24: end while
Despite being successfully applied in various fields, to the best of our knowledge, HS algorithms have yet to be applied in epidemiology, particularly in epidemiological modeling. Thus, in this study, ten variants of the HS algorithm are applied to estimate the epidemiological parameters of interest in the prototypical compartmental epidemiological SIR model, and the estimation performance of each algorithm is compared.
3. Estimating the Epidemiological Parameters of SIR Model
HS algorithms are applied to the SIR parameter estimation problem using the cumulative infectious cases (total cases) of the COVID-19 pandemic as the use-case. The optimized (final) values of the parameters β and γ are estimated by calibrating the SIR model to the available COVID-19 data with the generated harmonies (candidate solutions) from the HM.
3.1. COVID-19 Data Sets
The authors obtained the time series of cumulative infectious cases per day for five countries, namely the United States of America (USA), France (FR), South Korea (SK), Ireland (IR), and Singapore (SG), by web scraping the figures from https://www.worldometers.info/coronavirus (which gathers data from various reliable sources, including the European Centre for Disease Prevention and Control and the Johns Hopkins University & Medicine Coronavirus Resource Center). The data collected cover a period of 240 days, beginning from the first day of the outbreak in each country. The first 220 days of data are used for calibration to obtain the parameter estimates. The remaining 20 days of data are used for validation against the projected simulation produced using the HS-optimized parameter values.
3.2. SIR Model Setup
The SIR compartments are initialized (at time t = 0) with initial conditions I0 and R0, according to the actual numbers of infectious and removed cases, respectively, on the first day of the outbreak in each country. Meanwhile, S0 is set as the remaining individuals in the population N who are yet to be infected or removed, i.e., S0 = N − I0 − R0. Table 1 provides the modeling initialization.
3.3. SIR Parameters Estimation as Optimization Problem
The estimation of the SIR parameters is formulated as an optimization problem with decision variables x = {x1, x2}, where β = x1 and γ = x2. The upper and lower bounds of the decision variables are set according to the complete possible ranges of β and γ, which are essentially rates of the epidemiological dynamics and hence share the same bounds, i.e., β, γ ∈ (0, 1). The objective function is formulated as follows:
- Let C_T = Σ_{t=0}^{T} I_t be the observed cumulative infectious cases on day t = T, (t, T ∈ [0, 219]), where I_t is the observed infectious cases on day t.
- Let Ĉ_T = Σ_{t=0}^{T} Î_t be the simulated cumulative infectious cases (rounded to the corresponding integers) on day t = T, (t, T ∈ [0, 219]), where Î_t is the SIR model simulated infectious cases (rounded to the corresponding integers) on day t while using x ∈ HM.
- Subsequently, an objective function f(·) that minimizes the SSE between C_T and Ĉ_T can be formulated as below (a short code sketch tying this formulation to the earlier illustrations follows it):
minimize f(x) = Σ_{T=0}^{219} (C_T − Ĉ_T)²   s.t.   x ∈ (0, 1).
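Tying the earlier sketches together, the whole estimation can be expressed in a few lines; all helper names (`standard_hs`, `sse_loss`, `simulate_daily_infectious`) and the observed 220-day series `cum_observed` are the illustrative assumptions introduced above, not the authors' code.

```python
# Estimate (beta, gamma) in (0, 1) by minimizing the cumulative-case SSE with SHS.
fitness = lambda x: sse_loss(x, cum_observed, simulate_daily_infectious)
x_best, sse_best = standard_hs(fitness, lb=[0.0, 0.0], ub=[1.0, 1.0],
                               HMS=30, max_imp=50_000)
beta_hat, gamma_hat = x_best
```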
A set of ten independent runs is performed for each HS algorithm while using each of the data sets. The optimization steps are detailed, as follows:
- The common control parameters shared among all ten algorithms are set to be identical, HMS = 30 and MaxImp = 50,000, following the recommendation in [47].
- Ten initial HMs (HM_1, HM_2, ..., HM_10) and ten corresponding initial fitness vectors (f_1, f_2, ..., f_10) are generated to be used in each run according to the initialization of HM (Step 2) in Section 2.
- Optimization is performed for ten independent runs for each data set while using the identical HM and f for each of the algorithms (Algorithm 1 to Algorithm 10), as described in Section 2. For instance, the first run for the USA data set is performed using the same HM_1 and f_1 for all ten HS algorithms. The specific control parameters that are shared among some algorithms are also set to be identical, as displayed in Table 2.
- Repeat Step 3 until the runs are completed for all five data sets. The combination of parameter values that yields the best fitness (lowest SSE) is designated as the optimized parameters.
- The average values {x̄_1, x̄_2} of {x_1, x_2} across the runs for each HS algorithm are obtained, where,
x̄_i = (Σ_{k=1}^{10} x_i^k)/10,   (i = 1, 2),   (k = 1, 2, ..., 10).
Subsequently, {x̄_1, x̄_2} are designated as the overall optimized SIR parameters with respect to each algorithm, thus x_optimized = {x̄_1, x̄_2}. Note that the identical common and specific control parameters, as well as the identical HM and f, are used in this study to ensure a fair comparison among the HS algorithms.
3.4. Evaluation of HS Estimation and Performance Comparison
The accuracy of the SIR parameter estimates is deduced from the fitness values (SSE), where the algorithm with the lowest fitness is designated as the best performing estimator and vice-versa. The optimized epidemiological parameters x_optimized are supplied to the SIR model once again to produce a projected simulation for a period of 20 days subsequent to the end date of the calibration period, in order to evaluate the predictive capability of the SIR model using the parameters estimated by the HS algorithms. Let C_T be the observed cumulative infectious cases in the projection period and Ĉ_T be the projected SIR model simulated cumulative infectious cases using x_optimized; then the accuracy of the estimation is evaluated by computing the Root Mean Squared Error (RMSE) between C_T and Ĉ_T over the projection period using:
RMSE = √( Σ_{T=220}^{239} (C_T − Ĉ_T)² / 20 ).
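A direct transcription of this measure (an illustration in Python) is:

```python
import numpy as np

def projection_rmse(cum_observed, cum_projected):
    """RMSE over the 20-day projection window."""
    err = np.asarray(cum_observed, dtype=float) - np.asarray(cum_projected, dtype=float)
    return float(np.sqrt(np.mean(err ** 2)))
```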
The accuracy of the SIR model's projected simulation is decided based on the RMSE value, where a lower RMSE value indicates a better prediction and vice-versa. Eventually, the predictive capability indicates the parameter estimation accuracy of the HS variants. The corresponding RMSE for each algorithm within the same data set will be statistically compared. The comparison is performed using the Friedman test to see whether there is an overall significant difference in the estimations produced by the algorithms. Furthermore, if the performance of SHS is found to be comparable with the rest of the HS variants, then the post-hoc procedure of Wilcoxon signed-rank tests is conducted in order to determine whether there is any statistically significant difference between the estimation performance of SHS and each of the other HS variants individually.
4. SIR Simulation Experiments and Discussion
In this section, SIR simulation experiments are performed to illustrate the ability of HS algorithms as efficacious estimators of the SIR parameters. All of the HS variants were coded in MATLAB R2017b on a laptop computer with a 2.50 GHz Intel i7-4710HQ CPU and 32 GB of RAM. The discussion is presented separately for each data set used in this study.
4.1. Simulations and Projected Simulations
4.1.1. USA Data Set (United States of America)
Table 3 presents the optimized epidemiological parameters and the corresponding fitness values (averages from ten independent runs) for the USA data set. Figure 2 displays the visualization of the 220-day simulation together with the 20-day projected simulation using the optimized parameters. Visual inspection for the USA data set is not informative enough, as some of the lines representing the algorithms overlap, which suggests that the estimates are very close to each other; only NGHS, IBGHS, SHS, ITHS, ESHS, PAHS, and IHS are distinctly visible. The simulations of all algorithms for approximately the first 100 days were indeed close to each other, as well as to the observed cumulative cases. Beginning from the hundredth day of the outbreak, the simulations of the algorithms showed differences as they started deviating from the observed values, except for PAHS and IHS, which remained close to the observed values. The projected simulation for the subsequent 20 days was consistent with the pattern seen in the calibration period. It is observed from the fitness values (SSE) in Table 3 that IHS appears to be the best performing estimator for the USA data set, while NGHS is the least performing estimator.
For the USA data set, it is observed that the simulations from each HS algorithm are not very far off the observed cumulative cases, even in the projection period. Most of the simulations are similar as far as the parameters' values are concerned, and they were able to approximately resemble the observed trend. The simulations of NGHS and IBGHS deviate the farthest from the actual values. The similar behavior of these two algorithms may be due to the use of genetic mutation in the improvisation step, which sets them apart from the rest of the algorithms.
4.1.2. FR Data Set (France)
Table 4 provides the optimized epidemiological parameters and the corresponding fitness values (averages from ten independent runs) for the FR data set. The visualization of the 220-day simulation together with the 20-day projected simulation using the optimized parameters is depicted in Figure 3. The simulations can be visually inspected, as the lines representing the algorithms do not overlap each other, except for SHS, which overlaps with GDHS. During the calibration period, the simulation of IHS was similar but not very close to those of ESHS and PAHS. However, in the projection period, the IHS simulation appears closer to ESHS and PAHS. The other groups of algorithms that produced similar simulations are ITHS, NSHS, and SGHS; and SHS, GDHS, and IBGHS. The simulations started diverging from each other after day 60 of the outbreak and gradually digressed from the observed cumulative values all the way up to the projection period. The projected simulation for the subsequent 20 days depicted a further deviation of the simulations from the observed values. It is observed from the fitness values (SSE) in Table 4 that IBGHS appears to be the best performing estimator for the FR data set, while NGHS is the least performing estimator, just as in the case of the USA data set.
For the FR data set, observe that the simulations are slightly far off the observed cumulative cases. The simulations were not able to accurately mimic the observed trend. NGHS's simulation stood apart all the way, whereas we can infer that the simulations of IHS, ESHS, and PAHS are similar; ITHS, NSHS, and SGHS are alike; and finally, SGHS, GDHS, SHS, and IBGHS are close to each other. NGHS's simulation is again the farthest and different from the rest, which is probably due to the use of genetic mutation, which sets it apart. Genetic mutation is also used in IBGHS, but the standard PAR setting in IBGHS could have contributed to its behavior being more similar to SHS than to NGHS. Hence, for this particular data set, the PAR parameter was more influential than genetic mutation, as compared to the USA data set.
4.1.3. SK Data Set (South Korea)
Table 5 presents the optimized epidemiological parameters and the corresponding fitness values (averages from ten independent runs) for the SK data set. Figure 4 displays the visualization of the 220-day simulation, together with the 20-day projected simulation using the optimized parameters. The simulations can be well visualized, as the lines representing each algorithm are distinct, except for SHS, which overlaps with ESHS. The simulations started departing from the observed cumulative cases as early as the fiftieth day of the outbreak. The band of simulations also started deviating around the same time and gradually separated up to the projection period. All of the simulations were distinguishable, except in the case of IHS and NSHS, which were close to each other, and SHS and ESHS, which were similar. The projected simulation for the subsequent 20 days indicated a larger deviation of the simulations from the observed values. We observe from the fitness values (SSE) in Table 5 that ITHS appears to be the best performing estimator for the SK data set, while NGHS is the least performing estimator, just as in the case of the USA and FR data sets.
For the SK data set, observe that the simulations are quite far off the observed cumulative cases, with NGHS being the farthest and ITHS being the nearest. Yet, the ITHS simulation is still comparatively far off the observed cumulative cases. The simulations were not able to resemble the observed trend well. The simulation for the SK data set is clearly not as good as for the USA or FR data sets. The simulations of all the algorithms were distinct, except for the overlapping SHS and ESHS. NGHS's simulation is again very different from the rest, probably due to the use of genetic mutation, which sets it apart. As far as this data set is concerned, it is also interesting to observe that ESHS, which requires zero parameter setting and uses Gaussian mutation in place of the random generation of harmonies, behaves somewhat similarly to SHS.
4.1.4. IR Data Set (Ireland)
Table 6 provides the optimized epidemiological parameters and the corresponding fitness values (averages from ten independent runs) for the IR data set. Figure 5 depicts the visualization of the 220-day simulation, together with the 20-day projected simulation using the optimized parameters. The visual inspection of the simulations is informative, although the lines representing most of the algorithms are quite close to each other, with only SGHS and PAHS overlapping. The simulations drifted away from the observed cumulative cases even before the fiftieth day of the outbreak. However, the simulations of SGHS, PAHS, and NSHS managed to stay close to the observed values, even during the projection period. On the other hand, the simulations of IHS, IBGHS, and SHS were still close to the observed values and became closer in the projection period. ITHS remained steadily far from the observed values, while ESHS, NGHS, and GDHS maintained a constant separation from the observed values. The projected simulation for the subsequent 20 days indicated a smaller deviation of the simulations from the observed values, except for ITHS. It is observed from the fitness values (SSE) in Table 6 that NSHS appears to be the best performing estimator for the IR data set, while ITHS emerged as the least performing estimator.
For the IR data set, it is observed that the simulations are reasonably close to the observed cumulative cases, except for ITHS. This shows that most of the simulations managed to mimic the observed trend well. Excluding ITHS, we can group the algorithms with similar simulations as (i) ESHS, NGHS, and GDHS; (ii) IHS, IBGHS, and SHS; and (iii) SGHS, PAHS, and NSHS. Note that the simulation of ITHS was better in the previous three data sets, but it appeared to be the worst in this data set. The combinations of algorithms that produced similar simulations are also different from the combinations in the previous data sets, notably the FR data set.
4.1.5. SG Data Set (Singapore)
Table 7 provides the optimized epidemiological parameters and the corresponding fitness values (averages from ten independent runs) for the SG data set. Figure 6 displays the visualization of the 220-day simulation, together with the 20-day projected simulation using the optimized parameters. The simulations are distinguishable, and thus the visualization is informative, except for a slight overlap between ITHS and IBGHS. The simulation band begins to diverge from the observed cumulative cases approximately around the hundredth day of the outbreak, except for the simulations of ESHS, PAHS, and IHS, which diverged gradually. While the other simulations deviated from the rest, ESHS, PAHS, and IHS were close to each other until day 200 of the outbreak. Only after that did the simulations diverge from each other by a small amount up to the projection period. All of the simulations were far from the observed values, resembling the case of the simulations for the SK data set. The projected simulation for the subsequent 20 days indicated an even larger deviation of the simulations from the observed values. It is observed from the fitness values (SSE) presented in Table 7 that IHS appears to be the best performing estimator for the SG data set, while ITHS is the least performing estimator, similar to the IR data set.
For the SG data set, it is observed that the simulations are quite far off the actual cumulative cases, with ITHS being the farthest and IHS being the nearest. Nevertheless, the IHS simulation is still far off the observed cumulative cases. The simulations were not able to resemble the observed trend well, and they are undoubtedly not as good as for the USA, FR, and IR data sets. The quality of the simulations is similar to the SK data set. The simulations of ITHS and IBGHS are really far off the observed values. We note that the simulations of ITHS and IBGHS are similar, NSHS and SGHS are close to each other, GDHS and NGHS are approximately close, ESHS and PAHS are close to each other, while SHS and IHS were neither similar nor close to the rest.
4.2. Performance Comparison
Following the SIR simulation experiments performed on the five data sets, it is noted that the performance of each algorithm (based on the fitness values (SSE)) varies across the data sets. Based on the parameter estimates for each data set presented in Table 3, Table 4, Table 5, Table 6 and Table 7, the estimates produced by each algorithm are fairly similar, which indicates that the optimization performed is consistent due to the underlying nature of HS, regardless of the HS variant. Table 8 lists the best performing algorithm for each data set. Apparently, there is no single clear winner for this particular application of HS.
We obtain the RMSE values from the 20-day projected SIR simulation using Equation (24) in order to statistically compare the performance of the algorithms within each of the data sets. The Friedman test is conducted to determine whether there are any statistically significant differences among the estimation performances (in terms of RMSE) of the algorithms. Table 9 displays the RMSE values.
The low RMSE values within each data set indicate that the predictive capability of the SIR model using the HS-optimized parameters is fairly good. Eventually, this attests that the parameter estimation accuracy of the HS variants is satisfactory. The Friedman test elicited no statistically significant difference in the estimation performance of the HS algorithms at a significance level of 0.05 (χ²(9) = 11.749, p = 0.228). It is noteworthy that the estimation performance of SHS is comparable with the rest of the HS variants and fairly consistent across the data sets. The simulations of SHS are also reasonably close to the observed cumulative cases in the FR, SK, IR, SG, and USA data sets (in that order of closeness). The post-hoc analysis for SHS is performed using Wilcoxon signed-rank tests in order to identify whether there is any statistically significant difference between the estimation performance of SHS and each of the other HS variants individually. Table 10 displays the test results.
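The statistical comparison itself can be reproduced with standard library routines, as in the following sketch; `rmse` is an assumed dictionary mapping each algorithm name to its five per-data-set RMSE values from Table 9.

```python
from scipy.stats import friedmanchisquare, wilcoxon

stat, p = friedmanchisquare(*rmse.values())        # overall test across the ten algorithms
print(f"Friedman chi-square = {stat:.3f}, p = {p:.3f}")

alpha = 0.05 / 9                                   # Bonferroni correction over nine pairs
for name in (k for k in rmse if k != "SHS"):
    w, p_pair = wilcoxon(rmse["SHS"], rmse[name])  # post-hoc pairwise test against SHS
    print(f"SHS vs {name}: p = {p_pair:.3f}, significant = {p_pair < alpha}")
```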
The Wilcoxon signed-rank tests conducted with a Bonferroni correction, applied at the resulting significance level of 0.05/9 = 0.006, elicited no statistically significant difference in the estimation performance of SHS when compared to the rest of the HS variants individually. Indeed, the insignificant difference supports that the performance of SHS is comparable. Therefore, although SHS may not be the best performing algorithm for any of the data sets, its consistency and the statistical tests' results elucidate that SHS is competent enough to be a potentially efficacious estimator for the epidemiological parameters of the SIR model. A slight manual fine-tuning of the control parameters may suffice to increase the estimation accuracy of SHS. In essence, the primary advantages of applying HS to estimate the epidemiological parameters of compartmental models are as follows:
- No initial values for the epidemiological parameters are required. One does not need to adapt the values of epidemiological parameters from previous studies, so as to alleviate any bias in the estimation process.
- No specified upper and lower bounds for the epidemiological parameters are required to suit the data sets. The burdensome process of finding an appropriate range of the parameters for each data set is avoided by using the complete range of the parameters, regardless of the data set. This may increase the computational time, but it can be traded off with better computing resources.
- No in-depth information about the infectious disease is necessary. HS optimization can well be applied to other infectious disease modeling without extra specific information about the disease.
5. Conclusions and Future Work
The application of HS is a novel approach in the field of epidemiology, particularly in epidemiological modeling. In this study, HS was implemented to estimate the epidemiological parameters of the prototypical compartmental SIR model as an optimization problem. Ten variants of the HS algorithm were applied to five data sets to simulate the trajectory of COVID-19 cumulative infectious cases. The computational experiments demonstrated the ability of HS to be successfully applied to epidemiological modeling and to act as an efficacious estimator of the model parameters. As such, HS is proposed as a potential alternative estimation tool for the epidemiological parameters. An interesting insight from this study is that SHS is competent enough and exhibited comparable performance with the rest of the HS variants in this particular application of HS optimization. For future work, the application of HS can be expanded to parameter estimation in more advanced compartmental epidemiological models (e.g., the SEIR model) and to the modeling of other existing infectious diseases (e.g., H1N1) or potential novel infectious diseases in the future.
Data Set (Country) | Calibration Period (220 Days) | Projection Period (20 Days) | I0 | R0 | N *
---|---|---|---|---|---
USA | 21 January 2020–27 August 2020 | 28 August 2020–16 September 2020 | 1 | 0 | 331,002,651 |
FR | 25 January 2020–31 August 2020 | 1 September 2020–20 September 2020 | 3 | 0 | 65,273,511 |
SK | 20 January 2020–26 August 2020 | 27 August 2020–15 September 2020 | 1 | 0 | 51,269,185 |
IR | 29 February 2020–5 October 2020 | 6 October 2020–25 October 2020 | 1 | 0 | 4,937,786 |
SG | 24 January 2020–30 August 2020 | 31 August 2020–19 September 2020 | 3 | 0 | 5,850,342 |
* estimated at mid year according to UN data.
Algorithm | HMCR | PAR | BW | HMCRmin | HMCRmax | PARmin | PARmax | BWmin | BWmax | Pm | Pc | LP | HMCRm | PARm
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
SHS | 0.95 | 0.3 | 0.01 | - | - | - | - | - | - | - | - | - | - | - |
IHS | 0.95 | - | - | - | - | 0.99 | 0.01 | 0.001 | 1/(20x(UB-LB)) | - | - | - | - | - |
NGHS | - | - | - | - | - | - | - | - | - | 0.005 | - | - | - | - |
SGHS | - | - | - | - | - | - | - | 0.001 | 1/(20x(UB-LB)) | - | - | 100 | 0.98 | 0.9 |
ITHS | 0.95 | - | - | - | - | 0.99 | 0.01 | - | - | - | - | - | - | - |
NSHS | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
GDHS | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
PAHS | - | - | - | 0.99 | 0.7 | 0.99 | 0.01 | 0.001 | 1/(20x(UB-LB)) | - | - | - | - | - |
ESHS | - | - | - | - | - | - | - | - | - | - | - | - | - | - |
IBGHS | - | 0.3 | 0.01 | - | - | - | - | - | - | 0.005 | 0.9 | - | - | - |
Algorithm | x1(β) | x2(γ) | f(x) (SSE) |
---|---|---|---|
SHS | 0.4208 | 0.0931 | 1098.33 |
IHS | 0.4198 | 0.0938 | 1072.83 |
NGHS | 0.4212 | 0.0924 | 1174.67 |
SGHS | 0.4202 | 0.0934 | 1076.62 |
ITHS | 0.4203 | 0.0933 | 1082.46 |
NSHS | 0.4202 | 0.0934 | 1077.16 |
GDHS | 0.4202 | 0.0934 | 1077.14 |
PAHS | 0.4198 | 0.0938 | 1073.06 |
ESHS | 0.4202 | 0.0934 | 1077.22 |
IBGHS | 0.4210 | 0.0921 | 1132.08 |
Algorithm | x1(β) | x2(γ) | f(x) (SSE) |
---|---|---|---|
SHS | 0.2208 | 0.1227 | 1002.11 |
IHS | 0.2221 | 0.1223 | 1211.77 |
NGHS | 0.2224 | 0.1220 | 1427.31 |
SGHS | 0.2213 | 0.1225 | 1101.25 |
ITHS | 0.2214 | 0.1226 | 1102.71 |
NSHS | 0.2213 | 0.1226 | 1102.42 |
GDHS | 0.2208 | 0.1228 | 992.04 |
PAHS | 0.2218 | 0.1226 | 1125.00 |
ESHS | 0.2218 | 0.1226 | 1125.36 |
IBGHS | 0.2207 | 0.1228 | 987.31 |
Algorithm | x1(β) | x2(γ) | f(x) (SSE) |
---|---|---|---|
SHS | 0.3125 | 0.1012 | 2382.52 |
IHS | 0.3137 | 0.1011 | 2510.07 |
NGHS | 0.3142 | 0.1014 | 2531.25 |
SGHS | 0.3120 | 0.1013 | 2203.09 |
ITHS | 0.3140 | 0.1012 | 2172.45 |
NSHS | 0.3137 | 0.1011 | 2508.21 |
GDHS | 0.3130 | 0.1012 | 2500.66 |
PAHS | 0.3140 | 0.1014 | 2528.17 |
ESHS | 0.3125 | 0.1011 | 2381.76 |
IBGHS | 0.3135 | 0.1012 | 2504.14 |
Algorithm | x1(β) | x2(γ) | f(x) (SSE) |
---|---|---|---|
SHS | 0.3426 | 0.1044 | 970.02 |
IHS | 0.3427 | 0.1044 | 1078.34 |
NGHS | 0.3436 | 0.1042 | 1233.88 |
SGHS | 0.3425 | 0.1043 | 703.22 |
ITHS | 0.3441 | 0.1042 | 1827.38 |
NSHS | 0.3420 | 0.1043 | 662.22 |
GDHS | 0.3432 | 0.1043 | 1202.47 |
PAHS | 0.3425 | 0.1043 | 701.58 |
ESHS | 0.3436 | 0.1041 | 1247.19 |
IBGHS | 0.3425 | 0.1044 | 971.54 |
Algorithm | x1(β) | x2(γ) | f(x) (SSE) |
---|---|---|---|
SHS | 0.3604 | 0.1331 | 2015.77 |
IHS | 0.3582 | 0.1334 | 1002.64 |
NGHS | 0.3600 | 0.1332 | 1799.32 |
SGHS | 0.3607 | 0.1331 | 2238.65 |
ITHS | 0.3612 | 0.1328 | 2564.23 |
NSHS | 0.3608 | 0.1331 | 2241.29 |
GDHS | 0.3600 | 0.1333 | 1805.79 |
PAHS | 0.3585 | 0.1333 | 1542.52 |
ESHS | 0.3588 | 0.1332 | 1622.38 |
IBGHS | 0.3611 | 0.1328 | 2557.87 |
Data Set | Best Algorithm |
---|---|
USA | IHS |
FR | IBGHS |
SK | ITHS |
IR | NSHS |
SG | IHS |
Algorithm | USA | FR | SK | IR | SG |
---|---|---|---|---|---|
SHS | 25.39 | 12.86 | 29.73 | 16.66 | 28.44 |
IHS | 12.95 | 18.73 | 34.12 | 18.55 | 17.23 |
NGHS | 38.12 | 27.32 | 39.64 | 25.44 | 24.22 |
SGHS | 13.85 | 13.63 | 26.24 | 13.45 | 30.66 |
ITHS | 15.34 | 14.75 | 25.44 | 37.89 | 39.14 |
NSHS | 14.76 | 14.28 | 34.08 | 10.98 | 31.28 |
GDHS | 14.22 | 12.19 | 33.64 | 22.62 | 25.22 |
PAHS | 13.02 | 16.38 | 30.08 | 13.03 | 19.86 |
ESHS | 14.92 | 16.59 | 38.7 | 26.12 | 21.46 |
IBGHS | 33.69 | 11.35 | 33.78 | 16.94 | 38.37 |
Test | Z | p-Value |
---|---|---|
SHS–IHS | −0.405 | 0.686 |
SHS–NGHS | −1.753 | 0.080 |
SHS–SGHS | −1.214 | 0.225 |
SHS–ITHS | −0.674 | 0.500 |
SHS–NSHS | −0.405 | 0.686 |
SHS–GDHS | −0.135 | 0.893 |
SHS–PAHS | −1.214 | 0.225 |
SHS–ESHS | −0.135 | 0.893 |
SHS–IBGHS | −1.483 | 0.138 |
Author Contributions
Conceptualization, K.G. and L.S.L.; methodology, K.G. and L.S.L.; software, K.G.; validation, K.G., L.S.L. and H.-V.S.; formal analysis, K.G., L.S.L. and H.-V.S.; investigation, K.G. and L.S.L.; writing-original draft preparation, K.G. and L.S.L.; writing-review and editing, K.G., L.S.L. and H.-V.S.; supervision, L.S.L. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Publicly available datasets were analyzed in this study. This data can be found here: https://www.worldometers.info/coronavirus.
Conflicts of Interest
The authors declare no conflict of interest.
Abstract
Epidemiological models play a vital role in understanding the spread and severity of a pandemic of infectious disease, such as the COVID-19 global pandemic. The mathematical modeling of infectious diseases in the form of compartmental models are often employed in studying the probable outbreak growth. Such models heavily rely on a good estimation of the epidemiological parameters for simulating the outbreak trajectory. In this paper, the parameter estimation is formulated as an optimization problem and a metaheuristic algorithm is applied, namely Harmony Search (HS), in order to obtain the optimized epidemiological parameters. The application of HS in epidemiological modeling is demonstrated by implementing ten variants of HS algorithm on five COVID-19 data sets that were calibrated with the prototypical Susceptible-Infectious-Removed (SIR) compartmental model. Computational experiments indicated the ability of HS to be successfully applied to epidemiological modeling and as an efficacious estimator for the model parameters. In essence, HS is proposed as a potential alternative estimation tool for parameters of interest in compartmental epidemiological models.