The beginnings of trunking theory go back to the beginning of the twentieth century, when the Danish mathematician Agner Krarup Erlang established its fundamentals by studying how many users of a local telephone exchange could be served by a limited number of operators establishing the connections.1 The unit of traffic intensity, the Erlang, was named after him. Erlang theory covers trunking systems (the Erlang B formula) and queuing systems (the Erlang C formula). In a trunked system that offers no queuing for call requests, a user who requests a service is given immediate access to a channel if one is available; if all channels are already in use, the requesting user is blocked, that is, denied access to the system, and is free to send a new connection request later. In contrast, in queuing systems a queue may be used to hold the requesting users until a channel becomes available. The Erlang B formula determines the probability of blocking for a given offered traffic in a system with a given number of channels, while the Erlang C formula determines the probability that a call request will be put on hold for the considered value of the offered traffic.2 Erlang's formulas, originally intended for performance calculation in public telephone networks, have over the years been applied in systems where the resources are limited, such as mobile communication systems, VoIP (Voice over Internet Protocol) systems, computer networks, call centers, and electric vehicle power systems.3–12
During the planning and design of trunking and queuing systems and the evaluation of their performance, in addition to determining the probability of blocking and the probability of call waiting, which are the basis for estimating the quality of service, it is also necessary to solve the inverse problem, that is, to determine the traffic intensity or the number of required channels (resources) for a given probability corresponding to the desired GOS (Grade of Service) value. Given the nature of the Erlang formulas, no closed-form inverse of these formulas exists. For this reason, research has been carried out to find solutions (numerical models or approximations) for determining the inverse values of the Erlang formulas,13–17 or look-up tables are used, especially in applications related to call centers. Moreover, even the calculation of the Erlang functions for larger amounts of traffic and/or larger numbers of channels becomes computationally demanding, and several less computationally demanding approximations of the Erlang formulas have been derived.14,18–20
The Erlang look-up tables cover only tabulated values of the input variables, whereas the approximations may not be accurate enough. Artificial neural networks (ANNs) can provide a fast and accurate solution for calculating the Erlang formulas and their inverses. Thanks to their ability to learn dependencies between two sets of data, ANNs have found numerous applications in various fields, such as electronics and communications.21–32 They are suitable for model development in cases where there is no mathematical expression describing the considered input–output relationship, or for replacing computationally demanding models while retaining their accuracy. As ANNs are characterized by a quick response, the resulting neural models are fast and efficient. In this article, the use of ANNs to develop a neural model of the Erlang B formula and neural models of its inverses is proposed.
The article is structured as follows. In Section 2, a description of the Erlang B formula is given and calculation issues are discussed. Then, in Section 3, the proposed models based on artificial neural networks are described. Section 4 contains the obtained numerical results and the corresponding discussion. Section 5 presents the most important conclusions.
ERLANG B FORMULA

The Erlang B formula determines the probability that a call will be blocked in a call blocking system, and this probability determines the GOS of the system. The Erlang B formula has the following form:

P = (a^S / S!) / (Σ_{k=0}^{S} a^k / k!)

where S represents the number of channels in the system and a the traffic intensity. It assumes that calls arrive according to a Poisson distribution. Furthermore, it is assumed that there is an infinite number of users, as well as the following: (a) all users, including blocked users, may request a call at any time; (b) the channel holding time is exponentially distributed (meaning longer calls are less likely to occur); and (c) there is a finite number of channels available in the trunking pool. This is known as an M/M/m/m queue and leads to the derivation of the Erlang B formula (also known as the blocked-calls-cleared formula). Although Engset's formula, which is more complicated than the Erlang B formula, can be applied to a system with a finite number of users, this additional complexity does not significantly affect the accuracy of the results when the number of users is at least one order of magnitude greater than the number of channels in the system. Also, the results for a finite number of users give a lower blocking probability than the Erlang B formula.2
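Under these assumptions, the blocking probability can be evaluated directly from the formula. A minimal sketch (the function name is ours, chosen for illustration):

```python
import math

def erlang_b_direct(a, S):
    """Blocking probability from the Erlang B formula:
    P = (a^S / S!) / sum_{k=0}^{S} a^k / k!
    Direct term-by-term evaluation; overflows for large a and S."""
    numerator = a ** S / math.factorial(S)
    denominator = sum(a ** k / math.factorial(k) for k in range(S + 1))
    return numerator / denominator

# a = 1 Erl offered to S = 2 channels: P = (1/2) / (1 + 1 + 1/2) = 0.2
print(erlang_b_direct(1.0, 2))  # -> 0.2
```

For S = 1 the formula reduces to a/(1 + a), which reproduces the first row of Table 1 (e.g. a = 0.1111 Erl gives P ≈ 10%).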
Figure 1 shows the probability of blocking versus traffic intensity, calculated by the Erlang B formula for a system with up to 100 channels and traffic intensity up to 100 Erl. The values obtained by the Erlang B formula are often tabulated (giving the traffic intensity a that corresponds to the tabulated values of the number of channels S and the probability of blocking P, expressed in %); an example of a part of the Erlang B table given in Reference 33 is shown in Table 1.
TABLE 1 A part of the Erlang B table given in Reference 33.
S/P(%) | 0.01 | 0.02 | 0.03 | 0.05 | 0.1 | 0.5 | 1.0 | 2 | 5 | 10 |
1 | 0.0001 | 0.0002 | 0.0003 | 0.0005 | 0.0010 | 0.0050 | 0.0101 | 0.0204 | 0.0526 | 0.1111 |
2 | 0.0142 | 0.0202 | 0.0248 | 0.0321 | 0.0458 | 0.1054 | 0.1526 | 0.2235 | 0.3813 | 0.5954 |
3 | 0.0868 | 0.110 | 0.127 | 0.1517 | 0.1938 | 0.3490 | 0.4555 | 0.6022 | 0.8994 | 1.271 |
4 | 0.2347 | 0.282 | 0.315 | 0.3624 | 0.4393 | 0.7012 | 0.8694 | 1.092 | 1.525 | 2.045 |
5 | 0.4520 | 0.527 | 0.577 | 0.6486 | 0.7621 | 1.132 | 1.361 | 1.657 | 2.219 | 2.881 |
6 | 0.7282 | 0.832 | 0.900 | 0.9957 | 1.146 | 1.622 | 1.909 | 2.276 | 2.960 | 3.758 |
7 | 1.054 | 1.19 | 1.27 | 1.392 | 1.579 | 2.158 | 2.501 | 2.935 | 3.738 | 4.666 |
8 | 1.422 | 1.58 | 1.69 | 1.830 | 2.051 | 2.730 | 3.128 | 3.627 | 4.543 | 5.597 |
9 | 1.826 | 2.01 | 2.13 | 2.302 | 2.558 | 3.333 | 3.783 | 4.345 | 5.370 | 6.546 |
10 | 2.260 | 2.47 | 2.61 | 2.803 | 3.092 | 3.961 | 4.461 | 5.084 | 6.216 | 7.511 |
Calculation of the blocking probability by straightforward application of the Erlang B formula may face issues for larger numbers of channels and larger amounts of traffic.18 Namely, the calculations in frequently used program packages (such as the MATLAB software environment34) are performed in double-precision arithmetic conforming to the IEEE 754 standard.35 The largest finite floating-point number in IEEE double precision is (2 − 2^−52) × 2^1023, which is approximately 1.7977 × 10^308. For numbers exceeding this value, software packages yield non-intuitive results. As commented in Reference 18, if the traffic intensity is a = 100 Erl, then a^S overflows to +∞ for S greater than 154. When the Erlang B formula is calculated in MATLAB and the largest finite floating-point number is exceeded, the resulting probability becomes NaN (not a number). For illustration, in Figure 2A blue dots represent the combinations of the traffic intensity and the number of channels for which the blocking probability calculated in MATLAB has a finite value (traffic intensity up to 500 Erl, number of channels up to 250). In the rest of the considered S–a space it is not possible to apply the Erlang B formula straightforwardly. Figure 2B shows the blocking probability calculated over the considered input space.
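The overflow in the direct formula can be avoided without extended precision by using the well-known Erlang B recurrence B(a, k) = a·B(a, k−1)/(k + a·B(a, k−1)) with B(a, 0) = 1, in which all intermediate values stay in [0, 1]. A minimal sketch (the function name is ours):

```python
def erlang_b_stable(a, S):
    """Overflow-free Erlang B evaluation via the standard recurrence
    B(a, k) = a*B(a, k-1) / (k + a*B(a, k-1)),  B(a, 0) = 1.
    Every intermediate b lies in [0, 1], so no term can overflow,
    even where a^S / S! would exceed the double-precision range."""
    b = 1.0
    for k in range(1, S + 1):
        b = a * b / (k + a * b)
    return b

# a = 100 Erl, S = 250: the direct formula overflows, the recurrence does not.
print(erlang_b_stable(100.0, 250))
```

For small inputs the recurrence agrees with the direct formula, for example B(1, 2) = 0.2.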
In this work, models based on artificial neural networks are proposed for calculating the Erlang B blocking probability (Figure 3), as well as for finding the inverse values of the Erlang B formula (Figure 4). All models are based on multilayer perceptron (MLP) networks. As in all three cases there are two input variables and one output variable, each ANN has two neurons in the input layer (IL), corresponding to the input variables of the model, and one neuron in the output layer (OL), corresponding to the model output variable. ANNs with two hidden layers (HL1 and HL2) are used, as two-hidden-layer ANNs have better generalization abilities than one-hidden-layer ANNs. The number of neurons in the hidden layers is determined by a trial-and-error approach during the training of the ANNs. The input layer neuron activation function is the unitary function, that is, this layer plays a buffer role. The hidden layer neurons have a sigmoid activation function (log-sigmoid or hyperbolic tangent) and the output layer neurons have a linear activation function. By analyzing the data behavior and through experiments, it has been concluded that better results are achieved if the blocking probability is used in its logarithmic representation, that is, log P instead of P, while the other two variables, S and a, are used in their original linear representations.
The Erlang B function neural model (Figure 3) relates the blocking probability P to the number of channels S and the traffic intensity a. The inverse Erlang B neural models are developed to estimate the traffic intensity for a given blocking probability in a system with S channels (Figure 4A), or to determine the number of channels S needed to ensure a blocking probability equal to or less than the given value P for the considered amount of traffic intensity a (Figure 4B).
If the ANN model is described by the general expression y = f(x), where x is the vector of input variables and y is the vector of output variables, then the vectors x and y for the particular models are as given in Table 2.
TABLE 2 Vectors of the ANN input and output variables.
Model | Vector of the ANN input variables | Vector of the ANN output variable |
Erlang B model—blocking probability prediction | x = [S, a] | y = [log P] |
Inverse Erlang B model—traffic intensity prediction | x = [S, log P] | y = [a] |
Inverse Erlang B model—number of channels prediction | x = [a, log P] | y = [S] |
In the case of the Erlang B neural model, the output of the ANN should be converted to the linear representation to obtain the final output of the model. In the case of the inverse Erlang B neural models, the blocking probability should be expressed in logarithmic representation before calculating the ANN responses.
The ANNs are trained using blocking probability values calculated for a number of combinations of traffic intensity and number of channels, obtained either by applying the Erlang B formula directly or taken from previously tabulated Erlang B tables. Once trained, the ANNs give correct responses not only for the values used for training but also for other values within the range of the considered input–output space. As the transfer function of an artificial neural network can be written as a set of mathematical expressions consisting of basic mathematical operations and the exponential function, the developed models can be implemented in any software environment and used to directly determine the inverses of the Erlang B formula, without any additional program structures, recursive formulas, or approximations.
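To illustrate the kind of closed-form expression involved, the following sketch evaluates a two-hidden-layer MLP of the described structure (log-sigmoid hidden neurons, linear output neuron) using only elementary operations and the exponential function. The weights below are arbitrary placeholders for a tiny 2-3-3-1 network, not the trained parameters of the reported models:

```python
import math

def logsig(x):
    # log-sigmoid activation used in the hidden layers
    return 1.0 / (1.0 + math.exp(-x))

def mlp_forward(x, W1, b1, W2, b2, w_out, b_out):
    """Closed-form response of a two-hidden-layer MLP:
    only weighted sums, the exponential function, and divisions."""
    h1 = [logsig(sum(w * xi for w, xi in zip(row, x)) + b) for row, b in zip(W1, b1)]
    h2 = [logsig(sum(w * hi for w, hi in zip(row, h1)) + b) for row, b in zip(W2, b2)]
    return sum(w * hi for w, hi in zip(w_out, h2)) + b_out  # linear output neuron

# Placeholder weights (the reported models use 15/15 or 12/10 hidden neurons).
W1 = [[0.5, -0.2], [0.1, 0.3], [-0.4, 0.2]]; b1 = [0.0, 0.1, -0.1]
W2 = [[0.3, 0.2, -0.1], [0.1, -0.2, 0.4], [0.2, 0.2, 0.2]]; b2 = [0.0, 0.0, 0.0]
w_out = [0.5, -0.3, 0.2]; b_out = 0.1

# Hypothetical use for the Erlang B model: inputs [S, a], output log P,
# converted back to a probability with 10**log_p.
log_p = mlp_forward([10.0, 4.461], W1, b1, W2, b2, w_out, b_out)
p = 10 ** log_p
```

With trained weights exported from the training environment, these few lines are all that is needed to port a model to another platform.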
Therefore, once the ANN models have been developed, to calculate the values of the Erlang B formula or its inverses one simply finds the response of the corresponding ANN in the environment where the ANNs were trained, or evaluates the sets of mathematical expressions describing the ANN, implemented in the desired software environment.
NUMERICAL RESULTS

The three proposed neural models have been developed for systems having up to 250 channels and for blocking probabilities up to 50%. Preliminary results for the Erlang B inverse models for systems with at most 100 channels have been reported in Reference 36. The training data for the proposed models have been taken from the Erlang table given in Reference 33. The probabilities to which the training data (Training set) refer are: 0.01%, 0.02%, 0.03%, 0.05%, 0.1%, 0.2%, 0.3%, 0.4%, 0.5%, 0.6%, 0.7%, 0.8%, 0.9%, 1%, 1.2%, 1.5%, 2%, 3%, 5%, 10%, 20%, 30%, 40%, and 50%. For each probability there are 175 combinations of the number of channels and the corresponding traffic intensity values, making 4550 samples in total in the training set. The test dataset (Test set 1) consists of 427 samples referring to probabilities not included in the training set: 0.75%, 3.5%, 7.5%, 12.5%, 17.5%, 25%, and 35%, with 46 combinations of the number of channels and the corresponding traffic intensity values. As the test probability values are not tabulated in the table given in Reference 33, the test data have been calculated using an online Erlang B calculator.37 The website does not document the method used to calculate the values of the Erlang B formula and its inverses; however, its accuracy has been verified by calculating the values for the inputs given in the Erlang B table in Reference 33, and the obtained values are exactly the same as those given in the table. For the sake of a clearer presentation of the results, a smaller test set (Test set 2) with 175 samples corresponding to the same probabilities as Test set 1, but with fewer values of the number of channels (25 different values of the number of channels per probability; S from 10 to 250 in steps of 10), has been used.
For each of the models, several ANNs with different numbers of neurons in the hidden layers have been trained, and their accuracy on the training and test data (Test set 1) has been compared. The Levenberg–Marquardt training algorithm has been used.22 For illustration, in the MATLAB software environment, training one ANN with two hidden layers of 15 neurons each takes about 3 minutes for the 530 epochs (training iterations) needed to achieve the desired training goal. For training, a computer with the following characteristics was used: Intel Core i7-7700 CPU @ 3.60 GHz, 32 GB of installed RAM. The mentioned time is given as an illustration only, as the number of epochs depends not only on the size of the ANN and the set training goal, but also on the initial values of the variable ANN parameters, which are set randomly.
The comparison of the training and generalization accuracy (the ability to predict the correct response for input data not used in training) has been performed on the basis of the following test criteria: average test error (ATE), worst-case error (WCE), and the Pearson product-moment correlation coefficient r.22 Among the trained ANNs, those listed in Table 3 have been chosen as the final models, giving the best compromise between training and generalization accuracy.
TABLE 3 ANN models.
Model | 1st hidden layer: number of neurons/neuron activation functions | 2nd hidden layer: number of neurons/neuron activation functions |
Erlang B model—blocking probability prediction | 15/log sigmoid | 15/log sigmoid |
Inverse Erlang B model—traffic intensity prediction | 15/log sigmoid | 15/log sigmoid |
Inverse Erlang B model—prediction of the number of channels | 12/log sigmoid | 10/log sigmoid |
The test statistics for the Erlang B blocking probability ANN model are reported in Table 4. It can be seen that the average test errors are less than 0.15%. The high value of the worst-case error is caused by deviations at two points for the case S = 1, a case rarely encountered in real use of the Erlang B formula. The very small prediction errors and the correlation coefficient values close to 1 indicate that the model has very good accuracy. As an additional illustration, Figure 5 compares the blocking probability values predicted by the neural model with the target values for Test set 2. The blocking probability is plotted versus the traffic intensity and the number of channels. A good match between the predicted and target values can be observed.
TABLE 4 Test statistics—blocking probability prediction.
Blocking probability prediction | |||
Training set | Test set 1 | Test set 2 | |
ATE (%) | 0.08 | 0.12 | 0.12 |
WCE (%) | 39.51 | 1.1 | 1.1 |
r | 0.999374 | 0.999987 | 0.999984 |
Table 5 contains the test statistics for the inverse Erlang B neural model aimed at predicting the traffic intensity for given values of the number of channels and the blocking probability. An ATE smaller than 0.1% and a WCE smaller than 0.35%, as well as very high values of the correlation coefficient for the training and test sets, show that the ANN learnt the training data very well and that very good generalization has been achieved. The traffic intensity for Test set 2 is shown in Figure 6.
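The target traffic intensities themselves can be produced numerically: since the blocking probability grows monotonically with the offered traffic, simple bisection over the stable Erlang B recurrence inverts the formula. A sketch (function names are ours):

```python
def erlang_b(a, S):
    # Stable Erlang B recurrence: B(a, k) = a*B(a, k-1) / (k + a*B(a, k-1))
    b = 1.0
    for k in range(1, S + 1):
        b = a * b / (k + a * b)
    return b

def traffic_for(S, p_target, lo=1e-9, hi=1e4, iters=100):
    """Inverse Erlang B: offered traffic a such that B(a, S) = p_target.
    B is monotonically increasing in a, so plain bisection converges."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if erlang_b(mid, S) < p_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Table 1, row S = 10, column P = 1.0% lists a = 4.461 Erl
print(round(traffic_for(10, 0.01), 3))  # -> 4.461
```

This is the kind of iterative procedure the neural model replaces with a single closed-form evaluation.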
TABLE 5 Test statistics—traffic intensity prediction.
Traffic intensity prediction | |||
Training set | Test set 1 | Test set 2 | |
ATE (%) | 0.07 | 0.10 | 0.10 |
WCE (%) | 0.35 | 0.33 | 0.33 |
r | 0.999995 | 0.999997 | 0.999998 |
The test statistics of the inverse Erlang B neural model aimed at predicting the number of channels, given in Table 6, show that good modeling accuracy has been achieved. However, the ANN has a linear output neuron activation function, which means that it returns a real number, whereas the number of channels is an integer. Therefore, to calculate the number of channels, the ANN output should be rounded up to the next integer, and additional statistics should be considered to estimate the prediction accuracy properly. The values of the number of channels obtained by rounding the ANN output have been compared with the target values. Figures 7 and 8 give the prediction error histograms for the training set and the two test sets, respectively. It can be seen that in the majority of cases the number of channels is predicted correctly (Training set: 98.7%, Test set 1: 70.7%, Test set 2: 73.7%). When there is an error in the prediction of the number of channels, it is mostly a difference of one channel (Training set: 23.9%, Test set 1: 26.3%, Test set 2: 29.3%), which can be considered a very good result. The deviations in the training set larger than one channel occur mainly in the part of the input space referring to a very small number of channels, generally fewer than 5, and to very small values of the call blocking probability, less than 0.1%.
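For comparison, the exact minimum number of channels can be obtained numerically by running the Erlang B recurrence until the blocking probability first drops to the target value (a sketch; the function name is ours):

```python
def channels_for(a, p_target, s_max=10000):
    """Smallest integer S with B(a, S) <= p_target. The blocking
    probability decreases as channels are added, so the loop stops
    at the first S that meets the target GOS."""
    b = 1.0  # B(a, 0) = 1
    for S in range(1, s_max + 1):
        b = a * b / (S + a * b)
        if b <= p_target:
            return S
    raise ValueError("target not reachable within s_max channels")

# 5 Erl at 2% GOS: Table 1 lists a = 5.084 Erl for S = 10 and
# a = 4.345 Erl for S = 9, so 10 channels are needed.
print(channels_for(5.0, 0.02))  # -> 10
```

The neural model produces this integer (after rounding up) from a single network evaluation instead of the channel-by-channel search.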
TABLE 6 Test statistics—prediction of the number of channels.
Prediction of the number of channels | |||
Training set | Test set 1 | Test set 2 | |
ATE (%) | 0.31 | 0.36 | 0.34 |
WCE (%) | 1.62 | 0.71 | 0.70 |
r | 0.999975 | 0.999989 | 0.999992 |
The validity of the developed neural models in terms of the model input space is determined by the ranges of the input variables used in the corresponding training sets. Therefore, the reported models are valid for a maximum of 250 channels and a maximum blocking probability of 50%. If a wider validity range is desired (a higher number of channels or blocking probabilities higher than 50%), new ANNs should be trained with the corresponding data. The plots shown in Figures 9 and 10 illustrate the input spaces of the developed models. The most interesting input space to analyze is that of the Erlang B formula neural model (Figure 9). The input space where the developed neural model is valid is determined by the borders of the training data, which means that the model is valid in areas A and B. As the red line represents the border determined by the possibility of straightforward Erlang B formula calculation, areas A, C, and D refer to the part of the input space where straightforward application is possible, whereas in areas E, B, and F it is not possible to calculate the Erlang B formula directly. In area B, that problem is overcome by the proposed neural model, which provides an alternative way of calculating the blocking probability directly. The input space of the inverse model for traffic intensity calculation is shown in Figure 10A. The model is valid for any combination of the number of channels and the blocking probability falling in area A, and it cannot be used for the combinations in area B. As far as the model for the prediction of the number of channels is concerned, it is valid for all combinations of the traffic intensity and the blocking probability, provided the traffic intensity is less than 500 Erl and the blocking probability is less than 50%.
Once again, the shown input spaces are determined by the training data ranges, and consequently a different choice of the training data ranges may result in different shapes of the considered input spaces.
The developed neural models for the calculation of the Erlang B formula (blocking probability determination) and its inverses (traffic intensity prediction and prediction of the number of channels) exhibit very good accuracy when the results obtained by the models are compared with the target values. This confirms the assumption that artificial neural networks can be successfully used for this purpose, thus providing an efficient solution that represents an alternative to computationally extensive calculations of the Erlang B formula, as well as a closed-form numerical solution for finding the inverses of the Erlang B formula. The models have good accuracy, they give a response practically instantly (for illustration, the response time for calculating the values for 4550 combinations of input values is less than 0.01 s), and their implementation in different software environments is based on the application of elementary mathematical operations and the exponential function, without any additional programming structures. These features, especially the last one, make the models particularly suitable for application in software environments with modest possibilities for implementing additional program procedures. The models are valid in the ranges of input variables determined by the value ranges of these variables used in the training set; that is, the choice of training data values directly determines the range of validity of the models.
Tables 7 and 8 summarize the advantages and limitations of different approaches to calculating the Erlang B formula and its inverses.
TABLE 7 Comparison of the approaches for the Erlang B formula calculation.
Advantages | Limitations | |
Erlang formula2 |
|
|
Numerical methods14 |
|
|
Algorithm18 |
|
|
Approximation19 |
|
|
Erlang look-up tables, e.g., Reference 32 |
|
|
Proposed method (MLP neural networks) |
|
|
TABLE 8 Comparison of the approaches for inverses of Erlang B formula calculation.
Advantages | Limitations | |
Inverse Erlang B formulas |
|
|
Approximation19 |
|
|
Erlang look-up tables, e.g., Reference 32 |
|
|
Proposed model (MLP neural networks) |
|
|
AUTHOR CONTRIBUTIONS

Zlatica Marinković: Conceptualization (equal); formal analysis (equal); investigation (lead); methodology (lead); validation (equal); writing – original draft (equal). Biljana P. Stošić: Conceptualization (equal); data curation (lead); formal analysis (equal); investigation (supporting); validation (equal); visualization (lead); writing – original draft (equal).
ACKNOWLEDGMENTS

The research presented in this article is supported by the Ministry of Science, Technological Development and Innovations of the Republic of Serbia.
CONFLICT OF INTEREST STATEMENT

The authors declare no conflicts of interest.
PEER REVIEW

The peer review history for this article is available at
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
© 2023. This work is published under http://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
Abstract
In this article, a novel approach to calculating the Erlang B formula and its inverses is proposed, based on the application of artificial neural networks (ANNs). Namely, ANNs are trained to calculate the Erlang B call blocking probability as well as to solve the inverse problem, that is, to calculate the required number of channels or the maximum amount of traffic for a given call blocking probability. Compared to direct calculation of the Erlang B formula, the computational efficiency is significantly increased while the accuracy is maintained. As far as the calculation of the Erlang B inverse values is concerned, since exact mathematical formulas are lacking, the proposed approach provides closed-form mathematical expressions.