Abstract. The occurrence of earthquakes has been studied from many perspectives. Earthquakes strike without apparent warning and can devastate entire cities in just a few seconds, causing numerous casualties and huge economic losses. Great effort has been directed towards predicting these natural disasters and taking precautionary measures. However, simultaneously predicting when, where and with what magnitude the next earthquake will occur, within a limited region and time span, seems an almost impossible task. Techniques from the field of data mining are providing new and important information to researchers. This article reviews the use of artificial neural networks for earthquake prediction, in response to the increasing number of recently published works claiming to be effective. Based on an analysis and discussion of recent results, data mining practitioners are encouraged to apply their own techniques in this emerging field of research.
Keywords: data mining, prediction, earthquake, time series
Received: September 11, 2016; accepted: December 15, 2016; available online: December 30, 2016
DOI: 10.17535/crorr.2016.0011
1. Introduction
Earthquakes can severely damage or destroy a whole region in seconds. Due to their devastating effects, earthquakes are a serious threat to modern society (e.g. the 2011 earthquake off the Pacific coast of Tohoku, with a magnitude of Mw 9.0, or the 2015 Chile earthquake that occurred 46 km offshore from Illapel, with a magnitude of Mw 8.3). For more than 100 years, scientists have searched for successful earthquake prediction methods or for reliable precursors, with no obvious success [1].
Great effort was invested in the Parkfield prediction experiment [2]. The results obtained made the scientific community wonder whether earthquakes could be predicted at all, a question that is still a matter of discussion among experts. The Parkfield experiment marked a turning point in earthquake prediction, and earthquake forecasting is today expressed in terms of probabilities and errors [3]. Other researchers still refer to natural pre-earthquake phenomena such as gravity variations, radon emanation, anomalous electric fields and changes in meteorological parameters such as temperature and relative humidity [4].
The lack of consensus among researchers on building a time-dependent earthquake forecasting model has led the Regional Earthquake Likelihood Models (RELM) working group to generate 18 different models [5]. Moreover, there are other groups such as the Collaboratory for the Study of Earthquake Predictability (CSEP) [6] and that of [7] in New Zealand.
Although considerable research is devoted to the science of short-term earthquake forecasting, standardization of operational procedures is in a nascent stage of development. The problem is challenging because large earthquakes cannot be reliably predicted for specific regions over time scales that span less than decades. This means that short-term forecasts of these events never project high probabilities, and their incremental benefits for civil protection, i.e., relative to long-term seismic hazard analysis, have not been convincingly demonstrated.
Earthquake prediction is a very technical field with a vast literature and active research taking place all over the world. Therefore, this survey reports on the methodologies for operational earthquake forecasting that are either currently deployed or perhaps feasible for civil protection in the next several years, paying special attention to those based on artificial neural networks.
The rest of the paper is structured as follows. Section 2 provides an introductory description on earthquake generation and occurrence. Next, in Section 3, fundamentals on artificial neural networks are introduced. In particular, feedforward, radial basis function and recurrent neural networks are reviewed, since they are the most used models in this research field. The most relevant works recently published relating to earthquake prediction based on artificial neural networks can be found in Section 4. Finally, conclusions derived from this study are summarized in Section 5.
2. Earthquake occurrence
Earthquakes are mainly due to active faults, but they can also be generated by other causes such as volcanic activity, friction along plate boundaries, man-made nuclear explosions and other factors. Damaging earthquakes are generally assumed to occur at depths shallower than 50 km.
Tectonic stress within the Earth's plates breaks rocks around a fault, generating an area of weakness. The stress slowly accumulates and may exceed the strength of the rock mass, causing a sudden rupture along a small fault patch. This is followed by a complex dynamic process: the movement starts in the nucleation zone and spreads across the fault surface(s), generating an earthquake. Consequently, part of the accumulated energy is released in the form of seismic waves.
Depending on the magnitude and origin, earthquakes can cause displacements of the Earth's crust, landslides, liquefaction, tsunamis or even volcanic activity. There are different scales for measuring the released energy. The moment magnitude, Mw, is the most reliable scale and is based on the seismic moment [9] and seismic energy [10]. However, the measure most frequently used in the mass media is the Richter scale.
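As an illustration of the moment magnitude scale mentioned above, the following minimal Python sketch converts a scalar seismic moment into Mw using one common form of the Hanks-Kanamori relation; the moment value used in the example is only an approximate, illustrative figure, not a catalogue entry.

```python
import math

def moment_magnitude(m0_newton_metres):
    """Moment magnitude Mw from the scalar seismic moment M0 (in N*m),
    using one common form of the Hanks-Kanamori relation:
    Mw = (2/3) * (log10(M0) - 9.1)."""
    return (2.0 / 3.0) * (math.log10(m0_newton_metres) - 9.1)

# Illustrative value: a seismic moment of about 4e22 N*m corresponds to
# roughly Mw 9.0, the order of the 2011 Tohoku earthquake.
print(round(moment_magnitude(4e22), 1))
```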
Due to its destructive potential, humankind has long been searching for an earthquake prediction method. Predicting an earthquake implies stating the exact time, magnitude and location of a coming event. Great effort has been made by the scientific community but, due to the intrinsically random nature of the phenomenon, no valid method has yet been found. It is a known fact that large earthquakes occur at faults where long-term observations have been carried out. Some large earthquakes create a spatial pattern, and certain forecasts relating to magnitude and location are possible. Nevertheless, earthquake generation is not a cyclical process, owing to incomplete stress release, variation of the rupture area and earthquake-mediated interactions with other faults. This means that the time between events can be extremely irregular. Consequently, predicting the time, or a relatively narrow time interval, of an oncoming large earthquake is still the subject of research.
3. Fundamentals of artificial neural networks
Artificial neural networks (ANNs) are computational models inspired by the operation of biological neural networks. They are universal approximators and are thus used to estimate functions whose shape is a priori unknown. They mimic the manner in which neurons transmit electric signals, separating input (capturing signals from the senses), processing (combining the inputs) and output (producing reactions to the inputs).
3.1. Feedforward neural networks
The feedforward neural network was the first type of ANN created [11, 12]. It is based on a simple design in which the connections between the units do not form cycles and the information moves in only one direction, forward, from the input nodes through the hidden nodes to the output nodes.
The development of ANNs started with the conception of the single-layer perceptron network. It raised huge interest due to its ability to recognize simple patterns. It is formed by several input neurons and one output neuron, which decides to which of two classes the inputs belong. The output neuron performs a weighted sum of the inputs, subtracts a quantity called the threshold and feeds the result to a step transfer function. The result is 1 if the input pattern belongs to one class and -1 if it belongs to the other.
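A minimal sketch of the single-layer perceptron decision rule just described; the weights and threshold are hand-picked, hypothetical values chosen purely for illustration.

```python
import numpy as np

def perceptron_output(x, weights, threshold):
    """Single-layer perceptron as described above: a weighted sum of the
    inputs minus a threshold, fed to a step function returning +1 or -1."""
    activation = np.dot(weights, x) - threshold
    return 1 if activation >= 0 else -1

# Hypothetical two-dimensional example with hand-picked weights and threshold.
w = np.array([0.7, -0.4])
print(perceptron_output(np.array([1.0, 0.5]), w, threshold=0.2))   # class +1
print(perceptron_output(np.array([0.1, 0.9]), w, threshold=0.2))   # class -1
```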
The first ANNs had only a single layer, owing to the difficulty of finding a reasonable method to update the weights of the hidden-neuron connections, since the error is hard to define in that case (contrary to the output neurons, for which the error is easy to compute). With the creation of the backward propagation of errors (backpropagation) algorithm, it finally became possible to train multi-layer ANNs, or FFNNs. As with the simple perceptron, learning by backpropagation in multi-layer perceptrons is performed by presenting inputs to the network. If the network computes an output vector that coincides with the target, nothing else is done. However, if there is an error (a difference between the output and the target), the weights are adjusted to reduce it. The algorithm distributes the contribution of each weight to the generated output, trying to reduce the error to a minimum.
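The following sketch illustrates backpropagation training of a small multi-layer feedforward network on the classic XOR problem, which a single-layer perceptron cannot represent. It is a generic NumPy illustration of the algorithm described above, not the configuration used in any of the surveyed papers; the learning rate, number of hidden units and number of epochs are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data set: XOR, with targets in {-1, +1} as in the text.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[-1], [1], [1], [-1]], dtype=float)

# One hidden layer of tanh units and a linear output unit.
W1 = rng.normal(scale=0.5, size=(2, 4))
b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1))
b2 = np.zeros(1)
lr = 0.3

for epoch in range(5000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)                 # hidden activations
    y = h @ W2 + b2                          # network outputs
    err = y - t                              # error at the output layer

    # Backward pass: propagate the error to every weight.
    grad_W2 = h.T @ err / len(X)
    grad_b2 = err.mean(axis=0)
    delta_h = (err @ W2.T) * (1.0 - h ** 2)  # tanh derivative
    grad_W1 = X.T @ delta_h / len(X)
    grad_b1 = delta_h.mean(axis=0)

    # Gradient-descent step that reduces the squared error.
    W2 -= lr * grad_W2
    b2 -= lr * grad_b2
    W1 -= lr * grad_W1
    b1 -= lr * grad_b1

# The outputs should now be close to the targets t.
print(np.round(np.tanh(X @ W1 + b1) @ W2 + b2, 2))
```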
3.2. Radial basis function networks
Radial basis function networks (RBFNs) [13] perform classification by measuring the similarity of the input to examples extracted from the training set. A prototype is simply one of the training examples; each RBFN neuron stores one prototype, compares the input vector to it, and outputs a value between 0 and 1 that measures their similarity. If the input is equal to the prototype, the output of that RBF neuron is 1. As the distance between input and prototype grows, the response falls off exponentially towards 0. When classifying a new input, each neuron computes the distance (Euclidean or other) between the input and its stored prototype. Then, if the input more closely resembles the class A prototypes stored in the network than the class B prototypes, it is classified as class A. The prototype vector is also often called the neuron's centre, since it is the value at the centre of the bell curve.
RBFNs do not suffer from local minima in the same way as FFNNs, because the only parameters adjusted during learning are those of the linear mapping from the hidden layer to the output layer. This linearity ensures that the error surface is quadratic and therefore has a single, easily found minimum. However, RBFNs have the disadvantage of requiring good coverage of the input space by the prototypes, whose centres are determined from the distribution of the input data but without reference to the prediction task. As a result, resources may be wasted on areas of the input space that are actually irrelevant to the learning task.
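A compact sketch of the RBFN idea described above: Gaussian neurons centred on prototypes taken from the training set, with only the linear hidden-to-output weights fitted by least squares. The toy data set, the prototype selection rule and the kernel width are illustrative assumptions, not choices made in any of the reviewed works.

```python
import numpy as np

def rbf_activations(X, prototypes, width):
    """Gaussian response of each RBF neuron: 1 when the input equals the
    stored prototype, decaying towards 0 as the Euclidean distance grows."""
    d2 = ((X[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

# Hypothetical training set; prototypes are simply a subset of the examples.
rng = np.random.default_rng(1)
X_train = rng.uniform(-1, 1, size=(50, 2))
y_train = np.sign(X_train[:, 0] * X_train[:, 1])   # toy two-class labels
prototypes = X_train[::5]                          # 10 stored prototypes
Phi = rbf_activations(X_train, prototypes, width=0.5)

# Only the linear hidden-to-output mapping is learned, so ordinary least
# squares finds the single minimum of the quadratic error surface.
w, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)

y_pred = np.sign(Phi @ w)
print("training accuracy:", (y_pred == y_train).mean())
```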
3.3. Recurrent neural networks
FFNNs have some limitations inherent to their design that can be overcome by changing the architecture. Recurrent neural networks (RNNs) [14, 15] represent an improvement over FFNNs, as they allow cycles in the connections between neurons and a bidirectional flow of information, which provides a memory of recent events.
The basic architecture of an RNN is the fully recurrent network: a network of neurons, each with a directed connection to every other unit and a time-varying output. As in other ANNs, each connection has a modifiable weight, which is updated in proportion to its derivative with respect to the error using gradient descent. Although there are other alternatives, the standard method is backpropagation through time, a generalization of backpropagation for feedforward networks.
Apart from the fully recurrent network, there are many other types of RNN, differing in topology and training algorithm. One of the oldest is the Hopfield network, which has symmetric connections and uses Hebbian learning. The Elman and Jordan networks are also notable variants. Elman networks (also called simple recurrent networks) add recurrent connections with delays through a context layer, which allows them to learn any dynamic input-output relationship arbitrarily well, given enough neurons in the hidden layers. Jordan networks are similar, but their context units are fed from the output layer rather than from the hidden layer.
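The following sketch shows one forward step of an Elman network, assuming illustrative layer sizes and randomly initialized weights; training (e.g. by backpropagation through time) is omitted.

```python
import numpy as np

def elman_step(x, context, W_in, W_context, W_out, b_hidden, b_out):
    """One time step of an Elman (simple recurrent) network: the context
    layer feeds the previous hidden state back in alongside the new input."""
    hidden = np.tanh(W_in @ x + W_context @ context + b_hidden)
    output = W_out @ hidden + b_out
    return output, hidden        # the new hidden state becomes the context

# Hypothetical sizes: 3 inputs, 5 hidden/context units, 1 output.
rng = np.random.default_rng(2)
W_in = rng.normal(scale=0.3, size=(5, 3))
W_context = rng.normal(scale=0.3, size=(5, 5))
W_out = rng.normal(scale=0.3, size=(1, 5))
b_hidden, b_out = np.zeros(5), np.zeros(1)

context = np.zeros(5)
for x_t in rng.normal(size=(4, 3)):          # a short input sequence
    y_t, context = elman_step(x_t, context, W_in, W_context, W_out,
                              b_hidden, b_out)
    print(y_t)
```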
More modern variants of the RNN lie at the basis of deep learning, for example the Long Short-Term Memory (LSTM) network, which helps preserve the error as it is propagated through time and layers. By maintaining a more constant error, LSTMs allow the RNN to keep learning over many time steps (over 1,000), thereby opening a channel for linking causes and effects that are far apart in time.
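As a sketch of why the LSTM mitigates vanishing errors, the cell below implements the standard gating equations in NumPy. The stacked-gate parameterization is a common convention rather than the only one, and the weights here are random placeholders rather than trained values.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One step of a standard LSTM cell. W, U and b stack the parameters of
    the input (i), forget (f) and output (o) gates and the candidate (g).
    The additive update of the cell state c is what keeps the error roughly
    constant as it is propagated back through many time steps."""
    z = W @ x + U @ h_prev + b            # shape (4 * n_hidden,)
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    g = np.tanh(g)
    c = f * c_prev + i * g                # cell state: gated, additive memory
    h = o * np.tanh(c)                    # hidden state exposed to the output
    return h, c

# Hypothetical sizes and random placeholder weights, just to run the cell.
n_in, n_hidden = 3, 4
rng = np.random.default_rng(3)
W = rng.normal(scale=0.2, size=(4 * n_hidden, n_in))
U = rng.normal(scale=0.2, size=(4 * n_hidden, n_hidden))
b = np.zeros(4 * n_hidden)
h, c = np.zeros(n_hidden), np.zeros(n_hidden)
for x_t in rng.normal(size=(5, n_in)):
    h, c = lstm_step(x_t, h, c, W, U, b)
```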
4. Earthquake prediction by means of ANN
Neural networks are nowadays widely used in many different fields for pattern recognition and classification problems [16]. Nevertheless, few studies have used neural networks for earthquake prediction.
Alves [17] was one of the first authors to propose neural networks for earthquake forecasting. The method was successfully applied to the seismicity of the Azores, although a wide time-location window was used.
Panakkat and Adeli [18] proposed eight seismicity indicators for predicting the largest earthquake in the next month using neural networks. Later, the same authors presented in [19] a method for predicting the magnitude and location of moderate to large earthquakes based on the eight seismicity indicators defined in their previous work.
Two different approaches were used in that work, both based on the RNN. Madahizadeh and Allamehzadeh [20] studied the concentration and trend of the aftershocks of the 2008 Sichuan earthquake, using a Kohonen artificial neural network with aftershocks as input.
An RBF neural network was applied, again, to southwest China in 2009 [21]. This time, however, the authors compared their approach with a backpropagation feedforward neural network. Although they used seven different inputs, no feature selection was applied, making it difficult to determine which inputs were useful and which were not.
A comparison between a non-linear forecasting technique and an ANN for regions of Northeast India was performed in [22]. Both methods obtained similar results, although slightly better for the ANN. However, the estimated correlation coefficient was quite low, which suggested, according to the authors, that the earthquake dynamics of the region are chaotic.
In 2011, Moustra et al. [23] evaluated the accuracy of artificial neural networks for earthquake prediction in Greece using time series of magnitude data or seismic electric signals. The average reported accuracy was 80.55% for all earthquakes, but only 58.02% for what they considered major events (magnitude larger than 5). After repeating the analysis with different inputs, they also concluded that the training of the ANN is a key factor that may greatly influence the quality of the results.
Shah and Ghazali [24] proposed a new approach to predicting earthquake magnitude in Northern California (USA). A population-based algorithm, the Improved Artificial Bee Colony algorithm, was proposed to alleviate the computational issues reported for the training of the multilayer perceptron. The results were compared with those of a standard backpropagation neural network.
The performance of ANNs in the Northern Red Sea area was assessed in [25]. In particular, the areas of the Sinai Peninsula, the Gulf of Aqaba and the Gulf of Suez were explored. The proposed model was based on a feedforward neural network with multiple hidden layers. The selected inputs were location, magnitude, source depth and time stamp. The results were compared with four different predictors (normally distributed, uniformly distributed, simple moving average and curve fitting), showing that the proposed approach provided higher forecast accuracy than the other evaluated algorithms.
Later, in 2013, Reyes et al. [26] presented a new method for earthquake prediction based on the ANN. The input for the ANN was based on the b-value [27], Båth's law [28] and the Omori-Utsu law [29]. The b-value variations were used as input for the ANN, similarly to [30]. Two kinds of predictions were provided: a) the probability that an earthquake larger than a threshold magnitude occurs, and b) the probability that an earthquake within a given magnitude interval occurs. Four regions of Chile were analysed: Talca, Pichilemu, Santiago and Valparaíso. The average results were 0.49, 0.78, 0.65 and 0.74, respectively.
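The b-value enters these models through the Gutenberg-Richter law, log10 N = a - b M. A minimal sketch of the standard maximum-likelihood (Aki) estimate is given below; the windowing and increment computation actually used in [26, 30] may differ, and the synthetic catalogue is only for illustration.

```python
import numpy as np

def b_value(magnitudes, m_min):
    """Maximum-likelihood (Aki) estimate of the Gutenberg-Richter b-value,
    using all events with magnitude >= m_min. Binned catalogues usually add
    half the magnitude bin width to the denominator as a correction."""
    m = np.asarray(magnitudes, dtype=float)
    m = m[m >= m_min]
    return np.log10(np.e) / (m.mean() - m_min)

# Synthetic catalogue obeying the Gutenberg-Richter law with b = 1:
# magnitudes above m_min are exponentially distributed with rate b * ln(10).
rng = np.random.default_rng(4)
m_min = 3.0
catalogue = m_min + rng.exponential(scale=1.0 / np.log(10), size=300)
print(round(b_value(catalogue, m_min), 2))   # close to the simulated b = 1
```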
Subsequently, similar research was conducted in [31], studying the same two seismogenic zones analyzed in [30]. The authors compared the ANN with other well-known classifiers: the M5P algorithm [32], the support vector machine (SVM) [33] and Naive Bayes (NB) [34]. The statistical tests showed that the best results were obtained using the ANN, with an average result of 0.58 for the Alboran Sea and 0.71 for the Western Azores-Gibraltar Fault.
Next, Martínez-Álvarez et al. [35] studied the use of different seismicity indicators as inputs for the ANN. To do so, the inputs proposed in [18, 26, 31] were compared for the zones considered. To improve the prediction, feature selection was applied. Finally, a new set of inputs, selected from across the works analyzed, was proposed. The ANN with the newly proposed inputs increased the results from 0.50-0.57 to 0.69 for the Alboran Sea and from 0.59-0.71 to 0.81 for the Western Azores-Gibraltar Fault.
An application of a supervised RBF neural network and an ANFIS model to earthquake occurrence in Iran can be found in [36]. The authors analyzed spatio-temporal variations in eight well-known seismicity parameters for the 2008 Qeshm earthquake. The reported results showed the existence of spatial and temporal preconditions for the occurrence of forthcoming main shocks, at least in the case studied.
Amar et al. [37] applied artificial neural networks to predict the earthquake magnitude class in 2001. They divided the data into four classes, based on different earthquake magnitude values. After retrieving data from the USGS catalog, they used an RBF neural network to analyze data from different locations in Alaska, USA. The results were compared with those obtained by a backpropagation neural network.
Different areas in southwest China were analyzed in [38]. The authors considered seven different inputs and evaluated seventeen different groups of samples, using the Levenberg-Marquardt [39] backpropagation algorithm to train a feedforward artificial neural network. The results were compared with those of the plain backpropagation algorithm. The number of neurons in the hidden layer was determined empirically.
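For reference, one Levenberg-Marquardt update solves (J^T J + lambda I) d = J^T r for the weight change, where J is the Jacobian of the residuals r and lambda the damping factor. The sketch below applies this update to a toy linear least-squares problem; it is a generic illustration, not the network training setup of [38].

```python
import numpy as np

def levenberg_marquardt_step(weights, jacobian, residuals, damping):
    """One Levenberg-Marquardt update: solve (J^T J + damping * I) d = J^T r
    and move the weights by -d. Large damping behaves like gradient descent,
    small damping like the Gauss-Newton method."""
    J = np.asarray(jacobian)
    r = np.asarray(residuals)
    A = J.T @ J + damping * np.eye(J.shape[1])
    d = np.linalg.solve(A, J.T @ r)
    return weights - d

# Hypothetical toy problem: fit y = w0 + w1 * x by iterating LM steps.
x = np.linspace(0.0, 1.0, 20)
y = 2.0 + 3.0 * x
w = np.zeros(2)
for _ in range(20):
    residuals = (w[0] + w[1] * x) - y
    J = np.column_stack([np.ones_like(x), x])   # d(residual) / d(w)
    w = levenberg_marquardt_step(w, J, residuals, damping=1e-3)
print(np.round(w, 3))   # approaches [2, 3]
```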
The work introduced in [40] also evaluated the accuracy of the ANN for earthquake prediction. The authors proposed an ANN-based method named EarthQuake Predictor (EQP-ANN) and studied the city of Tokyo. Earthquakes larger than magnitude 5.0 were analyzed over a seven-day time window. Statistical tests showed significant differences between the EQP-ANN and the other machine learning algorithms examined, with average results between 0.72 and 0.80.
The North Tabriz Fault (NW Iran) has also been analyzed using the ANN [41]. In particular, the authors proposed a feedforward ANN trained using a genetic algorithm. A high-quality catalog was used, built by merging data from the International Institute of Earthquake Engineering and Seismology of Iran and the Iranian Seismological Center. Although the reported results were satisfactory, no feature selection was applied and no comparative analysis was included.
Asencio-Cortés et al. [42] systematically studied the sensitivity of the seismicity indicators used as input for an ANN. Five different analyses were conducted, concerning the shape of the training and test sets, the calculation of the b-value and the adjustment of the most frequently used indicators. The four seismic regions of Chile used by Reyes et al. [26] were analyzed. It is important to note that, in this work, the values of the seismicity indicators were precisely determined for the first time, closing the gap between the work of seismologists and data mining experts.
An improved particle swarm optimization (IPSO) algorithm was proposed in [43] and successfully combined with a backpropagation-based neural network to predict earthquake magnitude. The network was composed of three layers and used six seismicity indicators as input. A coastal area of China was the target zone for assessing its performance.
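The canonical particle swarm update that such improved variants build on is sketched below; the inertia and acceleration coefficients are common textbook defaults, and the specific modifications of the IPSO in [43] are not reproduced here.

```python
import numpy as np

def pso_step(pos, vel, personal_best, global_best, rng,
             inertia=0.7, c1=1.5, c2=1.5):
    """One iteration of the canonical particle swarm update: each particle
    is pulled both towards its own best known position and towards the best
    position found by the whole swarm."""
    r1 = rng.random(pos.shape)
    r2 = rng.random(pos.shape)
    vel = (inertia * vel
           + c1 * r1 * (personal_best - pos)
           + c2 * r2 * (global_best - pos))
    return pos + vel, vel

# Toy usage: 10 particles exploring a 2-D space (e.g. two network weights).
rng = np.random.default_rng(5)
pos = rng.uniform(-1, 1, size=(10, 2))
vel = np.zeros_like(pos)
personal_best = pos.copy()
global_best = pos[0].copy()
pos, vel = pso_step(pos, vel, personal_best, global_best, rng)
```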
The Hindukush area has also been analyzed by means of neural networks. In particular, the authors of [44] used the seismicity indicators proposed in [18] as inputs for the RNN and for the Pattern Recognition Neural Network (PRNN) they designed. The reported results showed promise for this area and were compared with other algorithms based on ensemble learning.
All reviewed works are summarized in Table 1.
The analysis of all these works leads to several conclusions. First, the majority of papers applying neural networks omit the feature selection step. This is a critical issue, since it is well known that variables that are highly uncorrelated with the target may exert a negative influence on model generation. Second, authors focus mainly on the magnitude, without defining a clear time interval of occurrence. The predicted magnitudes are not particularly large and, therefore, no distinction between moderate and large earthquakes is made in the prediction. The spatial analysis is simply avoided in most cases, and authors use zones that are perhaps too wide to claim precise location prediction. Finally, neither statistical validation nor comparison with intrinsically different methods is usually reported.
5. Conclusions
The application of data mining techniques to earthquake prediction has achieved particularly satisfactory results in recent years. A wide variety of methods can be found in the literature: SVM, M5P, Naive Bayes, KNN, J48, Random Forest and LPBoost ensembles. However, the quality of the outputs generated by artificial neural networks stands out. This survey has reported successful applications of such methods in many active seismic areas, e.g. Chile, Japan, India, China, Pakistan, the USA, the Iberian Peninsula, Greece and Portugal. Most of them, however, focus on predicting the magnitude for a given prediction horizon and a limited area. Future research should therefore address the challenging task of simultaneously predicting when, where and with what magnitude the next earthquake will occur. Another weakness of most of the papers reviewed here concerns the magnitude that is predicted, which is generally not particularly large; the larger the earthquake, the more difficult it is to predict. Hence, there is an impetus to explore recently developed imbalanced classification models in order to improve prediction quality.
Acknowledgement
The authors would like to thank the Spanish Ministry of Economy and Competitiveness and the Junta de Andalucía for the support under projects TIN2014-55894-C2-R and P12-TIC-1728, respectively. This work has also been partially funded by the Ministerio de Economía y Competitividad, Gobierno de España, through a Ramón y Cajal grant awarded to Dr. Aznarte (reference: RYC-2012-11984).
References
[1] Adeli, H. and Panakkat, A. (2009). A probabilistic neural network for earthquake magnitude prediction. Neural Networks, 22, 1018-1024.
[2] Alarifi, A. S. N., Alarifi, N. S. N. and Al-Humidan, S. (2012). Earthquakes magnitude predication using artificial neural network in northern Red Sea area. Journal of King Saud University - Science, 24, 301-313.
[3] Alves, E. I. (2006). Earthquake forecasting using neural networks: results and future work. Nonlinear Dynamics, 44(1-4), 341-349.
[4] Amar, E., Khattab, T. and Zad, F. (2014). Intelligent earthquake prediction system based on neural network. International Journal of Environmental, Chemical, Ecological, Geological and Geophysical Engineering, 8(12), 874-878.
[5] Anderson, J. A. (1995). An Introduction To Neural Networks. MIT Press.
[6] Asencio-Cortés, G., Martínez-Álvarez, F., Morales-Esteban, A. and Reyes, J. (2016). A sensitivity study of seismicity indicators in supervised learning to improve earthquake prediction. Knowledge-Based Systems, 101, 15-30.
[7] Asencio-Cortés, G., Martínez-Álvarez, F., Morales-Esteban, A. and Troncoso, A. (2015). Medium-large earthquake magnitude prediction in Tokyo with artificial neural networks. Neural Computing and Applications, doi: 10.1007/s00521-015-2121-7 [Accessed 21/11/2015].
[8] Asim, K. M., Martínez-Álvarez, F., Basit, A. and Iqbal, T. (2015). Earthquake magnitude prediction in Hindukush region using machine learning techniques. Natural Hazards, doi: 10.1007/s11069-016-2579-3 [Accessed 21/11/2015].
[9] Bakun, W. H., Aagaard, B., Dost, B., Ellsworth, W. L., Hardebeck, J. L., Harris, R. A., Ji, C., Johnston, M. J. S., Langbein, J., Lienkaemper, J. J., Michael, A. J., Murray, J. R., Nadeau, R. M., Reasenberg, P. A., Reichte, M. S., Roeloffs, E. A., Shakal, A., Simpson, R. W. and Waldhauser, F. (2005). Implications for prediction and hazard assessment from the 2004 Parkfield earthquake. Nature, 437, 969-974.
[10] Båth, M. (1965). Lateral inhomogeneities in the upper mantle. Tectonophysics, 2, 483-514.
[11] Bengio, Y., Simard, P. and Frasconi, P. (1994). Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2), 157-166.
[12] Bishop, C. M. (1995). Neural Networks for Pattern Recognition. Oxford University Press.
[13] Cortes, C. and Vapnik, V. (1995). Support-vector networks. Machine Learning, 20(3), 273-297.
[14] Field, E. H. (2007). Overview of the working group for the development of Regional Earthquake Likelihood Models (RELM). Seismological Research Letters, 78(1), 7-16.
[15] Geller, R. J. (1997). Earthquake prediction: a critical review. Geophysical Journal International, 131(3), 425-450.
[16] Gerstenberger, M. C. and Rhoades, D. A. (2010). New Zealand earthquake forecast testing centre. Pure and Applied Geophysics, 167, 877-892.
[17] Gutenberg, B. and Richter, C. F. (1944). Frequency of earthquakes in California. Bulletin of the Seismological Society of America, 34, 185-188.
[18] Hand, D. J. and Yu, K. (2001). Idiot's Bayes - not so stupid after all? International Statistical Review, 69(3), 385-399.
[19] Hanks, T. C. and Kanamori, H. (1979). A moment magnitude scale. Journal of Geophysical Research, 84, 2348-2350.
[20] Hochreiter, S. and Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9, 1735-1780.
[21] Ide, S. and Beroza, G. C. (2001). Does apparent stress vary with earthquake size? Geophysical Research Letters, 28(17), 3349-3352.
[22] Li, C. and Liu, X. (2016). An improved PSO-BP neural network and its application to earthquake prediction. In Proceedings of the Chinese Control and Decision Conference, 3434-3438.
[23] Madahizadeh, R. and Allamehzadeh, M. (2009). Prediction of aftershocks distribution using artificial neural networks and its application on the May 12, 2008 Sichuan Earthquake. Journal of Seismology and Earthquake Engineering, 3(11), 111-120.
[24] Martínez-Álvarez, F., Reyes, J., Morales-Esteban, A. and Rubio-Escudero, C. (2013). Determining the best set of seismicity indicators to predict earthquakes. Two case studies: Chile and the Iberian Peninsula. Knowledge-Based Systems, 50, 198-210.
[25] Morales-Esteban, A., Martínez-Álvarez, F. and Reyes, J. (2013). Earthquake prediction in seismogenic areas of the Iberian Peninsula based on computational intelligence. Tectonophysics, 593, 121-134.
[26] Morales-Esteban, A., Martínez-Álvarez, F., Troncoso, A., de Justo, J. L. and Rubio-Escudero, C. (2010). Pattern recognition to forecast seismic time series. Expert Systems with Applications, 37(12), 8333-8342.
[27] Moustra, M., Avraamides, M. and Christodoulou, C. (2011). Artificial neural networks for earthquake prediction using time series magnitude data or seismic electric signals. Expert Systems with Applications, 38(12), 15032-15039.
[28] Mubarak, M. A., Riaz, M. S., Awais, M., Jilani, Z., Ahmad, N., Irfan, M., Javed, F., Alam, A. and Sultan, M. (2009) Earthquake prediction: a global review and local research. Proceedings of the Pakistan Academy of Sciences, 46(4), 233-246.
[29] Omori, F. (1902). Macroseismic measurements in Tokyo, II and III. Earthquake Investigation Communications, 11, 1-95.
[30] Panakkat, A. and Adeli H. (2007). Neural network models for earthquake magnitude prediction using multiple seismicity indicators. International Journal of Neural Systems, 17(1), 13-33.
[31] Panakkat, A. and Adeli, H. (2009). Recurrent neural network for approximate earthquake time and location prediction using multiple seismicity indicators. Computer-Aided Civil and Infrastructure Engineering, 24, 280-292.
[32] Pondrelli, S., Salimbeni, S., Morelli, A., Ekström, G., Olivieri, M. and Boschi, E. (2009). Seismic moment tensors of the April 2009, L'Aquila (Central Italy), earthquake sequence. Geophysical Journal International, 180(1), 238-242.
[33] Reyes, J., Morales-Esteban, A. and Martínez-Álvarez, F. (2013). Neural networks to predict earthquakes in Chile. Applied Soft Computing, 13(2), 1314-1328.
[34] Shah, F. M., Hasan, M. K., Hoque, M. M. and Ahmmed, S. (2010). Architecture and weight optimization of ann using sensitive analysis and adaptive particle swarm optimization. International Journal of Computer Science and Network Security, 10(8), 103-111.
[35] Srilakshmi, S. and Tiwari, R. K. (2009). Model dissection from earthquake time series: a comparative analysis using nonlinear forecasting and artificial neural network approach. Computers and Geosciences, 35, 191-204.
[36] Strumillo, P. and Kaminski, W. (2003). Radial basis function neural networks: theory and applications. Advances in Soft Computing, 19, 107-119.
[37] Tiampo, K. F. and Shcherbakov, R. (2012). Seismicity-based earthquake forecasting techniques: Ten years of progress. Tectonophysics, 522-523, 89-121.
[38] Wang, Y., Chen, Y. and Zhang, J. (2009). The application of RBF neural network in earthquake prediction. In Franz Rothlauf (ed.). Proceedings of the International Conference on Genetic and Evolutionary Computing, pp. 465-468. Montreal, QC, Canada, July 8-12, 2009. New York: ACM.
[39] Wang, Q., Jackson, D. D. and Kagan, Y. Y. (2009). California earthquakes, 1800-2007: A unified catalog with moment magnitudes, uncertainties, and focal mechanisms. Seismological Research Letters, 80(3), 446-457.
[40] Wang, Y. and Witten, I. H. (1997). Induction of model trees for predicting continuous classes. In Maarten van Someren, Gerhard Widmer (eds.). Proceedings of the European Conference on Machine Learning, pp. 128-137. Prague, Czech Republic, April 23-25, 1997. Springer-Verlag London.
[41] Zakeri, N. S. S. and Pashazadeh, S. (2015). Application of neural network based on genetic algorithm in predicting magnitude of earthquake in North Tabriz Fault (NW Iran). Current Research, 109(9), 1715-1722.
[42] Zamani, A., Sorbi, M. R. and Safavi, A. A. (2013). Application of neural network and ANFIS model for earthquake occurrence in Iran. Earth Science Informatics, 6(2), 71-85.
[43] Zechar, J. D. and Zhuang, J. (2010). Risk and return: evaluating reverse tracing of precursors earthquake predictions. Geophysical Journal International, 182(3), 1319-1326.
[44] Zhang, H. Y. and Geng, Z. (2014). Novel interpretation for Levenberg-Marquardt algorithm. Computer Engineering and Applications, 49(19), 5-6.
[45] Zhou, F. and Zhu, X. (2014). Earthquake prediction based on LM-BP neural network. Lecture Notes in Electrical Engineering, 270, 13-20.
Emilio Florido1, José L. Aznarte2, Antonio Morales-Esteban3 and Francisco Martínez-Álvarez1,*
1 Division of Computer Science, Pablo de Olavide University, ES-41013 Seville, Spain
E-mail: {[email protected], [email protected]}
2 Department of Artificial Intelligence, Universidad Nacional de Educación a
Distancia-UNED, Spain
E-mail: ([email protected])
3 Department of Building Structures and Geotechnical Engineering, University of Seville,
Spain
E-mail: {[email protected]}
* Corresponding author.
Copyright Croatian Operational Research Society (CRORS) 2016