1. Introduction
Typhoons are among the most destructive natural disasters affecting the coastal areas of China, with serious consequences for economic development, navigation safety, infrastructure and people's lives and property. Tropical cyclones and the heavy rains that accompany them are among the deadliest and most destructive disasters on Earth, especially in coastal areas [1,2]. According to the China Meteorological Administration (CMA), more than seven typhoons strike the southeast coast of China each year; among them, typhoons Nina and Lekima caused extensive economic losses and affected millions of people [3,4,5]. Accurately classifying and estimating typhoon intensity is therefore of great significance for protecting both people and property. Analysis of geostationary satellite imagery shows that typhoons at different intensity levels exhibit different cloud features, which can be used to help recognize typhoon intensity [6]. Existing research on typhoon intensity recognition relies on subjective empirical methods and numerical simulation methods. Subjective empirical methods are affected by the characteristics of the individual typhoon being analyzed. Numerical models are more efficient and systematic in their calculations and account for the influence of typhoon parameters on identification; however, they are limited in predicting some extreme events because of the required data collection and processing times [6,7]. There is therefore a critical need for a high-precision typhoon intensity recognition model.
Recently, with continuous breakthroughs in image processing and deep learning, many scholars have introduced recognition systems and deep learning models into different disciplines. For example, Duan et al. [8] compared four machine learning methods, i.e., random forest (RF), support vector machine (SVM), convolutional neural network (CNN) and residual neural network (ResNN), for classifying signals in a recorded seismic dataset. Liu et al. [9] developed a big data management system that collects, transmits, classifies, screens, manages and analyzes more than 80,000 sets of standard geo-material physico-mechanical data. Progress has also been made in identifying satellite cloud images using different cloud cluster features [10,11]. Zhou et al. [12] accurately identified the eye and cloud wall of typhoons and used a GC-LSTM model to recognize and predict typhoon intensity. Zhao et al. [13] proposed a real-time typhoon eye detection method based on deep learning with satellite cloud images, which provided important data for detecting real typhoon information. Wang et al. [14] designed several models with different inputs and parameters and found that their CNN models were robust when estimating tropical cyclone (TC) intensity from geostationary satellite images. Zhang et al. [15] proposed a novel tropical cyclone intensity classification and estimation model using infrared geostationary satellite images from the Northwest Pacific Ocean basin and a cascading deep convolutional neural network. Rüttgers et al. [16] used past satellite images in a generative adversarial network to predict one-step-ahead typhoon images. Nevertheless, poor feature extraction for tropical depressions, tropical storms and severe tropical storms, and the resulting low recognition accuracy, remain issues to be addressed [12,15,17].
In visual computing, the core challenges include the acquisition, processing, analysis and rendering of visual information, mainly images and video [18]. Shallow neural networks suffer from poor feature extraction and classification ability, while deep neural networks are difficult to develop and require long training times [19,20]. A shallow neural network can meet some current requirements but may not be effective in identifying satellite cloud images; given enough training time, a deep neural network may produce better results but wastes computational resources. The most straightforward way to improve network performance is to increase the network depth, but this increases the number of parameters and the difficulty of developing the network; in addition, the network becomes more prone to overfitting, and the demand for computing resources increases significantly. Recently, much research has been carried out to improve these algorithms. Simonyan et al. [21] found that by using 3 × 3 convolution kernels and increasing the network depth to 16–19 weight layers, they could achieve a significant improvement over existing network frameworks. He et al. [22] proposed a simpler and more accurate residual learning framework to address the vanishing gradient problem caused by deep networks. Tan et al. [23] proposed a mixed depthwise convolution (MixConv) model that naturally combines multiple kernel sizes in a single convolution and improves the accuracy and efficiency of existing MobileNets for both ImageNet classification and COCO object detection. Szegedy et al. [24] found that using smaller filters to extract local features, such as replacing a 5 × 5 convolution kernel with a stack of two 3 × 3 convolution kernels, could improve computational efficiency. However, few studies have determined how many small convolution kernels should replace large kernels to improve both computational efficiency and recognition accuracy.
Here, we focus on solving two problems: the poor feature extraction of satellite cloud images and the design of a novel model with improved algorithms. The model was trained and validated using 25 years of typhoon sample images. The results indicate that the model can extract relatively complex information from satellite cloud images and accurately identify and estimate typhoon intensity, especially for tropical depressions, tropical storms and severe tropical storms.
The remainder of this paper is organized as follows. In Section 2, we briefly describe the data sources and preprocessing and introduce the methods. In Section 3, we first choose the optimal convolution kernel by comparing the performance of various kernel sizes within a LeNet-5-style framework. Next, we build a new framework based on the advantages of the VGG16 model and develop a series of models on this framework. We then compare the performance of the models and present the typhoon intensity classification and estimation results. Finally, the conclusions are presented in Section 4.
2. Data and Methods
2.1. Data
In the experiments, satellite images of typhoons over the Northwest Pacific Ocean and the South China Sea from 1995–2020 were examined. The cloud images were provided by Kochi University.
Satellite cloud image processing: First, a 512 × 512 pixel window was selected for the input information based on the longitude and latitude of the typhoon, as shown in Figure 1, and the typhoon wind speed data from the CMA were used as the wind speed labels of the images. Next, a median filter was used to remove noise from the infrared images while effectively retaining edge information. Afterward, the satellite cloud images were enhanced to obtain more reliable experimental data. The single-channel images were expanded to three channels to facilitate the subsequent model verification. Then, the number of images was increased through data expansion methods such as cutting, random rotation and offset, and all images were normalized to speed up the convergence of network training. Finally, a total of 13,200 cloud images were randomly divided into a training set of 7920 images, a validation set of 2640 images and a test set of 2640 images, with no overlap between the test and validation sets. The detailed dataset processing is shown in Figure 2. The dataset was one-hot coded, and the real label of each sample was also a one-hot label. The typhoon intensity classes are based on the National Standard of Tropical Cyclone Classification (GB/T 19201-2006), and the one-hot labels are shown in Table 1. A sample satellite cloud image for each category is shown in Figure 3.
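A minimal sketch of this preprocessing pipeline, assuming OpenCV and Keras, is shown below; the file path, median filter aperture, enhancement choice and augmentation ranges are illustrative assumptions rather than the exact values used in the paper.

```python
import cv2
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

def preprocess_cloud_image(path, size=512):
    """Load, denoise, enhance and channel-expand one infrared cloud image."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)   # single-channel IR image
    img = cv2.resize(img, (size, size))            # 512 x 512 window around the typhoon center
    img = cv2.medianBlur(img, 3)                   # median filter: removes noise, keeps edges
    img = cv2.equalizeHist(img)                    # illustrative enhancement step
    img = np.stack([img] * 3, axis=-1)             # expand 1 channel -> 3 channels
    return img.astype("float32") / 255.0           # normalize to speed up convergence

# Data expansion: random rotation and offsets (the ranges are assumptions).
augmenter = ImageDataGenerator(rotation_range=30,
                               width_shift_range=0.1,
                               height_shift_range=0.1)
```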
The experiments were run on a computing node of the PARATERA HPC CLOUD accessed through MobaXterm. The hardware configuration was a 2-socket, 16-core E5-2680 v3 @ 2.5 GHz with 64 GB of memory, and the open-source Keras deep learning framework was used.
2.2. Methods
2.2.1. Construction of the Models
The detailed experimental steps are as follows. First, the network topology of the LeNet-5 model was used in a framework with five convolution layers, five pooling layers and three fully connected layers. In the LeNet-5 model, image matching is incorporated into the network topology, and weight sharing is used to reduce the number of training parameters, resulting in a simpler and more adaptable network structure. The model can obtain an effective representation of the original image, allowing visual rules to be identified directly from the original pixels with little preprocessing. If too large a convolution kernel is used, more information may be lost and typhoon intensity cannot be accurately identified. We therefore used 2 × 2, 3 × 3, 5 × 5 and 7 × 7 convolution kernels to determine the optimal kernel size within this framework; the resulting models are named Ty2-CNN, Ty3-CNN, Ty5-CNN and Ty7-CNN, and a sketch of the shared framework is given below. See Appendix A for a detailed introduction to the LeNet-5 model.
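A minimal Keras sketch of this shared framework follows. The filter counts are assumptions loosely based on the layer widths in Table 4, and the ReLU activations and valid padding are also assumptions, since the paper does not list them explicitly.

```python
from tensorflow.keras import layers, models

def build_ty_cnn(kernel_size=5, num_classes=6):
    """LeNet-5-style framework: five convolution/pooling blocks plus three
    fully connected layers; kernel_size = 2, 3, 5 or 7 gives Ty2-, Ty3-,
    Ty5- or Ty7-CNN, respectively."""
    model = models.Sequential()
    model.add(layers.Conv2D(6, kernel_size, activation="relu",
                            input_shape=(224, 224, 3)))
    for filters in (16, 24, 32, 64):
        model.add(layers.MaxPooling2D(2))
        model.add(layers.BatchNormalization())
        model.add(layers.Conv2D(filters, kernel_size, activation="relu"))
    model.add(layers.MaxPooling2D(2))
    model.add(layers.BatchNormalization())
    model.add(layers.Flatten())
    model.add(layers.Dense(1024, activation="relu"))
    model.add(layers.Dense(512, activation="relu"))
    model.add(layers.Dense(num_classes, activation="softmax"))
    return model
```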
Then, based on the advantages of the VGG16 model, we built a new framework combining two-layer convolution plus one-layer pooling blocks with one-layer convolution plus one-layer pooling blocks. See Appendix B for a detailed introduction to the VGG16 model. The VGG16 model uses multiple smaller (3 × 3) convolution kernels to replace a larger convolution kernel in a layer, which reduces the number of parameters, performs additional nonlinear mappings and increases the fitting and expression ability of the network. The receptive field obtained by stacking two 3 × 3 convolution kernels is equivalent to that of one 5 × 5 convolution kernel [21]. In line with the finding that using smaller filters to extract local features can improve computational efficiency, a series of hybrid convolution models, the typhoon intensity classification and estimation networks (TICAENets), was developed to determine the best combination and achieve better recognition performance.
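The kernel substitution at the heart of this framework can be expressed as a small helper that emits either form of a convolution block; this is a sketch, and the ReLU activations are again assumptions.

```python
from tensorflow.keras import layers

def conv_block(filters, double=False):
    """One convolution block: either a single 5 x 5 convolution or two
    stacked 3 x 3 convolutions covering the same 5 x 5 receptive field."""
    if double:
        return [layers.Conv2D(filters, 3, activation="relu"),
                layers.Conv2D(filters, 3, activation="relu")]
    return [layers.Conv2D(filters, 5, activation="relu")]

# For C input and C output channels (ignoring biases), one 5 x 5 kernel costs
# 25 * C * C weights, while two stacked 3 x 3 kernels cost 18 * C * C weights
# (about 28% fewer) and insert one extra non-linearity between them.
```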
2.2.2. Model Parameters
Stochastic gradient descent (SGD) optimizer:

$\theta_{t+1} = \theta_t + \eta d_t$ (1)

$d_t = -g_t$ (2)

where $\theta$ is the parameter to be updated, $d_t$ is the gradient update direction, $\theta_t$ is the position at step $t$, $\eta$ is the step length, $g_t$ is the stochastic gradient, and $\mathbb{E}[g_t] = \nabla f(\theta_t)$.

Cross-entropy loss function, which is also known as the log loss: the network output is first mapped to a probability by the sigmoid function,

$\hat{y} = \sigma(s) = \dfrac{1}{1 + e^{-s}}$ (3)

where $s$ is the output of the previous layer and $\hat{y}$ is the prediction output of the model. The predicted output $\hat{y}$ represents the probability that the current sample label is 1 and is defined as follows in Equation (4):

$P(y = 1 \mid x) = \hat{y}$ (4)

The probability of the current sample label being 0 is defined using Equation (5):

$P(y = 0 \mid x) = 1 - \hat{y}$ (5)

Equations (4) and (5) can be combined into the single expression of Equation (6):

$P(y \mid x) = \hat{y}^{\,y}(1 - \hat{y})^{1 - y}$ (6)

Taking the logarithm gives Equation (7):

$\log P(y \mid x) = y \log \hat{y} + (1 - y) \log(1 - \hat{y})$ (7)

Maximizing this log-likelihood is equivalent to minimizing its negative, Equation (8):

$L = -\log P(y \mid x)$ (8)

The loss function for a single sample is therefore defined as Equation (9):

$L = -\left[ y \log \hat{y} + (1 - y) \log(1 - \hat{y}) \right]$ (9)

If the losses of $N$ samples are calculated and superimposed, the cross-entropy loss function of $N$ samples can be defined by Equation (10):

$L = -\sum_{i=1}^{N} \left[ y_i \log \hat{y}_i + (1 - y_i) \log(1 - \hat{y}_i) \right]$ (10)
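As a cross-check of Equations (9) and (10), a short NumPy sketch of the loss, together with the corresponding Keras training configuration (the learning rate is an assumption), is:

```python
import numpy as np

def cross_entropy(y_true, y_hat, eps=1e-12):
    """Cross-entropy loss of Equation (10), summed over N samples."""
    y_hat = np.clip(y_hat, eps, 1.0 - eps)  # guard against log(0)
    return -np.sum(y_true * np.log(y_hat) + (1 - y_true) * np.log(1 - y_hat))

# Equivalent Keras setup with the SGD update of Equations (1)-(2):
#   model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
#                 loss="categorical_crossentropy", metrics=["accuracy"])
```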
2.2.3. Model Evaluation Index
Accuracy: the proportion of correctly judged results among all observations of the classification model, as defined in Equation (11):

$\mathrm{Accuracy} = \dfrac{TP + TN}{TP + TN + FP + FN}$ (11)

where $TP$ is the number of true positives (the real category of the sample is positive and the model recognition result is positive); $FN$ is the number of false negatives (the real category is positive and the recognition result is negative); $FP$ is the number of false positives (the real category is negative and the recognition result is positive); and $TN$ is the number of true negatives (the real category is negative and the recognition result is negative).

Precision: among all outcomes where the model prediction is positive, the proportion whose real category is positive, as defined in Equation (12):

$\mathrm{Precision} = \dfrac{TP}{TP + FP}$ (12)

Sensitivity/Recall: among all outcomes where the real category is positive, the proportion that the model predicts correctly, as shown in Equation (13):

$\mathrm{Recall} = \dfrac{TP}{TP + FN}$ (13)

F1 score: the F1 score integrates the precision and sensitivity results, as shown in Equation (14):

$F1 = \dfrac{2 \times \mathrm{Precision} \times \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}$ (14)
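A direct implementation of Equations (11)-(14), including the per-class counts that can be read off a confusion matrix such as Table 7, might look as follows (a sketch; the function names are ours):

```python
import numpy as np

def per_class_counts(cm, k):
    """TP/FN/FP/TN for class k from confusion matrix cm (rows = true labels)."""
    tp = cm[k, k]
    fn = cm[k].sum() - tp          # true k, predicted as something else
    fp = cm[:, k].sum() - tp       # predicted k, truly something else
    tn = cm.sum() - tp - fn - fp
    return tp, fn, fp, tn

def classification_metrics(tp, fn, fp, tn):
    """Accuracy, precision, recall and F1 score of Equations (11)-(14)."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1
```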
Mean absolute error (MAE): the wind speed of each image is first estimated by the maximum probability linear estimation method, Equation (15), and the MAE is then computed by Equation (16):

$\hat{V} = \dfrac{P_a V_a + P_b V_b}{P_a + P_b}$ (15)

$\mathrm{MAE} = \dfrac{1}{N} \sum_{i=1}^{N} \left| V_i - \hat{V}_i \right|$ (16)

Root mean square error (RMSE):

$\mathrm{RMSE} = \sqrt{\dfrac{1}{N} \sum_{i=1}^{N} \left( V_i - \hat{V}_i \right)^2}$ (17)

where $P_a$ and $P_b$ represent the probabilities, and $V_a$ and $V_b$ the corresponding wind speeds, of categories $a$ and $b$, respectively; $V_i$ is the wind speed corresponding to image $i$, $\hat{V}_i$ represents the typhoon intensity estimated by the model, and $N$ is the number of samples.

3. Results
3.1. Different Performances with Various Convolution Kernel Sizes
Based on the advantages of LeNet-5, we used the five-convolution-layer framework to develop a series of models with various convolution kernel sizes, named Ty2-CNN, Ty3-CNN, Ty5-CNN and Ty7-CNN. The feature extraction results of these models are shown in Figure 4. The Ty2-CNN model was inefficient in extracting typhoon features in Conv-3. The Ty5-CNN model retained complete cloud information, including the typhoon eye, cloud wall and dense cloud areas, in Conv-3, and the consistency between the subsampled image and the input image was higher than for the other kernels. Overall, the 5 × 5 convolution kernel extracted satellite cloud image information more effectively than the other kernels. This result is consistent with that of Zhou et al. [12], who found that the 5 × 5 convolution kernel had an obvious feature extraction effect for satellite cloud image characteristics.
The validation and test set accuracies of each model, together with those of the LeNet-5 model, are shown in Table 2. On the test set, the accuracy of LeNet-5 was 86.55%; with the 7 × 7 convolution kernel the accuracy was 91.63%, and with the 5 × 5 convolution kernel it reached 95.27%. The accuracy of every model was lower on the test set than on the validation set. The Ty5-CNN model had better recognition performance than the models with other kernels, which validates the above conclusion that the 5 × 5 convolution kernel extracts satellite cloud image information most effectively.
3.2. Establishment of a Set of Models and Comparative Analysis of the Results
Small convolution kernels were stacked to increase the receptive field, maximizing the mapping area between the obtained feature image and the previous feature image, reducing feature loss and retaining image information [21,24]. Based on the characteristics of the VGG16 model, two 3 × 3 convolution kernels were used in place of a 5 × 5 convolution kernel in the Ty5-CNN model. A series of hybrid convolution models, the typhoon intensity classification and estimation networks (TICAENets), named Ty1-TICAENet through Ty5-TICAENet, was developed by combining two-layer convolution plus one-layer pooling blocks with one-layer convolution plus one-layer pooling blocks in different ways. The convolution kernel parameters of these models are shown in Table 3. The Ty1-TICAENet model uses two 3 × 3 convolution kernels in Conv-5, and the Ty2-TICAENet model uses two 3 × 3 convolution kernels in both Conv-4 and Conv-5. The network depth of this framework is greater than that of the LeNet-5 model and less than that of the VGG16 model, which reduces the number of training parameters and the memory footprint. Only the detailed structure of the Ty2-TICAENet model is shown in this paper, in Table 4, and a code sketch of it follows below. The overall architecture of the proposed model is shown in Figure 5.
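The following Keras sketch reproduces the Ty2-TICAENet structure of Table 4 (the layer shapes and parameter counts match the table, e.g., 64 × 3 × 3 = 576 flattened features feeding Fc-1 with 590,848 parameters); the ReLU activations are assumptions, as the paper does not state them.

```python
from tensorflow.keras import layers, models

def build_ty2_ticaenet(num_classes=6):
    """Ty2-TICAENet per Table 4: 5 x 5 kernels in Conv-1..Conv-3, and two
    stacked 3 x 3 kernels replacing a 5 x 5 kernel in Conv-4/5 and Conv-6/7."""
    model = models.Sequential()
    model.add(layers.Conv2D(6, 5, activation="relu",
                            input_shape=(224, 224, 3)))    # Conv-1
    model.add(layers.MaxPooling2D(2))                      # Maxpool-1
    model.add(layers.BatchNormalization())                 # Bn-1
    model.add(layers.Conv2D(16, 5, activation="relu"))     # Conv-2
    model.add(layers.MaxPooling2D(2))                      # Maxpool-2
    model.add(layers.BatchNormalization())                 # Bn-2
    model.add(layers.Conv2D(24, 5, activation="relu"))     # Conv-3
    model.add(layers.MaxPooling2D(2))                      # Maxpool-3
    model.add(layers.BatchNormalization())                 # Bn-3
    model.add(layers.Conv2D(24, 3, activation="relu"))     # Conv-4 } two 3x3 kernels
    model.add(layers.Conv2D(32, 3, activation="relu"))     # Conv-5 } replace one 5x5
    model.add(layers.MaxPooling2D(2))                      # Maxpool-4
    model.add(layers.BatchNormalization())                 # Bn-4
    model.add(layers.Conv2D(32, 3, activation="relu"))     # Conv-6 } two 3x3 kernels
    model.add(layers.Conv2D(64, 3, activation="relu"))     # Conv-7 } replace one 5x5
    model.add(layers.MaxPooling2D(2))                      # Maxpool-5
    model.add(layers.BatchNormalization())                 # Bn-5
    model.add(layers.Flatten())
    model.add(layers.Dense(1024, activation="relu"))       # Fc-1
    model.add(layers.Dense(512, activation="relu"))        # Fc-2
    model.add(layers.Dense(num_classes, activation="softmax"))  # Fc-3 + softmax
    return model
```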
Table 5 compares the parameters of the proposed model with those of existing models, including LeNet-5, VGG16 and AlexNet. Params denotes the number of network parameters, and memory denotes the required memory; Flops (floating point operations) denotes the floating point operations performed by the network, and MemR + W denotes the sum of the sizes read from and written to memory while the network is running. The proposed model has relatively small Params, Flops and MemR + W values, which means that it requires less memory and runs faster than VGG16 and AlexNet. It also has smaller Params, Flops and MemR + W values than LeNet-5 with almost the same memory: its floating point computations are fewer, and the amount of data read from and written to memory is smaller.
The training and validation curves of the models are shown in Figure 6. At the beginning of training, the loss of each model decreased and the accuracy increased rapidly; after a certain training time, the loss and accuracy stabilized, indicating that all the models converged. To further validate the results, the five models were compared in terms of accuracy, precision, sensitivity/recall and F1 score.
The results of the different models are shown in Table 6; for comparison, a VGG16 transfer learning model, named Ty-VGG16, was also evaluated. All the new models achieved better accuracy, precision and sensitivity than the LeNet-5 model. The Ty4-TICAENet model performed better than the Ty3-TICAENet model, and the Ty2-TICAENet model achieved the best recognition performance, with an accuracy, precision and sensitivity of 97.12%, 97.13% and 97.12%, respectively. Compared with the LeNet-5 model, the accuracy and sensitivity of the Ty2-TICAENet model improved by 10.57%; compared with the Ty-VGG16 model, its accuracy improved by 1.89%. Based on these results, we conclude that replacing a 5 × 5 convolution kernel with two 3 × 3 convolution kernels improves classification accuracy. However, two 3 × 3 kernels are not equivalent to one 5 × 5 kernel in every respect: better performance is obtained only when large kernels are replaced by small kernels in the optimal combination.
Figure 7 shows the precision, sensitivity and F1 scores of the different models. The closer the abscissa and ordinate of a marker are to 1, the better the model's precision and sensitivity; the larger the marker, the better the model's output. Because the LeNet-5 model's precision, sensitivity and F1 score were all less than 0.87, its indicators are not displayed. The Ty4-TICAENet model performed better than the Ty3-TICAENet model, and the Ty2-TICAENet model had the highest recognition accuracy. This indicates that replacing additional large convolution kernels with smaller ones does not further improve model performance: the framework that replaced two large convolution kernels with four small ones, in Conv-4 and Conv-5, achieved the best performance. The Ty2-TICAENet model is hereafter referred to as the TICAENet model.
3.3. Classification of Typhoon Intensity by Using the TICAENet Model
Figure 8 illustrates the abstract features learned by TICAENet. After Maxpool-5 subsampling, the model efficiently extracts the typhoon eye, cloud wall and dense cloud area features, indicating that the TICAENet model processes satellite cloud images with high efficiency. After Maxpool-5, the TICAENet model feeds the features to the fully connected layers, and the result is output by the softmax function (Table 4).
A confusion matrix, used here to analyze the classification reliability for the different typhoon intensities, is an important indicator for judging model results. Table 7 shows the confusion matrix of the TICAENet model. The classification precision of the TICAENet model exceeded 96.60% for every typhoon intensity and exceeded 97.10% for tropical storms, severe tropical storms and super typhoons. The recognition accuracy for tropical storms was the lowest, which may be due to the lack of a specific cloud structure during typhoon formation; tropical storms are loosely structured and can take diverse forms. Once a typhoon has formed, it has a distinct eye area and spiral cloud band, whose features can be better extracted from satellite cloud images to identify the intensity of the typhoon.
Table 8 displays the proposed model's estimation results for the various typhoon intensities of 241 samples from 2019. Compared with the other typhoon intensities, the severe tropical storm samples, with an MAE of 0.4 m/s and an RMSE of 0.1 m/s, achieved the best estimation effect. The model's intensity estimates essentially reached the level of an operational reference. The model performed worst on the tropical depression and tropical storm samples, which may again be due to the lack of a specific cloud structure during typhoon formation.
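Combining Equation (15) with Equations (16) and (17), the per-category estimation errors of Table 8 could be computed as in the sketch below; the representative per-class wind speeds passed in class_speeds are an assumption (e.g., midpoints of the Table 1 intervals).

```python
import numpy as np

def estimate_wind_speed(probs, class_speeds):
    """Maximum probability linear estimation of Equation (15): interpolate
    between the wind speeds of the two most probable categories a and b."""
    a, b = np.argsort(probs)[-2:]  # indices of the two largest probabilities
    return ((probs[a] * class_speeds[a] + probs[b] * class_speeds[b])
            / (probs[a] + probs[b]))

def mae_rmse(v_true, v_est):
    """MAE and RMSE of Equations (16)-(17)."""
    err = np.asarray(v_true) - np.asarray(v_est)
    return np.abs(err).mean(), np.sqrt((err ** 2).mean())
```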
Table 9 compares the typhoon intensity estimates of the proposed model with those of other methods. The MAE of the typhoon intensity estimates is 4.78 m/s and the RMSE is 6.11 m/s; the proposed model improves on the statistical method by 18.98% and 20.65%, respectively. Compared with the DAVT (deviation-angle variance technique), the K-nearest-neighbors algorithm and linear regression of IR features, TICAENet is more useful for estimating intensity, with estimation accuracies 8.26%, 6.43% and 10.01% higher, respectively. In summary, the TICAENet model can reliably recognize and estimate typhoon intensity and has high research value and good application prospects for typhoon intensity estimation.
4. Discussion and Conclusions
With the development of deep learning, many scholars have used it as a tool in marine meteorology to classify and recognize satellite cloud images. The core challenges of visual computing are the acquisition, processing, analysis and presentation of visual information, primarily images and video. At present, deep learning models have low accuracy in identifying typhoon intensity, particularly for tropical depressions, and extracting cloud image features remains difficult. Studies have shown that feature information can be better extracted by using several smaller convolution kernels instead of a larger one; however, few studies have determined how many small convolution kernels should replace large ones to improve computational efficiency and recognition accuracy. It is therefore critical to develop an accurate model for identifying and estimating typhoon intensity.
In this paper, we first used a framework of five convolution layers with the network topology of the LeNet-5 model to develop a series of models with various convolution kernel sizes; the 5 × 5 convolution kernel extracted satellite cloud image information more effectively than the other kernels. Then, based on the characteristics of the VGG16 model, and replacing a 5 × 5 convolution kernel with two 3 × 3 convolution kernels, we built a new framework combining two-layer convolution plus one-layer pooling blocks with one-layer convolution plus one-layer pooling blocks. A series of hybrid convolution models was successively developed to determine the best combination and achieve better recognition performance by uniting the advantages of the LeNet-5 and VGG16 models. These models had an appropriate network depth and efficiently processed satellite images of the Northwest Pacific Ocean and the South China Sea. The experimental results showed that the TICAENet model had the best recognition performance and efficiently extracted sensitive features such as typhoon eyes, cloud belts and dense cloud areas. The accuracy, precision, sensitivity and F1 score of the TICAENet model were 97.12%, 97.13%, 97.12% and 97.12%, respectively, all higher than those of the other models; compared with the LeNet-5 model, the accuracy and sensitivity improved by 10.57%. On the basis of the TICAENet model, the maximum probability linear estimation method was used to quantitatively estimate typhoon wind speed, with an MAE of 4.78 m/s and an RMSE of 6.11 m/s, estimation accuracies 18.98% and 20.65% higher than those of the statistical method, respectively. Compared with the DAVT, the K-nearest-neighbors algorithm and linear regression of IR features, TICAENet is more useful for estimating intensity, with estimation accuracies 8.26%, 6.43% and 10.01% higher, respectively. The model also takes less memory and runs faster. These results indicate that the TICAENet model has high research value and good application prospects for typhoon intensity estimation. This study therefore suggests that the TICAENet model is well suited to identifying typhoon intensity from satellite images, providing a solid basis for decision-making by relevant organizations and laying a foundation for subsequent typhoon intensity classification, identification and prediction.
Although we performed many experiments with various models, some deficiencies remain due to objective limitations: (1) the experiments lacked an analysis of actual typhoon detection cases, and (2) practical application with large volumes of data was not involved. In the future, we will continue in-depth research in these two areas. Additionally, to further improve model performance, we will focus on combining a recurrent neural network (RNN) with the model and on tuning the network architecture.
Author Contributions: S.J.: data acquisition, data analysis, data interpretation, study design, writing—original draft; L.T.: writing—review and editing. All authors have read and agreed to the published version of the manuscript.
Data Availability Statement: The best tropical cyclone track data used in the experiments were downloaded from the website of the China Meteorological Administration (CMA).
Conflicts of Interest: The authors declare no conflict of interest.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Figure 1. The original satellite cloud image and cut-out image of the strong typhoon Jelawat from JMBSC.
Figure 3. Meteorological satellite cloud image samples after cutting based on the typhoon location for six TC categories.
Figure 4. Feature extraction results of different convolution kernels for a super typhoon.
Typhoon intensity standard level and one-hot label.
Typhoon Level | Maximum Wind Speed (km·h−1) | Maximum Wind Speed (m·s−1) | One-Hot Label |
---|---|---|---|
Tropical depression | 38.9–61.6 | 10.8–17.1 | 100000 |
Tropical storm | 61.7–87.8 | 17.2–24.4 | 010000 |
Severe tropical storm | 87.9–117.4 | 24.5–32.6 | 001000 |
Typhoon | 117.5–149.0 | 32.7–41.4 | 000100 |
Severe typhoon | 149.1–183.2 | 41.5–50.9 | 000010 |
Super typhoon | ≥183.3 | ≥51.0 | 000001 |
Comparison of model accuracy with different kernel sizes.
Model | Validation (%) | Test (%) |
---|---|---|
LeNet-5 | 90.50 | 86.55 |
Ty2-CNN | 95.88 | 92.77 |
Ty3-CNN | 93.00 | 89.39 |
Ty5-CNN | 97.80 | 95.27 |
Ty7-CNN | 92.50 | 91.63 |
Convolution kernel parameters of various models.
Model | Conv-1 | Conv-2 | Conv-3 | Conv-4 | Conv-5 |
---|---|---|---|---|---|
Ty5-CNN | (5 × 5) | (5 × 5) | (5 × 5) | (5 × 5) | (5 × 5) |
Ty1-TICAENet | (5 × 5) | (5 × 5) | (5 × 5) | (5 × 5) | (3 × 3,3 × 3) |
Ty2-TICAENet | (5 × 5) | (5 × 5) | (5 × 5) | (3 × 3,3 × 3) | (3 × 3,3 × 3) |
Ty3-TICAENet | (5 × 5) | (5 × 5) | (3 × 3,3 × 3) | (3 × 3,3 × 3) | (3 × 3,3 × 3) |
Ty4-TICAENet | (5 × 5) | (3 × 3,3 × 3) | (3 × 3,3 × 3) | (3 × 3,3 × 3) | (3 × 3,3 × 3) |
Ty5-TICAENet | (3 × 3,3 × 3) | (3 × 3,3 × 3) | (3 × 3,3 × 3) | (3 × 3,3 × 3) | (3 × 3,3 × 3) |
Network structure of the Ty2-TICAENet model.
Ty2-TICAENet | Input Shape | Out Shape | Kernel | Stride | Params |
---|---|---|---|---|---|
Conv-1 | [3,224,224] | [6,220,220] | [5,5] | 1 | 456.0 |
Maxpool-1 | [6,220,220] | [6,110,110] | [2,2] | 2 | 0.0 |
Bn-1 | [6,110,110] | [6,110,110] | - | 0 | 12.0 |
Conv-2 | [6,110,110] | [16,106,106] | [5,5] | 1 | 2416.0 |
Maxpool-2 | [16,106,106] | [16,53,53] | [2,2] | 2 | 0.0 |
Bn-2 | [16,53,53] | [16,53,53] | - | 0 | 32.0 |
Conv-3 | [16,53,53] | [24,49,49] | [5,5] | 1 | 9624.0 |
Maxpool-3 | [24,49,49] | [24,24,24] | [2,2] | 2 | 0.0 |
Bn-3 | [24,24,24] | [24,24,24] | - | 0 | 48.0 |
Conv-4 | [24,24,24] | [24,22,22] | [3,3] | 1 | 5208.0 |
Conv-5 | [24,22,22] | [32,20,20] | [3,3] | 1 | 6944.0 |
Maxpool-4 | [32,20,20] | [32,10,10] | [2,2] | 2 | 0.0 |
Bn-4 | [32,10,10] | [32,10,10] | - | 0 | 64.0 |
Conv-6 | [32,10,10] | [32,8,8] | [3,3] | 1 | 9248.0 |
Conv-7 | [32,8,8] | [64,6,6] | [3,3] | 1 | 18,496.0 |
Maxpool-5 | [64,6,6] | [64,3,3] | [2,2] | 2 | 0.0 |
Bn-5 | [64,3,3] | [64,3,3] | - | 0 | 128.0 |
Fc-1 | [64,3,3] | [1024] | - | - | 590,848.0 |
Fc-2 | [1024] | [512] | - | - | 524,800.0 |
Fc-3 | [512] | [6] | - | - | 3078.0 |
Softmax | [6] | [6] | - | - | 0 |
Total | - | - | - | - | 1,171,402.0 |
Comparison of different model parameters.
Models | Params (Million) | Memory (MB) | Flops (Million) | MemR + W (MB) |
---|---|---|---|---|
LeNet-5 | 46.65 | 2.42 | 96.18 | 182.17 |
VGG16 | 134.28 | 55.10 | 1402.29 | 623.00 |
AlexNet | 46.03 | 4.76 | 901.31 | 185.59 |
Ty2-TICAENet | 1.17 | 2.65 | 80.27 | 8.79 |
Accuracy, precision and sensitivity for six TC categories using various models.
Model | Accuracy (%) | Precision (%) | Sensitivity (%) |
---|---|---|---|
Ty-VGG16 | 95.26 | 95.32 | 95.27 |
Ty5-CNN | 95.23 | 95.25 | 95.23 |
Ty1-TICAENet | 96.29 | 96.40 | 96.29 |
Ty2-TICAENet | 97.12 | 97.13 | 97.12 |
Ty3-TICAENet | 95.72 | 95.87 | 95.72 |
Ty4-TICAENet | 96.63 | 96.68 | 96.63 |
Ty5-TICAENet | 96.40 | 96.47 | 96.40 |
Confusion matrix of the test samples for the TICAENet model.
One-Hot | 100000 | 010000 | 001000 | 000100 | 000010 | 000001 |
---|---|---|---|---|---|---|
100000 | 440 | 0 | 0 | 0 | 0 | 0 |
010000 | 5 | 414 | 11 | 6 | 1 | 3 |
001000 | 6 | 10 | 422 | 2 | 0 | 0 |
000100 | 3 | 0 | 1 | 428 | 5 | 3 |
000010 | 0 | 0 | 0 | 4 | 433 | 3 |
000001 | 0 | 2 | 0 | 2 | 9 | 427 |
Total | 454 | 426 | 434 | 442 | 448 | 436 |
Precision (%) | 96.92 | 97.18 | 97.24 | 96.83 | 96.65 | 97.94 |
Analysis of the typhoon intensity estimation of satellite cloud images acquired in 2019.
Typhoon Level | Maximum Wind Speed (m·s−1) | Samples | MAE (m/s) | RMSE (m/s) |
---|---|---|---|---|
Tropical depression | 10.8–17.1 | 99 | 7.7 | 8.6 |
Tropical storm | 17.2–24.4 | 29 | 3.7 | 4.2 |
Severe tropical storm | 24.5–32.6 | 27 | 0.4 | 0.1 |
Typhoon | 32.7–41.4 | 18 | 2.4 | 2.8 |
Severe typhoon | 41.5–50.9 | 24 | 4.7 | 5.0 |
Super typhoon | ≥51.0 | 44 | 2.6 | 3.1 |
Comparison of intensity estimates among various methods.
Method | MAE (m/s) | RMSE (m/s) |
---|---|---|
Feature-based decision tree | - | 6.12 |
Statistical method | 5.90 | 7.70 |
DAVT | - | 6.66 |
K-nearest-neighbors algorithm | - | 6.53 |
Linear regression of IR features | - | 6.79 |
Ty5-CNN | 7.03 | 8.63 |
TICAENet | 4.78 | 6.11 |
Appendix A
The LeNet-5 model is a feedforward neural network whose artificial neurons respond to units within a local coverage area, enabling it to process data rapidly. Local connections and weight sharing are used to extract features from the original data and to construct dense and complete feature vectors.
Appendix B
The VGG16 convolutional neural network is a network structure proposed by the Oxford University Visual Geometry Group for the 2014 ILSVRC (ImageNet Large Scale Visual Recognition Challenge) competition. To solve ImageNet's 1000-class image localization and classification task, Simonyan et al. [21] increased the network depth to 16–19 weight layers while using small 3 × 3 convolution kernels throughout.
References
1. Emanuel, K. Increasing destructiveness of tropical cyclones over the past 30 years. Nature; 2005; 436, pp. 686-688. [DOI: https://dx.doi.org/10.1038/nature03906] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/16056221]
2. Patricola, C.M.; Wehner, M.F. Anthropogenic influences on major tropical cyclone events. Nature; 2018; 563, pp. 339-346. [DOI: https://dx.doi.org/10.1038/s41586-018-0673-2] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/30429550]
3. Chen, L.L.; Tseng, C.H.; Shih, Y.H. Climate-related economic losses in Taiwan. Int. J. Glob. Warm.; 2017; 11, pp. 449-463. [DOI: https://dx.doi.org/10.1504/IJGW.2017.083670]
4. Qi, W.; Yong, B.; Gourley, J.J. Monitoring the super typhoon lekima by GPM-based near-real-time satellite precipitation estimates. J. Hydrol.; 2021; 603, 126968. [DOI: https://dx.doi.org/10.1016/j.jhydrol.2021.126968]
5. Yang, L.; Liu, M.; Smith, J.A.; Tian, F. Typhoon Nina and the August 1975 Flood over Central China. J. Hydrometeorol.; 2017; 18, pp. 451-472. [DOI: https://dx.doi.org/10.1175/JHM-D-16-0152.1]
6. Chen, S.T. Probabilistic forecasting of coastal wave height during typhoon warning period using machine learning methods. J. Hydroinform.; 2019; 21, pp. 343-358. [DOI: https://dx.doi.org/10.2166/hydro.2019.115]
7. Chang, H.K.; Liou, J.C.; Liu, S.J.; Liaw, S.R. Simulated wave-driven ANN model for typhoon waves. Adv. Eng. Softw.; 2011; 42, pp. 25-34. [DOI: https://dx.doi.org/10.1016/j.advengsoft.2010.10.014]
8. Duan, Y.; Shen, Y.R.; Canbulat, I.; Luo, X.; Si, G.Y. Classification of clustered micro-seismic events in a coal mine using machine learning. J. Rock Mech. Geotech. Eng.; 2021; 13, pp. 1256-1273. [DOI: https://dx.doi.org/10.1016/j.jrmge.2021.09.002]
9. Liu, D.; Liu, H.; Wu, Y.; Zhang, W.; Wang, Y.; Santosh, M. Characterization of geo-material parameters: Gene concept and big data approach in geotechnical engineering. Geosyst. Geoenviron.; 2022; 1, 100003. [DOI: https://dx.doi.org/10.1016/j.geogeo.2021.09.003]
10. Ma, W.; Wu, X.; Chen, X.; Bao, C. Satellite Cloud Maps’ Features of Typhoon Formation. Mar. Forecast.; 2000; 17, pp. 1-10.
11. Su, X. Using Deep Learning Model for Meteorological Satellite Cloud Image Prediction. AGU Fall Meet. Abstr.; 2017; 2017, IN13B-0064.
12. Zhou, J.; Xiang, J.; Huang, S. Classification and Prediction of Typhoon Levels by Satellite Cloud Pictures through GC-LSTM Deep Learning Model. Sensors; 2020; 20, 5132. [DOI: https://dx.doi.org/10.3390/s20185132]
13. Zhao, L.; Chen, Y.; Sheng, V.S. A real-time typhoon eye detection method based on deep learning for meteorological information forensics. J. Real Time Image Process.; 2020; 17, pp. 95-102. [DOI: https://dx.doi.org/10.1007/s11554-019-00899-2]
14. Wang, C.; Zheng, G.; Li, X.; Xu, Q.; Liu, B.; Zhang, J. Tropical cyclone intensity estimation from geostationary satellite imagery using deep convolutional neural networks. IEEE Trans. Geosci. Remote Sens.; 2021; 60, 4101416. [DOI: https://dx.doi.org/10.1109/TGRS.2021.3066299]
15. Zhang, C.J.; Wang, X.J.; Ma, L.M.; Lu, X.Q. Tropical cyclone intensity classification and estimation using infrared satellite images with deep learning. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens.; 2021; 14, pp. 2070-2086. [DOI: https://dx.doi.org/10.1109/JSTARS.2021.3050767]
16. Rüttgers, M.; Lee, S.; You, D. Typhoon track prediction using satellite images in a Generative Adversarial Network. arXiv; 2018; arXiv: 1808.05382[DOI: https://dx.doi.org/10.1038/s41598-019-42339-y]
17. Zou, G.; Qian, H.; Zheng, Z.; Huang, D.; Liu, Z. Classification of Typhoon Grade Based on Satellite Cloud Image and Deep Learning. Remote Sens. Inf.; 2019; 34, pp. 1-6.
18. Zhang, W.; Phoon, K.-K. Editorial for Advances and applications of deep learning and soft computing in geotechnical underground engineering. J. Rock Mech. Geotech. Eng.; 2022; 14, pp. 671-673. [DOI: https://dx.doi.org/10.1016/j.jrmge.2022.01.001]
19. Srivastava, R.K.; Greff, K.; Schmidhuber, J. Training very deep networks. Advances in Neural Information Processing Systems 28, Proceedings of the 29th Annual Conference on Neural Information Processing Systems, Montreal, QC, Canada, 7–12 December 2015; Cortes, C.; Lawrence, N.D.; Lee, D.D.; Sugiyama, M.; Garnett, R., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2015.
20. Wang, G.; Gong, J. Facial expression recognition based on improved LeNet-5 CNN. Proceedings of the IEEE 31st Chinese Control and Decision Conference (CCDC); Nanchang, China, 3–5 June 2019; pp. 5655-5660.
21. Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv; 2014; arXiv: 1409.1556
22. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR); Las Vegas, NV, USA, 27–30 June 2016; pp. 770-778.
23. Tan, M.; Le, Q.V. Mixconv: Mixed depthwise convolutional kernels. arXiv; 2019; arXiv: 1907.09595
24. Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. Rethinking the inception architecture for computer vision. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR); Las Vegas, NV, USA, 27–30 June 2016.
25. Ismanto, H.; Marfai, M.A. Classification Tree Analysis (Gini-Index) Smoke Detection using Himawari-8 Satellite Data over the Sumatera-Borneo Maritime Continent, Southeast Asia. IOP Conf. Ser. Earth Environ. Sci.; 2019; 256, 12043. [DOI: https://dx.doi.org/10.1088/1755-1315/256/1/012043]
26. Kikuchi, M.; Murakami, H.; Suzuki, K.; Nagao, T.M.; Higurashi, A. Improved hourly estimates of aerosol optical thickness using spatiotemporal variability derived from Himawari-8 geostationary satellite. IEEE Trans. Geosci. Remote Sens.; 2018; 56, pp. 3442-3455. [DOI: https://dx.doi.org/10.1109/TGRS.2018.2800060]
27. Bankert, R.L.; Cossuth, J. Tropical Cyclone Intensity Estimation via Passive Microwave Data Features. Proceedings of the 32nd Conference on Hurricanes and Tropical Meteorology; San Juan, Puerto Rico, 17–22 April 2016.
28. Lu, X.Q.; Lei, X.T.; Yu, H.; Zhao, B.K. An objective TC intensity estimation method based on satellite data. J. Appl. Meteorol. Sci.; 2014; 25, pp. 52-58.
29. Ritchie, E.A.; Valliere-Kelley, G.; Piñeros, M.F.; Tyo, J.S. Tropical cyclone intensity estimation in the north Atlantic basin using an improved deviation angle variance technique. Weather Forecast.; 2012; 27, pp. 1264-1277. [DOI: https://dx.doi.org/10.1175/WAF-D-11-00156.1]
30. Fetanat, G.; Homaifar, A.; Knapp, K.R. Objective tropical cyclone intensity estimation using analogs of spatial features in satellite data. Weather Forecast.; 2013; 28, pp. 1446-1459. [DOI: https://dx.doi.org/10.1175/WAF-D-13-00006.1]
31. Kossin, J.P.; Knapp, K.R.; Vimont, D.J.; Murnane, R.J.; Harper, B.A. A globally consistent reanalysis of hurricane variability and trends. Geophys. Res. Lett.; 2007; 34, L04815. [DOI: https://dx.doi.org/10.1029/2006GL028836]
32. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE; 1998; 86, pp. 2278-2324. [DOI: https://dx.doi.org/10.1109/5.726791]
33. El-Sawy, A.; EL-Bakry, H.; Loey, M. CNN for Handwritten Arabic Digits Recognition Based on LeNet-5 BT. Proceedings of the International Conference on Advanced Intelligent Systems and Informatics 2016; Cairo, Egypt, 24–26 October 2016; Hassanien, A.E.; Shaalan, K.; Gaber, T.; Azar, A.T.; Tolba, M.F. Springer International Publishing: Cham, Switzerland, 2017; pp. 566-575.
34. Fan, Y.; Rui, X.; Poslad, S.; Zhang, G.; Yu, T.; Xu, X.; Song, X. A better way to monitor haze through image based upon the adjusted LeNet-5 CNN model. Signal Image Video Process.; 2019; 14, pp. 455-463. [DOI: https://dx.doi.org/10.1007/s11760-019-01574-6]
35. Zhang, C.; Yue, X.; Wang, R.; Li, N.; Ding, Y. Study on Traffic Sign Recognition by Optimized Lenet-5 Algorithm. Int. J. Pattern Recognit. Artif. Intell.; 2020; 34, pp. 158-165. [DOI: https://dx.doi.org/10.1142/S0218001420550034]
36. Krinitskiy, M.; Verezemskaya, P.; Grashchenkov, K.; Tilinina, N.; Gulev, S.; Lazzara, M. Deep Convolutional Neural Networks Capabilities for Binary Classification of Polar Mesocyclones in Satellite Mosaics. Atmosphere; 2018; 9, 426. [DOI: https://dx.doi.org/10.3390/atmos9110426]
37. Hridayami, P.; Putra, K.G.D.; Wibawa, K.S. Fish Species Recognition Using VGG16 Deep Convolutional Neural Network. J. Comput. Sci. Eng.; 2019; 13, pp. 124-130. [DOI: https://dx.doi.org/10.5626/JCSE.2019.13.3.124]
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
Abstract
In this paper, a novel typhoon intensity classification and estimation network (TICAENet) is constructed to recognize typhoon intensity. The TICAENet model is based on the LeNet-5 model, which uses weight sharing to reduce the number of training parameters, and the VGG16 model, which replaces a large convolution kernel with multiple small kernels to improve feature extraction. Satellite cloud images of typhoons over the Northwest Pacific Ocean and the South China Sea from 1995–2020 are taken as samples. The results show that the classification accuracy of this model is 10.57% higher than that of the LeNet-5 model; the classification accuracy of the TICAENet model is 97.12%, with a classification precision above 97.00% for tropical storms, severe tropical storms and super typhoons. The mean absolute error (MAE) and root mean square error (RMSE) of the 2019 sample estimates are 4.78 m/s and 6.11 m/s, and the estimation accuracies are 18.98% and 20.65% higher than those of the statistical method, respectively. Additionally, the model takes less memory and runs faster due to the weight sharing and multiple small kernels. The results show that the proposed model performs better than other methods. In general, the proposed model can be used to accurately classify typhoon intensity and estimate the maximum wind speed by extracting features from geostationary meteorological satellite images.