Abstract: Accurate tropical cyclone (TC) intensity estimation is crucial for preventing and mitigating TC-related disasters. Despite recent advances in TC intensity estimation using convolutional neural networks (CNNs), existing techniques fail to adequately incorporate a priori knowledge of TCs; as a result, information strongly correlated with TC intensity can be obscured by irrelevant data, limiting model performance. To address this challenge, we introduce the Convective-Stratiform Technique (CST), which acts as a physical constraint on the model, to extract pivotal features from the convective core in satellite infrared imagery. Concurrently, we propose a new dual-branch TC intensity estimation model, comprising a "Satellite Imagery Analysis Branch" that extracts overall features from satellite imagery and a "Physics-Guided Branch" that analyzes the identified convective cores. We further improve the estimation accuracy by incorporating key physical and environmental factors that are often overlooked by such models. We train the model on 1285 TC cases globally during 2003-2016 and evaluate the best-optimized model on an independent test set of 95 TC cases globally from 2017. The results show that the root mean square error (RMSE) of TC intensity estimation is 8.13 kt, demonstrating superior performance compared to existing deep learning models.
Key words: tropical cyclone (TC); convolutional neural network; a priori knowledge; remote sensing
1 INTRODUCTION
Tropical cyclones (TCs) are one of the most devastating weather systems on Earth. Their life cycle is divided into four stages: generation, development, maturity, and dissipation. The morphological characteristics of a TC's eye evolve through these stages. Intensifying TCs typically exhibit a clear, symmetrical eye with spiral structures; mature TCs feature distinct vortex cloud systems, eyewalls, and spiral rainbands (Wang and Wu 2004); weak TCs, by contrast, exhibit irregular eyewall characteristics. TC intensity, defined as the maximum sustained wind speed near the center of a TC, is important for understanding TC development and evolution, improving TC forecasting, and enabling effective disaster prevention and mitigation (Bloemendaal et al. 2021).
Traditional meteorological approaches for TC intensity estimation include the Dvorak technique (Dvorak 1975), the Advanced Dvorak Technique (ADT) (Olander and Velden 2019), the Deviation-Angle Variance (DAV) technique (Ritchie et al. 2014), and the Satellite Consensus technique (SATCON) (Herndon et al. 2010). Among them, the widely used Dvorak technique relies on correlating TC rotation, eye shape, and thunderstorm activity with intensity changes, assuming that similar intensities manifest in comparable satellite cloud patterns. To reduce subjective influence in the estimation, Olander and Velden (2019) proposed the ADT, which improves on the Dvorak technique by utilizing a computer-based cloud feature identification algorithm and applying linear regression to determine the storm center. However, the ADT faces limitations with weak TCs because of their less pronounced vortex structures. The DAV technique measures the symmetry of TC cloud structure by analyzing the gradient of infrared (IR) brightness temperature, establishing a link between TC strength and cloud structure; however, its accuracy can be compromised when strong wind shear shifts the TC center. SATCON integrates the ADT with other polar-orbiting satellite-based methods to offer a hybrid global TC intensity estimation approach, employing a statistically derived weighting scheme to maximize the strengths and minimize the weaknesses of each technique and thereby generate consistent estimates across various TC structures. Despite these advancements, the inherent technical limitations and the subjectivity in analyzing TC-related cloud structures continue to restrict the precision of these traditional techniques.
With the rapid development of deep learning in image processing, particularly convolutional neural networks (CNNs) (Krizhevsky et al. 2017; Simonyan and Zisserman 2014; Zeiler and Fergus 2014; Huang et al. 2017), TC intensity estimation has experienced a significant breakthrough. CNNs can autonomously learn and extract spatial features from satellite imagery, presenting a transformative approach to TC intensity estimation. Combinido et al. (2018) pioneered a pre-trained VGG19 model tailored for TC intensity analysis by freezing its convolutional layers, marking the first application of satellite IR imagery to regress TC intensity. Pradhan et al. (2017) designed a CNN-based architecture for classifying TCs by intensity, achieving higher accuracy and lower root mean square error (RMSE) than existing methods; their approach also allowed the visualization and deconvolution of features at distinct layers, providing insight into the learning process. Chen et al. (2018) contributed a novel dataset derived from remote sensing satellites for estimating TC intensity from satellite imagery, comprising four channels: Visible (VIS), Passive Microwave (PMW) rain rate, Infrared (IR), and Water Vapor (WV). They proposed an AlexNet-based model, marking the first incorporation of domain knowledge (rotational invariance) into a convolutional model to enhance overall performance.
Furthering this work, Chen et al. (2019) achieved superior accuracy in TC intensity estimation by integrating environmental factors, such as latitude and longitude, into their model; notably, they applied rotational invariance at test time by rotating each sample by multiple angles and averaging the resulting predictions. In another study (Lee et al. 2020), 2D- and 3D-CNNs were applied to analyze the relationship between multi-spectral geostationary satellite imagery and TC intensity. Tian et al. (2022) effectively applied a 3D-CNN to process multichannel satellite imagery, extracting information from each channel. The addition of attention mechanisms (Tan et al. 2022; Wang et al. 2021; Wang et al. 2022) enhanced feature extraction, enabling models to focus on aspects more relevant to TC intensity. In a recent study, Lee et al. (2021) combined the Dvorak technique with deep learning to exploit hidden correlations within TC imagery, establishing connections between query satellite imagery and historical TC event imagery and yielding more accurate and interpretable results for forecasters. Higa et al. (2021) developed an automated Dvorak technique that uses meteorological domain knowledge and preprocessed satellite imagery to refine the representation of cloud distribution around the eye, eyewall, and center of the TC. Addressing the importance of wind speed information, Xu et al. (2023) integrated wind scale conversion rules into their network to simulate variations in the wind field during TC intensity estimation.
Despite these advances toward more reliable estimates, certain challenges persist. For example, some studies fail to consider the a priori knowledge and relevant features commonly used in traditional meteorology: they feed vast amounts of information, such as raw satellite imagery, into the model for learning. Consequently, the model becomes susceptible to interference from extraneous information, potentially leading to biases even under the influence of attention mechanisms.
Therefore, this study proposes a novel dual-branch CNN model that leverages a 3D-CNN to extract features from multi-channel satellite imagery (IR, PMW, WV) and a 2D-CNN to extract information from identified convective core imagery, building on validated meteorological knowledge. To enhance feature learning, we incorporate two attention modules. Moreover, we integrate selected physical and environmental factors that are highly correlated with TC intensity yet easily ignored by conventional models.
The rest of the paper is organized as follows. Section 2 describes the dataset and data pre-processing. Section 3 introduces our proposed model. Section 4 discusses the results of the experiments and Section 5 summarizes the study.
2 DATA AND PRE-PROCESSING
2.1 Data
This study uses satellite imagery data sourced from the integrated Tropical Cyclone for Image-to-intensity Regression (TCIR) dataset (Chen et al. 2018), which combines data from two publicly available sources: GridSat (Inamdar and Knapp 2015; Knapp et al. 2011) and CMORPH (Joyce et al. 2004). The dataset provides satellite imagery in four channels: IR, WV, VIS, and PMW. GridSat contributes the IR, WV, and VIS channel data at 3-hour intervals with a spatial resolution of 0.07°. The PMW channel is sourced from CMORPH at the same 3-hour interval but with a spatial resolution of 0.25°. To ensure a uniform channel scale, the PMW channel is resampled to 0.07° using linear interpolation. Since the VIS channel is extremely unstable, we exclude it from the analysis and use only the IR, WV, and PMW channels. The convective core (CC) image is derived from the IR imagery using the Convective-Stratiform Technique (CST; Adler and Negri 1988); this process is described further in Section 3. Detailed information is given in Table 1.
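To make the resampling step concrete, the following is a minimal sketch, assuming the PMW field arrives as a 2D numpy array on the 0.25° CMORPH grid; the function name is illustrative and not part of the TCIR tooling:

```python
# Minimal sketch of the PMW resampling step, assuming the PMW field is a
# 2D numpy array on the 0.25-degree CMORPH grid. The function name is
# illustrative, not part of the TCIR dataset tooling.
import numpy as np
from scipy.ndimage import zoom

def resample_pmw_to_007deg(pmw: np.ndarray) -> np.ndarray:
    """Linearly interpolate a PMW field from 0.25 deg to 0.07 deg spacing."""
    factor = 0.25 / 0.07  # ~3.57x upsampling so all channels share one scale
    return zoom(pmw, zoom=factor, order=1)  # order=1 -> (bi)linear interpolation
```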
2.2 Data pre-processing
We use TC data from 2003-2017 and categorize TC intensity based on the Saffir-Simpson Hurricane Wind Scale (SSHWS) (Pradhan et al. 2017). The relevant quantities and trajectories are shown in Fig. 1 and Table 2. We divide the 2003-2016 data into training and validation sets (at a ratio of 9:1) and use the 2017 data as the test set to evaluate the performance of the best model. The specific sample information is shown in Table 3. Each frame is a satellite image of 201 × 201 pixels centered on the TC core. The resolution is 0.07° in both latitude and longitude, giving a frame width and height of 14°; consequently, the distance between two adjacent pixels is approximately 4 km. Before inputting the data into the model, we horizontally flip the TC imagery from the Southern Hemisphere to ensure a consistent rotation direction between the two hemispheres. We then apply center cropping, random rotation, and normalization. Center cropping eliminates peripheral irrelevant features while preserving the basic TC structure, reducing the satellite cloud imagery from 201 × 201 to 121 × 121 pixels. To prevent overfitting during training, we exploit the rotational invariance of TCs by randomly rotating the imagery, as sketched below. Finally, we normalize the data using z-score normalization to improve comparability.
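As an illustration, a minimal per-frame version of this pipeline might look as follows; the single-channel input, function name, and the exact ordering of rotation and cropping are our own assumptions rather than the authors' code:

```python
import numpy as np
from scipy.ndimage import rotate

def preprocess_frame(frame: np.ndarray, lat: float, training: bool = True) -> np.ndarray:
    """Preprocess one 201x201 single-channel frame as described above."""
    # Horizontally flip Southern Hemisphere frames so the rotation
    # direction matches the Northern Hemisphere.
    if lat < 0:
        frame = np.fliplr(frame)
    # Random rotation (training only) exploits the rotational invariance of TCs.
    if training:
        angle = float(np.random.uniform(0.0, 360.0))
        frame = rotate(frame, angle, reshape=False, order=1, mode="nearest")
    # Center crop 201x201 -> 121x121 to drop peripheral, irrelevant features.
    c, half = frame.shape[0] // 2, 121 // 2
    frame = frame[c - half:c + half + 1, c - half:c + half + 1]
    # z-score normalization for comparability across samples and channels.
    return (frame - frame.mean()) / (frame.std() + 1e-8)
```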
3 METHODOLOGY
This section introduces the process of extracting convective cores and producing convective core images. We also provide a general overview and details of the proposed model.
3.1 Convective Core
It has been shown that the generation and development of convective systems significantly influence TC intensity changes (Adler and Negri 1988). Lu and Yu (2013) used linear regression for intensity estimation, manually extracting convective cores and validating their correlation with TC intensity based on factors such as core count, brightness temperature, and spatial location. Building on this foundation, Zhou et al. (2023) incorporated convective core features into deep learning to better guide model learning. However, that work relied on manually extracted convective core features, introducing a subjective element into the process; despite introducing convective cores into the network, their features were not fully utilized. In our approach, we go beyond simple extraction and introduce a novel step of generating convective core images after the extraction, as illustrated in Fig. 2. The resulting imagery is cropped to 65 × 65 pixels to capture the strong correlation between convective core features and TC intensity within a 135-km radius. After a series of preprocessing steps, these images are fed into the model for learning. Leveraging the properties of convolution, our model mines essential features from the convective core imagery to improve TC intensity estimation. For a satellite IR image in which each pixel carries its brightness temperature, convective cores are defined as the pixels that satisfy all of the following conditions:
$$CT_{i,j} < CT_{m,n}, \quad \forall (m,n) \in \mathrm{Neighbors}_{i,j} = \{(i-1,j),\ (i+1,j),\ (i,j-1),\ (i,j+1)\}$$
where $CT_{i,j}$ is the brightness temperature (in Kelvin) of pixel $(i,j)$ in the IR imagery; $CT_{i-1,j}$, $CT_{i+1,j}$, $CT_{i,j+1}$, and $CT_{i,j-1}$ are the temperatures of the surrounding pixels; and $\mathrm{Neighbors}_{i,j}$ denotes the set of pixels neighboring $(i,j)$. Following the CST, we take the pixels satisfying the above condition as the sought convective cores. We further extract three convective core features: the total number of convective cores (Num) and the highest ($CT_{max}$) and lowest ($CT_{min}$) brightness temperatures of the convective cores.
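A minimal sketch of this extraction is given below. It implements only the local-minimum condition in the equation above (the full CST of Adler and Negri (1988) adds brightness-temperature threshold and slope tests that we omit here), and the function name is illustrative:

```python
import numpy as np

def extract_convective_cores(ct: np.ndarray):
    """Return the convective-core image plus the Num/CTmax/CTmin features
    from an IR brightness-temperature array ct (Kelvin)."""
    center = ct[1:-1, 1:-1]
    # A pixel is a core if it is colder than all four of its neighbors.
    is_core = ((center < ct[:-2, 1:-1]) & (center < ct[2:, 1:-1]) &
               (center < ct[1:-1, :-2]) & (center < ct[1:-1, 2:]))
    core_temps = center[is_core]
    num = int(is_core.sum())                                   # Num
    ct_max = float(core_temps.max()) if num else float("nan")  # CTmax
    ct_min = float(core_temps.min()) if num else float("nan")  # CTmin
    # Convective core image: keep BT at core pixels, zero elsewhere
    # (cropped to 65x65 around the TC center in a later step).
    core_img = np.zeros_like(ct)
    core_img[1:-1, 1:-1][is_core] = core_temps
    return core_img, num, ct_max, ct_min
```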
3.2 Proposed Model
The overall architecture of the Tropical Cyclone Intensity Estimation Network (TCIE-Net) is shown in Fig. 3. It comprises three main modules: the dual-branch module, the attention module, and the intensity estimation module. In the dual-branch module, a 3D-CNN and a 2D-CNN extract and learn features from the satellite cloud imagery and the convective core images, respectively. Because the three channels of satellite cloud imagery contain a large amount of information, a 3D-CNN is preferred over a 2D-CNN for feature extraction so that the information in each channel is effectively leveraged (Tian et al. 2022). To enhance the model's ability to recognize and learn the most valuable information, we adopt branch attention mechanisms (Woo et al. 2018): the features extracted from the two branches are passed to the Channel Focus Module (CFM) and the Spatial Focus Module (SFM), respectively, for further feature extraction. After channel-wise and spatial feature extraction and fusion, the fused features are input into the intensity estimation module along with environmental factors (the TC center coordinates) and physical factors (mainly the temperature and quantity information of the convective cores) to estimate the current TC intensity.
3.3 Channel Focus Module
The attention mechanism plays a pivotal role in effectively filtering features, allowing the model to focus on important information within satellite imagery. In our design, we integrate the features extracted from satellite imagery into the CFM for a weighted recalibration. Satellite imagery serves as the source of attention in this process.
The CFM operates in two main steps: squeeze and excitation. During the squeeze phase, a sequence of convolution, pooling, and further convolution operations is applied to the feature maps. This sequence refines the extracted satellite imagery features, enabling the module to capture global channel information more effectively. The resulting feature maps are then flattened to obtain $F_l$, a representation of the refined features.
In the subsequent excitation phase, the feature map undergoes further processing to learn inter-channel dependencies and to compute the associated weights, denoted $W_l$. These weights reflect the importance of each channel and are used to recalibrate the original features. Specifically, the weights $W_l$ are applied to $F_l$ through element-wise multiplication, yielding the final output of the CFM, defined as $L_1 = F_l \odot W_l$.
Additionally, we incorporate an average pooling operation in the convolution process within the CFM, which facilitates the rapid compression of global information. This step is essential for adjusting inter-channel information and enhancing the recalibration effectiveness in subsequent stages.
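The following PyTorch sketch captures this squeeze-and-excitation flow as we read it from the description above; the layer widths, kernel sizes, and the sigmoid gating are our own assumptions rather than the paper's exact configuration:

```python
import torch
import torch.nn as nn

class ChannelFocusModule(nn.Module):
    """Sketch of the CFM: squeeze (conv -> average pool -> conv, flattened
    to F_l) and excitation (FC layers producing weights W_l), with output
    L_1 = F_l * W_l as defined above."""
    def __init__(self, channels: int, spatial: int):
        super().__init__()
        self.squeeze = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.AvgPool2d(2),  # average pooling rapidly compresses global info
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        feat_dim = channels * (spatial // 2) ** 2
        self.excite = nn.Sequential(  # learns inter-channel dependencies
            nn.Linear(feat_dim, channels), nn.ReLU(inplace=True),
            nn.Linear(channels, feat_dim), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.squeeze(x).flatten(1)  # F_l: refined, flattened features
        w = self.excite(f)              # W_l: importance weight per feature
        return f * w                    # L_1 = F_l (element-wise) W_l
```

For a (B, 32, 16, 16) input, for instance, `ChannelFocusModule(32, 16)` returns a (B, 2048) recalibrated feature vector that can then be fused with the SFM output.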
3.4 Spatial Focus Module
To enhance spatial feature extraction from convective core images, we introduce the SFM, designed to refine the preliminarily extracted features of the convective core. This module enables the model to effectively learn spatial distance and structural features of the convective core.
In the squeeze phase, the convective core image features are further refined using a 2D-CNN, allowing for more effective extraction of spatial features within the convective core. The refined feature map is then passed to different branches; first, it is flattened to obtain $f_l$, a fine-grained spatial feature representation.
In the excitation phase, the feature map undergoes further convolutional processing, followed by three fully connected layers that compute the spatial feature weights $w_l$, which indicate the importance of each spatial feature within the convective core. These weights $w_l$ are then applied to the original spatial features through element-wise multiplication, yielding the final output of the SFM, defined as $L_2 = f_l \odot w_l$.
To preserve the spatial integrity of the convective core features, no pooling operation is applied within the SFM, which is crucial for precise attention calibration. This design choice ensures that the spatial structure and intricate patterns of the convective core are maintained, enabling the SFM to perform fine-grained, spatial-level attention processing.
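A corresponding PyTorch sketch of the SFM is given below, again with illustrative layer widths; note that, unlike the CFM sketch above, there is deliberately no pooling anywhere in the module:

```python
import torch
import torch.nn as nn

class SpatialFocusModule(nn.Module):
    """Sketch of the SFM: a pooling-free 2D-CNN squeeze (flattened to f_l)
    and an excitation path of one convolution plus three fully connected
    layers producing spatial weights w_l, with output L_2 = f_l * w_l."""
    def __init__(self, channels: int, spatial: int):
        super().__init__()
        # Squeeze: refine convective-core features; no pooling, so the
        # spatial structure of the core field is fully preserved.
        self.squeeze = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.excite_conv = nn.Conv2d(channels, channels, 3, padding=1)
        feat_dim = channels * spatial * spatial
        hidden = feat_dim // 4
        self.excite_fc = nn.Sequential(  # three FC layers -> weights w_l
            nn.Linear(feat_dim, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, feat_dim), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        refined = self.squeeze(x)
        f = refined.flatten(1)                                    # f_l
        w = self.excite_fc(self.excite_conv(refined).flatten(1))  # w_l
        return f * w                                              # L_2
```

Computing the attention at full spatial resolution mirrors the design choice above: fine core structure survives all the way to the weighting step.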
3.5 Evaluation Indicators
To quantify the difference between model predictions and true values, we use the root mean square error (RMSE) and the mean absolute error (MAE) to calculate the error between the estimated and observed intensities. The specific formulas are:
$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(Y_i - Y_i^{\mathrm{estimate}}\right)^2}, \qquad \mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|Y_i - Y_i^{\mathrm{estimate}}\right|$$
Here, $n$ represents the number of samples, $Y_i$ denotes the actual observed value of the $i$-th sample, and $Y_i^{\mathrm{estimate}}$ stands for the estimated value of the corresponding sample. RMSE amplifies the impact of extreme errors by taking the square root of the mean squared estimation residuals, thereby placing greater emphasis on large deviations between estimated and actual values. MAE, conversely, is the mean of the absolute estimation residuals, reflecting the average absolute deviation between estimates and observations. The two metrics assess estimation accuracy from different perspectives; lower values indicate better estimation performance.
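For concreteness, both metrics follow directly from the formulas above; the sketch below uses made-up intensity values purely for illustration:

```python
# Direct implementation of the RMSE and MAE formulas above; the intensity
# values (kt) are made up for illustration only.
import numpy as np

def rmse(y_true, y_est):
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_est)) ** 2)))

def mae(y_true, y_est):
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_est))))

y_true = np.array([35.0, 64.0, 96.0, 137.0])  # best-track intensities (kt)
y_est = np.array([33.0, 70.0, 90.0, 128.0])   # model estimates (kt)
print(rmse(y_true, y_est), mae(y_true, y_est))  # ~6.26 kt and 5.75 kt
```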
4 EXPERIMENTS AND DISCUSSIONS
In this section, we perform ablation experiments to verify the contribution of the different data sources and of the environmental and physical factors. We also test different combinations of the components to find the optimal model, and we verify its effectiveness through a series of result visualizations. Finally, we demonstrate the state-of-the-art capability of the model by comparing it with traditional methods and previous studies.
4.1 Ablation experiments
To assess the efficacy of each component of the best model, we devise six models (M1-M6) for experimentation; the results are shown in Table 4. Among these models, SIAB refers to the Satellite Imagery Analysis Branch, PGB refers to the Physics-Guided Branch, and P represents the temperature and quantity information of the convective cores (CC). M1 serves as the baseline model, following the experimental findings of Tian et al. (2022): information from the three-channel satellite imagery is extracted using a 3D-CNN and enters the fully connected layer directly for intensity estimation, without passing through the attention module. In M2, we introduce the CFM into the model to further extract global channel information. Experimental results indicate a 2.7% improvement after incorporating channel attention, underscoring the ability of our attention module to learn inter-relationships among channels. In M3, the convective core image is integrated into the model to assist intensity estimation after feature extraction via a 2D-CNN. Compared with M1, performance improves by 6.9% after adding the convective core, emphasizing the significant role of a priori knowledge in aiding intensity estimation. To validate the efficacy of the SFM, we integrate this module into M4. Results indicate that while M3 successfully extracts convective core features, the SFM enables the model to capture finer spatial location features of the convective cores. Finally, we augment the network with information that is easily ignored or difficult for the model to learn: M5 introduces physical factors, and M6 incorporates environmental factors, which indicate basin information to some extent. The results reveal a further reduction in error with the addition of physical factors, indicating the influence of convective core count and temperature information on intensity estimation. Moreover, including latitude and longitude information yields a modest improvement in accuracy, indicating the model's recognition of regional variations in TCs.
4.2 Performance analysis
The proposed method of using convective cores to assist TC intensity estimation achieves strong results on the independent test set.
Figure 4a shows a scatter plot of the 2017 best-track intensities against the model-estimated intensities, with an RMSE of 8.13 kt. The red line represents the linear fit between the two intensities, y = 0.93x + 1.94, demonstrating the model's high accuracy on the independent test set. Figure 4b shows that most deviations cluster around 0, indicating that the model's estimates closely align with the actual values.
4.3 Analysis of the results of different types of TC intensity estimation
To assess the estimation effectiveness for different TC categories, we compute the RMSE and bias for each category, ranging from NC to H5, as shown in Table 5. The model exhibits distinct performance levels across the TC categories: it performs best for the TD category, while challenges arise as intensity increases toward H5, where rare and irregular TC variations lead to less accurate estimates. It is also evident that the RMSE increases with intensity. We attribute this trend to the uneven sample distribution, which leaves less training data for high-intensity TCs and therefore less accurate estimation.
For a more comprehensive visual representation, Fig. 5 shows the bias for each TC category using box plots, with categories ranging from NC to H5. The median bias, interquartile range, and distribution reveal that the model performs better in the lower TC categories, with biases close to 0, while the higher categories (H5 in particular) tend toward more negative biases, indicating an underestimation of intensity in stronger storms. The increasing spread in the higher categories also suggests greater variability in the predictions as TC intensity rises, highlighting room for model improvement in extreme cases.
4.4 Correlation analysis of physical factors
Our ablation experiments provide evidence that environmental and physical factors significantly impact intensity estimates. To assess the relationship between the incorporated factors and TC intensity, we calculate the correlation between TC intensity and Num, $CT_{max}$, and $CT_{min}$. The resulting correlation matrix (Fig. 6) offers insights into the relationships among these variables.
We calculate the correlation coefficients for both the training and test sets to show the influence of the different factors on TC estimation during model learning and independent testing. As illustrated in the figure, Num shows the highest correlation with TC intensity, confirming that the number of convective cores within a 135-km range has the strongest relationship with intensity, as observed in Knapp et al. (2011). A higher Num implies more convective activity, which correlates with increased TC intensity. In addition, $CT_{max}$ and $CT_{min}$ demonstrate a strong negative correlation with intensity, suggesting that lower convective core temperatures indicate deeper convection in the region and, consequently, higher TC intensity. For the positional features learned by the model, the distance between the convective cores and the TC center reflects the distribution of the cores around the TC: a closer convective core indicates a tighter TC organizational structure and, hence, stronger intensity. Meanwhile, the correlation patterns vary between the datasets, with the training and validation set exhibiting stronger correlations between these variables and intensity than the test set. We attribute this discrepancy to the larger sample size of the training set, which makes the correlations more evident. Despite the weaker correlations in the test set, our model, having learned the relationship between these factors and intensity during training, still performs effectively during testing.
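A minimal sketch of this correlation computation is given below; the arrays are illustrative placeholders, whereas in practice each entry would come from one sample's extracted convective-core features and best-track intensity:

```python
# Pearson correlations between intensity and the convective-core factors.
# All values below are placeholders for illustration only.
import numpy as np

intensity = np.array([30.0, 45.0, 70.0, 95.0, 120.0])    # best track (kt)
num_cores = np.array([4.0, 7.0, 12.0, 18.0, 23.0])       # Num per sample
ct_min = np.array([215.0, 208.0, 199.0, 192.0, 186.0])   # CTmin (K)

for name, factor in [("Num", num_cores), ("CTmin", ct_min)]:
    r = np.corrcoef(intensity, factor)[0, 1]
    print(f"corr(intensity, {name}) = {r:+.2f}")  # Num positive, CTmin negative
```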
4.5 Individual Case Analysis
To demonstrate the operational feasibility of our model, we select four 2017 TCs from different ocean basins for estimation. Specifically, HARVEY (the 9th TC in the Atlantic Ocean), JOSE (the 12th TC in the Atlantic Ocean), KENNETH (the 13th TC in the Eastern Pacific Ocean), and NORU (the 7th TC in the Western Pacific Ocean) are chosen to demonstrate the model's applicability across different times and geographic regions. The evolution of the four TCs is shown in Fig. 7.
To assess the model's performance, we compare the model-estimated intensities with those from the best track, ADT, and SATCON. We find that our model consistently outperforms the other methods throughout the entire life cycle of the TCs, demonstrating strong generalization ability. This underscores the efficacy of our model in providing accurate intensity estimates across diverse geographical and temporal contexts.
4.6 Comparison with existing methods
We compare the performance of our model (TCIE-Net) with traditional techniques and other recent deep learning models, all of which estimate intensity from satellite imagery (Table 6). From the table, it is clear that deep learning methods generally outperform traditional approaches. In terms of the data used, most studies employ multi-channel satellite data, and the effectiveness of these methods tends to improve with the number of channels, likely owing to the complementary information between channels. In terms of estimation accuracy, our model surpasses the recent models, highlighting its efficacy in TC intensity estimation and reinforcing its position as an advanced and effective tool in the field.
5 CONCLUSION
This study introduces TCIE-Net, a novel TC intensity estimation model incorporating physics-guided constraints. Building on previous studies, we introduce a priori knowledge of TC intensity (the convective core) into the model and transform this knowledge into physical factors to guide it. In terms of data processing, we use a 3D-CNN to extract features from the three-channel satellite cloud imagery, which is crucial for TC intensity estimation, because the 3D-CNN can effectively extract features across channels. Additionally, we introduce convective cores, derived from the IR imagery, as inputs to a second branch that guides the intensity estimation. The positional, quantity, and temperature information of the convective cores is closely related to TC intensity: the positional information reflects the tightness of the TC organization, the quantity information reflects the convective vigor of the TC, and the temperature information reflects, to some extent, the height of the TC cloud tops. We use a 2D-CNN to extract the location and structural information of the convection. The quantity and temperature features, which the model cannot access directly or learns only with difficulty, are added to the network as physical factors to further exploit the knowledge of the convective cores. Meanwhile, we utilize an attention mechanism to perform more refined feature extraction, further improving the model's ability to estimate TC intensity.
While our fixed retrieval range (135 km) for convective cores yields good results, the variability of the maximum wind radius of a TC at different times suggests room for improvement: a dynamic retrieval range based on the cloud system distribution in the satellite imagery could further improve the model's capability to estimate TC intensity. In future work, we plan to expand the set of relevant physical factors, creating datasets rich in physical information to guide TC intensity estimation. Such datasets would be a valuable resource for fostering collaborative research between data scientists and meteorologists, and joint efforts built on existing TC image intensity estimation datasets will propel advances in this field.
REFERENCES
Adler, R. F., and A. J. Negri, 1988: A satellite infrared technique to estimate tropical convective and stratiform rainfall. J. Appl. Meteor. Climatol., 27, 30-51, https://doi.org/10.1175/1520-0450(1988)027<0030:ASITTE>2.0.CO;2.
Bloemendaal, N., H. de Moel, I. M. Mol, P. R. Bosma, A. N. Polen, and J. M. Collins, 2021: Adequately reflecting the severity of tropical cyclones using the new tropical cyclone severity scale. Environ. Res. Lett., 16, 014048, https://doi.org/10.1088/1748-9326/abd131.
Chen, B., B. F. Chen, and H.-T. Lin, 2018: Rotation-blended CNNs on a new open dataset for tropical cyclone image-to-intensity regression. Proc. 24th ACM SIGKDD Int. Conf. on Knowledge Discovery & Data Mining, 90-99, https://doi.org/10.1145/3219819.3219926.
Chen, B. F., B. Chen, H.-T. Lin, and R. L. Elsberry, 2019: Estimating tropical cyclone intensity by satellite imagery utilizing convolutional neural networks. Wea. Forecasting, 34, 447-465, https://doi.org/10.1175/WAF-D-18-0136.1.
Combinido, J. S., J. R. Mendoza, and J. Aborot, 2018: A convolutional neural network approach for estimating tropical cyclone intensity using satellite-based infrared imagery. Proc. 24th Int. Conf. on Pattern Recognition (ICPR), https://doi.org/10.1109/ICPR.2018.8545593.
Dvorak, V. F., 1975: Tropical cyclone intensity analysis and forecasting from satellite imagery. Mon. Wea. Rev., 103, 420-430, https://doi.org/10.1175/1520-0493(1975)103<0420:TCIAAF>2.0.CO;2.
Herndon, D., C. Velden, J. Hawkins, T. Olander, and A. Wimmers, 2010: The CIMSS Satellite Consensus (SATCON) tropical cyclone intensity algorithm. Proc. 29th Conf. on Hurricanes and Trop. Meteorol.
Higa, M., and Coauthors, 2021: Domain knowledge integration into deep learning for typhoon intensity classification. Sci. Rep., 11, 12972, https://doi.org/10.1038/s41598-021-92286-w.
Huang, G., Z. Liu, L. van der Maaten, and K. Q. Weinberger, 2017: Densely connected convolutional networks. Proc. IEEE Conf. on Computer Vision and Pattern Recognition, 4700-4708, https://doi.org/10.1109/CVPR.2017.243.
Inamdar, A. K., and K. R. Knapp, 2015: Intercomparison of independent calibration techniques applied to the visible channel of the ISCCP B1 data. J. Atmos. Oceanic Technol., 32, 1225-1240, https://doi.org/10.1175/JTECH-D-14-00040.1.
Jiang, W., G. Hu, T. T. Wu, L. B. Liu, B. Kim, and Y. Q. Xiao, 2023: DMANet_KF: Tropical cyclone intensity estimation based on deep learning and Kalman filter from multi-spectral infrared imagery. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., 16, 4469-4483, https://doi.org/10.1109/JSTARS.2023.3273232.
Joyce, R. J., J. E. Janowiak, P. A. Arkin, and P. P. Xie, 2004: CMORPH: A method that produces global precipitation estimates from passive microwave and infrared data at high spatial and temporal resolution. J. Hydrometeor., 5, 487-503, https://doi.org/10.1175/1525-7541(2004)005<0487:CAMTPG>2.0.CO;2.
Knapp, K. R., and Coauthors, 2011: Globally gridded satellite observations for climate studies. Bull. Amer. Meteor. Soc., 92, 893-907, https://doi.org/10.1175/2011BAMS3039.1.
Krizhevsky, A., I. Sutskever, and G. E. Hinton, 2017: ImageNet classification with deep convolutional neural networks. Communications of the ACM, 60, 84-90, https://doi.org/10.1145/3065386.
Lee, J., J. Im, D.-H. Cha, H. Park, and S. Sim, 2020: Tropical cyclone intensity estimation using multi-dimensional convolutional neural networks from geostationary satellite data. Remote Sens., 12, 108, https://doi.org/10.3390/rs12010108.
Lee, Y.-J., D. Hall, Q. Liu, W.-W. Liao, and M.-C. Huang, 2021: Interpretable tropical cyclone intensity estimation using Dvorak-inspired machine learning techniques. Eng. Appl. Artif. Intell., 101, 104233, https://doi.org/10.1016/j.engappai.2021.104233.
Lu, X., and H. Yu, 2013: An objective tropical cyclone intensity estimation model based on digital IR satellite imagery. Trop. Cyclone Res. Rev., 2, 233-241, https://doi.org/10.6057/2013TCRR04.05.
Olander, T. L., and C. S. Velden, 2019: The Advanced Dvorak Technique (ADT) for estimating tropical cyclone intensity: Update and new capabilities. Wea. Forecasting, 34, 905-922, https://doi.org/10.1175/WAF-D-19-0007.1.
Pradhan, R., R. S. Aygun, M. Maskey, R. Ramachandran, and D. J. Cecil, 2017: Tropical cyclone intensity estimation using a deep convolutional neural network. IEEE Trans. Image Process., 27, 692-702, https://doi.org/10.1109/TIP.2017.2766338.
Ritchie, E. A., K. M. Wood, O. G. Rodríguez-Herrera, M. F. Piñeros, and J. S. Tyo, 2014: Satellite-derived tropical cyclone intensity in the North Pacific Ocean using the deviation-angle variance technique. Wea. Forecasting, 29, 505-516, https://doi.org/10.1175/WAF-D-13-00133.1.
Simonyan, K., and A. Zisserman, 2014: Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556, https://doi.org/10.48550/arXiv.1409.1556.
Tan, J., Q. Yang, J. Hu, Q. Huang, and S. Chen, 2022: Tropical cyclone intensity estimation using Himawari-8 satellite cloud products and deep learning. Remote Sens., 14, 812, https://doi.org/10.3390/rs14040812.
Tian, W., X. Zhou, W. Huang, Y. Zhang, P. Zhang, and S. Hao, 2022: Tropical cyclone intensity estimation using multidimensional convolutional neural network from multichannel satellite imagery. IEEE Geosci. Remote Sens. Lett., 19, 1-5, https://doi.org/10.1109/LGRS.2021.3134007.
Tian, W., X. Zhou, X. Niu, L. Lai, Y. Zhang, and K. T. C. L. K. Sian, 2023a: A lightweight multitask learning model with adaptive loss balance for tropical cyclone intensity and size estimation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens., 16, 1057-1071, https://doi.org/10.1109/JSTARS.2022.3225154.
Tian, W., L. Lai, X. Niu, X. Zhou, Y. Zhang, and K. T. C. Lim Kam Sian, 2023b: Estimating tropical cyclone intensity using dynamic balance convolutional neural network from satellite imagery. J. Appl. Remote Sens., 17, 024513, https://doi.org/10.1117/1.JRS.17.024513.
Tian, W., L. Lai, X. Niu, X. Zhou, Y. Zhang, and K. T. C. L. K. Sian, 2023c: Estimation of tropical cyclone intensity using multi-platform remote sensing and deep learning with environmental field information. Remote Sens., 15, 2085, https://doi.org/10.3390/rs15082085.
Wang, C., G. Zheng, X. Li, Q. Xu, B. Liu, and J. Zhang, 2022: Tropical cyclone intensity estimation from geostationary satellite imagery using deep convolutional neural networks. IEEE Trans. Geosci. Remote Sens., 60, 1-16, https://doi.org/10.1109/TGRS.2021.3066299.
Wang, Y., and C.-C. Wu, 2004: Current understanding of tropical cyclone structure and intensity changes: A review. Meteor. Atmos. Phys., 87, 257-278, https://doi.org/10.1007/s00703-003-0055-6.
Woo, S., J. Park, J.-Y. Lee, and I. S. Kweon, 2018: CBAM: Convolutional block attention module. Lecture Notes in Computer Science, 3-19, https://doi.org/10.1007/978-3-030-01234-2_1.
Xu, G., Y. Li, C. Ma, X. Li, Y. Ye, Q. Lin, Z. Huang, and S. Chen, 2023: TFG-Net: Tropical cyclone intensity estimation from a fine-grained perspective with the graph convolution neural network. Eng. Appl. Artif. Intell., 118, 105673, https://doi.org/10.1016/j.engappai.2022.105673.
Zeiler, M. D., and R. Fergus, 2014: Visualizing and understanding convolutional networks. Lecture Notes in Computer Science, 818-833, https://doi.org/10.1007/978-3-319-10590-1_53.
Zhou, Z., Y. Zhao, Y. Qing, W. Jiang, Y. Wu, and W. Chen, 2023: A physics-guided NN-based approach for tropical cyclone intensity estimation. Proc. 2023 SIAM Int. Conf. on Data Mining (SDM), 388-396, https://doi.org/10.1137/1.9781611977653.ch44.