Abstract: With the intensification of climate change, frequent short-duration heavy rainfall events exert significant impacts on human society and the natural environment. Traditional rainfall recognition methods show limitations when dealing with these events, including poor timeliness, inadequate handling of imbalanced data, and low accuracy. This paper proposes a method based on the CD-Pix2Pix model for inverting short-duration heavy rainfall events, aiming to improve inversion accuracy. The method integrates the attention mechanism network CSM-Net and the Dropblock module with a Bayesian-optimized loss function to improve the handling of imbalanced data and enhance overall performance. This study utilizes multi-source heterogeneous data, including radar composite reflectivity, FY-4B satellite data, and ground automatic station rainfall observations, with China Meteorological Administration Land Data Assimilation System (CLDAS) data as the target labels for the inversion task. Experimental results show that the enhanced method outperforms conventional rainfall inversion methods across multiple evaluation metrics, particularly demonstrating superior performance in Threat Score (TS, 0.495), Probability of Detection (POD, 0.857), and Missing Alarm Rate (MAR, 0.143).
Key words: short-duration heavy rainfall; inversion; CD-Pix2Pix
CLC number: P409 Document code: A
1 INTRODUCTION
With the intensification of global climate change, extreme weather events, particularly short-duration heavy rainfall, are increasing in both frequency and intensity, posing serious challenges to human society and the natural environment (Tabari 2020; Myhre et al. 2019). Such rainfall events can trigger natural disasters like floods and landslides, significantly impacting agricultural production, urban infrastructure, and daily life (Kendon et al. 2014; Trenberth 2011). In this context, accurately identifying heavy rainfall events using meteorological data, especially radar and satellite data, is crucial for mitigating the potential hazards of these extreme events (Westra et al. 2014).
Radar and satellite remote sensing technologies are two pillars of meteorological research, providing valuable data resources for inverting these extreme weather events (Hou et al. 2014; Fowler et al. 2007). Radar, by emitting and receiving electromagnetic waves, can provide high-resolution real-time information on rainfall intensity and movement (Wei and Hsu 2021). Satellite remote sensing, on the other hand, covers a broader geographical area, offering a variety of useful information such as cloud cover, water vapor content, and precipitation estimates (Habib et al. 2012). However, despite their value, existing methods still face significant challenges in heavy rainfall event inversion.
Traditional rainfall estimation methods primarily rely on empirical models and physical equations, which are inefficient and lack the precision required to accurately characterize short-duration heavy rainfall events (Hou et al. 2014; Huffman et al. 2014). For instance, radar-based rainfall estimation often depends on the Z-R relationship (the relationship between radar reflectivity and rainfall rate), but the applicability of this relationship can be limited by regional climatic variations, leading to significant estimation errors under certain conditions (Doviak and Zrnić 1993). For example, Peng et al. (2022) demonstrated that the standard Z-R relationship yielded substantial errors when applied to extreme weather events in Northern China. Additionally, the use of satellite data faces limitations in timeliness and accuracy, particularly for rainfall estimation at night or under cloud cover (Peng et al. 2022).
In recent years, the application of machine learning and deep learning techniques in the meteorological field has shown great potential in processing and analyzing large-scale meteorological data (Reichstein et al. 2019; Shi et al. 2017). Furthermore, Lazri et al. (2020) proposed a machine learning method based on a multi-classifier model, which significantly improved the accuracy of rainfall estimation from MSG satellite data in northern Algeria. However, the performance of such multi-classifier models can be substantially degraded when trained on insufficient or low-quality datasets, ultimately compromising their predictive accuracy.
Additionally, Moraux et al. (2021) proposed a multimodal and multitask deep learning model that utilizes satellite and rain gauge data to estimate instantaneous rainfall rates. This model, employing an encoder-decoder convolutional neural network architecture, excelled in rainfall detection and rainfall rate estimation. Nonetheless, due to its high complexity, the model demands significant computational resources and still has limitations in handling data imbalance.
The sudden and spatially localized nature of heavy rainfall makes it challenging for even state-of-the-art deep learning models to attain very high accuracy (Shi et al. 2015). Moreover, the training and validation processes of machine learning models usually require large amounts of accurately labeled meteorological data (Rasp et al. 2020; Schultz et al. 2021). For extreme events like short-duration heavy rainfall, acquiring high-quality data is relatively difficult, thereby intensifying the difficulties associated with robust model training. Additionally, the complexity and computational cost of these models limit their widespread application, especially in short-term weather scenarios requiring rapid response.
This study aims to address the aforementioned issues, particularly focusing on the performance deficiencies of machine learning models in heavy rainfall event inversion. To this end, we propose an improved deep learning model, CD-Pix2Pix, which integrates the attention mechanism network (CSM-Net) to enhance its ability to capture details of heavy rainfall events. It also incorporates the Dropblock module to prevent overfitting (Ghiasi et al. 2018; Zoph et al. 2018) and incorporates a Bayesian optimized loss function to improve the model's ability to handle imbalanced data, thereby specifically enhancing the inversion capability for short-duration heavy rainfall events.
2 DATA SOURCES
2.1 Multi-source heterogeneous data
This study utilizes composite reflectivity (CREF) data, which is crucial for characterizing the distribution and size of water droplets in the atmosphere. The CREF data, provided in NetCDF format, capture the intensity of radar wave reflections off raindrops, offering essential information for detecting and analyzing heavy rainfall events. The utilization of CREF data enables the model to effectively identify rainfall areas and estimate rainfall intensity.
The water vapor channel data from the FY-4B satellite is a significant input for this study. The FY-4B satellite, one of the latest second-generation geostationary meteorological satellites deployed by the China Meteorological Administration (CMA), represents a major advancement in China's meteorological satellite technology. Equipped with advanced sensors and higher performance detection equipment compared to its predecessors, the FY-4B provides more detailed and high-resolution meteorological data. Its water vapor channel employs advanced imaging technology to continuously (24/7) monitor cloud and atmospheric water vapor distribution, which is crucial for inverting extreme weather events such as short-duration heavy rainfall.
Additionally, this study leverages rainfall data from ground-based automatic weather stations as another key input. The rainfall data from these automatic stations provide precise information on rainfall intensity and the timing of rainfall events, aiding the model in accurately evaluating and analyzing rainfall events at the ground observation level. During the deep learning model training and the inversion of heavy rainfall events, the grid information from radar and satellite data is combined with the site data from ground automatic stations. As high-precision ground observation data, automatic station rainfall data mainly exist in the form of site data, recording the rainfall amount at specific geographical locations. Since automatic station data is point-based and radar and satellite data are gridded, we first align the time resolution by matching the automatic station data (1 hour) with the processed radar and satellite data to ensure temporal consistency across all data sources. For spatial alignment, Kriging interpolation is applied to map the station data onto the radar and satellite grid (0.01°×0.01°), estimating rainfall at each grid point. The nearest grid point to each station is then identified, thereby combining point-based station data with gridded radar and satellite data. This method ensures spatial and temporal consistency across diverse data sources for model training and inversion tasks.
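The paper does not include its preprocessing code, so the following Python sketch illustrates one plausible implementation of the Kriging step using the pykrige package; the station values, the spherical variogram model, and all variable names are assumptions for illustration only.

```python
# Illustrative sketch (not the authors' code): interpolating point-based
# station rainfall onto the 0.01 deg x 0.01 deg radar/satellite grid with
# ordinary Kriging via pykrige. Station values are toy numbers.
import numpy as np
from pykrige.ok import OrdinaryKriging

# Hypothetical hourly station observations: lon (deg E), lat (deg N), rain (mm)
stn_lon = np.array([114.2, 116.8, 119.5, 121.1])
stn_lat = np.array([22.4, 25.1, 28.7, 30.2])
stn_rain = np.array([0.0, 3.2, 18.5, 42.0])

# Target grid covering the study region (113-123 deg E, 20-32 deg N) at 0.01 deg
grid_lon = np.arange(113.0, 123.01, 0.01)
grid_lat = np.arange(20.0, 32.01, 0.01)

ok = OrdinaryKriging(stn_lon, stn_lat, stn_rain, variogram_model="spherical")
rain_grid, variance = ok.execute("grid", grid_lon, grid_lat)  # (lat, lon) field
```

In practice, the interpolated field would then be matched grid point by grid point against the radar and satellite grids, with the nearest grid point retained for each station as described above.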
This study adopts high-quality rainfall data provided by the China Meteorological Administration Land Data Assimilation System (CLDAS) as labels to guide the training of the radar- and satellite-based heavy rainfall estimation model. The CLDAS system utilizes extensive ground and satellite observation resources and advanced data assimilation techniques to generate rainfall estimates at high spatial and temporal resolution. These data provide important information for analyzing rainfall events, detailing rainfall intensity, spatial distribution, and duration. It is noteworthy that CLDAS products usually carry a certain delay, with the specific delay time depending on the data processing and release procedures; typically, the update delay for CLDAS rainfall data is about 2-3 hours. The CD-Pix2Pix model developed in this study, however, is capable of generating accurate rainfall products within 7 minutes (Fig. 1), ensuring timely estimation of heavy rainfall events, which is critical for operational nowcasting and emergency response.
The data used in this study are normalized using min-max normalization:

$$x' = \frac{x - x_{\min}}{x_{\max} - x_{\min}} \quad (1)$$

where $x_{\min}$ and $x_{\max}$ represent the minimum and maximum values in the dataset, respectively. This normalization scales the data to the range [0, 1], ensuring consistent value ranges across different types of data. Additionally, radar, satellite, and CLDAS data are processed using bilinear interpolation to achieve a uniform spatial resolution of 0.01°×0.01°, with a time resolution of 1 hour. This processing ensures the data are consistently scaled and aligned for model training. The study region spans 113° to 123°E and 20° to 32°N, as shown in Fig. 2. This region is well suited to studying heavy rainfall because of its complex climate and terrain; it frequently experiences heavy rainfall events triggered by monsoons, typhoons, and local topographic effects.
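As a minimal illustration of the preprocessing just described, the sketch below applies Eq. (1) and a bilinear regridding step in Python; scipy.ndimage.zoom is used for the bilinear interpolation, and the array shapes are assumptions.

```python
# Sketch of the preprocessing: min-max normalization (Eq. 1) and bilinear
# resampling to the common 0.01 deg grid. Shapes and field names are assumed.
import numpy as np
from scipy.ndimage import zoom

def min_max_normalize(x: np.ndarray) -> np.ndarray:
    """Eq. (1): x' = (x - x_min) / (x_max - x_min), scaling to [0, 1]."""
    x_min, x_max = np.nanmin(x), np.nanmax(x)
    return (x - x_min) / (x_max - x_min + 1e-8)  # epsilon guards constant fields

def regrid_bilinear(field: np.ndarray, target_shape: tuple) -> np.ndarray:
    """Bilinear (order=1) interpolation onto the target grid shape."""
    factors = (target_shape[0] / field.shape[0],
               target_shape[1] / field.shape[1])
    return zoom(field, factors, order=1)

cref = np.random.rand(600, 500) * 70.0              # hypothetical radar CREF (dBZ)
cref_on_grid = regrid_bilinear(cref, (1200, 1000))  # 0.01 deg study-region grid
cref_norm = min_max_normalize(cref_on_grid)
```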
2.2 Dataset construction
The training dataset consists of data from the years 2022 and 2023. We use stratified sampling to ensure that the distribution of heavy rainfall events and other weather conditions is consistent across the training, validation, and test sets. The stratification is based on the intensity of rainfall events and seasonal variations to ensure a balanced representation of these key meteorological conditions. This strategy ensures that the model encounters diverse rainfall patterns during training, thereby improving its generalization capability across various weather scenarios. We allocate 70% of the data for training (6051 instances), 20% for validation (1729 instances), and the remaining 10% (865 instances) for testing. The training set is used to train the model and to identify the characteristics of heavy rainfall events. The validation set is used during training to adjust model parameters and select the best-performing model, ensuring it accurately captures complex rainfall patterns and avoids overfitting; it is fixed at the outset and remains unchanged throughout training. To prevent data leakage, the training, validation, and test sets are strictly partitioned with no temporal or spatial overlap. The test set is used to evaluate the model's performance in real-world scenarios, verifying its ability to generalize and to invert previously unseen instances accurately. This data allocation strategy ensures the model's stability and reliability in handling heavy rainfall event inversion under various temporal and seasonal conditions.
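A hedged sketch of the 70/20/10 stratified split, using scikit-learn, is shown below; the strata labels (rain-intensity grade crossed with season) are an assumption about how the stratification could be encoded. Note that a purely random stratified split would not by itself guarantee the temporal separation described above, so in practice the grouping would be done at the event level first.

```python
# Sketch of the 70/20/10 stratified split. `samples` is a list of cases and
# `strata` a parallel list of labels such as "heavy_summer"; both are assumed.
from sklearn.model_selection import train_test_split

def split_dataset(samples, strata, seed=42):
    # 70% training; the remaining 30% is split 2:1 into validation and test
    train, rest, _, strata_rest = train_test_split(
        samples, strata, train_size=0.70, stratify=strata, random_state=seed)
    val, test = train_test_split(
        rest, train_size=2 / 3, stratify=strata_rest, random_state=seed)
    return train, val, test
```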
3 CD-PIX2PIX MODEL
3.1 CSM-Net
CSM-Net is an advanced attention mechanism network introduced in this study to enhance the representation of meteorological image features. It combines Multi-Head Attention, Channel Attention, and Spatial Attention mechanisms to improve the model's performance in processing meteorological images.
Multi-Head Attention Mechanism captures the relationships between different feature dimensions using multiple parallel attention heads, which enhances the model's ability to understand complex meteorological patterns. It characterizes rainfall features across multiple scales and directions during the inversion of heavy rainfall events, thus improving the accuracy of identifying heavy rainfall regions.
Channel Attention Mechanism boosts the model's feature representation by learning the importance of different channels. It emphasizes crucial channel information that reflects rainfall intensity and distribution when processing heavy rainfall images, enabling the model to detect changes in rainfall details more accurately.
Spatial Attention Mechanism selectively emphasizes the most critical positions in the image, enhancing the model's ability to recognize rainfall areas. In heavy rainfall inversion tasks, it targets areas with the highest rainfall concentration, reducing background noise interference and improving the accuracy of inversion results.
To visually illustrate the structure and function of CSM-Net, Fig. 3 presents its network architecture. This network optimizes the image-processing workflow and enhances the capability of the CD-Pix2Pix model to accurately translate input images to target images. By incorporating multi-head attention, channel attention, and spatial attention mechanisms, CSM-Net captures the multi-scale features of heavy rainfall events. It identifies subtle changes in rainfall intensity and distribution, highlights significant rainfall features, enhances the model's sensitivity to heavy rainfall regions, and reduces background noise interference. Consequently, these improvements lead to more accurate inversion results.
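To make the three mechanisms concrete, the following PyTorch sketch shows one plausible arrangement of a CSM-style block; the reduction ratio, kernel size, number of heads, and residual wiring are not specified in the paper and are assumptions here.

```python
# Minimal PyTorch sketch of the three attention components combined in CSM-Net.
# Hyperparameter values and exact wiring are illustrative assumptions.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Reweights channels by global importance (Sec. 3.1)."""
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())
    def forward(self, x):
        return x * self.mlp(x).unsqueeze(-1).unsqueeze(-1)

class SpatialAttention(nn.Module):
    """Highlights the most informative spatial positions (rainfall cores)."""
    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)
    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)          # channel-average map
        mx, _ = x.max(dim=1, keepdim=True)         # channel-max map
        return x * torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))

class CSMBlock(nn.Module):
    """Multi-head self-attention over spatial tokens, then channel and
    spatial attention, mirroring the CSM-Net description."""
    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.mha = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.ca, self.sa = ChannelAttention(channels), SpatialAttention()
    def forward(self, x):
        b, c, h, w = x.shape
        seq = x.flatten(2).transpose(1, 2)         # (B, H*W, C) token sequence
        attn, _ = self.mha(seq, seq, seq)
        x = x + attn.transpose(1, 2).reshape(b, c, h, w)  # residual connection
        return self.sa(self.ca(x))
```

Because self-attention over all H×W tokens is memory-hungry, such a block would plausibly be applied at the coarser encoder levels.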
3.2 Dropblock
To improve the robustness and generalization of the CD-Pix2Pix model in processing meteorological images, especially for identifying heavy rainfall events, the model integrates Dropblock technology into its generator architecture. Dropblock is a regularization technique that mitigates overfitting and boosts generalization on unseen data by temporarily removing contiguous blocks of neurons during training. This encourages the model to learn more generalized feature representations by preventing reliance on specific neuron activations. Applying Dropblock in the CD-Pix2Pix model helps the generator handle noise and anomalies present in meteorological images, thereby improving the model's robustness to extreme rainfall events. By strategically integrating Dropblock into the model's highly complex encoder-decoder structure, the proposed model retains critical meteorological features while suppressing overfitting.
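The paper does not give the Dropblock settings, so the sketch below shows a generic DropBlock layer in PyTorch with assumed drop probability and block size; recent torchvision releases also ship a ready-made implementation.

```python
# Generic DropBlock2d sketch (after Ghiasi et al. 2018). drop_prob and
# block_size are assumptions; the paper does not state the values used.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DropBlock2d(nn.Module):
    def __init__(self, drop_prob: float = 0.1, block_size: int = 7):
        super().__init__()
        self.drop_prob, self.block_size = drop_prob, block_size
    def forward(self, x):
        if not self.training or self.drop_prob == 0.0:
            return x
        # Seed probability chosen so the expected dropped area ~ drop_prob
        gamma = self.drop_prob / (self.block_size ** 2)
        seeds = (torch.rand_like(x) < gamma).float()
        # Expand each seed into a block_size x block_size square
        mask = F.max_pool2d(seeds, self.block_size, stride=1,
                            padding=self.block_size // 2)
        keep = 1.0 - mask
        # Rescale so activations keep the same expected magnitude
        return x * keep * keep.numel() / keep.sum().clamp(min=1.0)
```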
3.3 CD-Pix2Pix model
To visually illustrate the integration of these enhancements into the CD-Pix2Pix model, Fig. 4 presents the improved model architecture. The CD-Pix2Pix model builds upon the standard Pix2Pix framework and includes several key improvements. It comprises two main components: a generator and a discriminator. The generator takes input data and produces rainfall images that closely resemble the actual rainfall fields. The discriminator evaluates the generated images to determine whether they are real or created by the generator. This adversarial training process drives the generator to continuously enhance the quality of the rainfall images it produces.
In the generator, CSM-Net is strategically placed after the first dual-convolutional layer and after each encoder module. This arrangement amplifies the impact of important features in the radar composite reflectivity and FY-4B satellite water vapor channels while suppressing less important ones, optimizing the utilization of features extracted from the original images throughout the network. As a result, the model more effectively separates meteorologically significant signals from noise and preserves these enhanced features along the upsampling path, improving the accuracy of heavy rainfall inversion. CSM-Net thus increases the model's sensitivity to important features and the efficiency of feature extraction, while Dropblock strengthens the model's generalization ability and prevents overfitting.
4 LOSS FUNCTION
In the task of inverting heavy rainfall events, the design of the loss function is pivotal to the model's final performance. The CD-Pix2Pix model uses discriminator loss and generator loss functions. The discriminator is responsible for distinguishing between generated data and real data, while the generator focuses on producing realistic heavy rainfall inversion results and addressing the issue of class imbalance in the dataset.
4.1 Discriminator loss function
In the CD-Pix2Pix model, the task of the discriminator is to distinguish between generated data and real data. To achieve this, the discriminator aims to maximize the accuracy of identifying real data while minimizing the accuracy of identifying generated data. The steps for the discriminator loss function are as follows:
$$\mathcal{L}_{\text{real}} = \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big] \quad (2)$$

$$\mathcal{L}_{\text{fake}} = \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big] \quad (3)$$

$$\mathcal{L}_D = -\big(\mathcal{L}_{\text{real}} + \mathcal{L}_{\text{fake}}\big) \quad (4)$$
Eq. (2) shows that the discriminator aims to maximize the output probability $D(x)$ for real data $x$, i.e., to classify real data as accurately as possible; $\mathbb{E}_{x \sim p_{\text{data}}(x)}$ denotes the expectation over the real data distribution. Eq. (3) shows that the discriminator is designed to minimize the output probability $D(G(z))$ for generated data $G(z)$, i.e., to classify generated data as fake; $D(G(z))$ is the probability that generated data is classified as real, and $\mathbb{E}_{z \sim p_z(z)}$ is the expectation over the noise distribution used by the generator. Eq. (4) summarizes the objective: the discriminator's total loss combines maximizing the correct classification of real data and minimizing the misclassification of generated data. By minimizing this loss function, the discriminator improves its ability to distinguish between real and generated data, enhancing the overall performance of the model.
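In implementation terms, Eqs. (2)-(4) reduce to binary cross-entropy on the discriminator outputs. The sketch below is a minimal PyTorch rendering under that standard formulation; `D`, `real`, and `fake` are placeholders for the discriminator and data tensors.

```python
# Sketch of the discriminator objective in Eqs. (2)-(4) as binary
# cross-entropy on logits. D, real, and fake are placeholders.
import torch
import torch.nn.functional as F

def discriminator_loss(D, real, fake):
    pred_real = D(real)            # D(x) for real rainfall fields
    pred_fake = D(fake.detach())   # D(G(z)); detach freezes the generator
    # Eq. (2): push D(x) toward 1 on real data
    loss_real = F.binary_cross_entropy_with_logits(
        pred_real, torch.ones_like(pred_real))
    # Eq. (3): push D(G(z)) toward 0 on generated data
    loss_fake = F.binary_cross_entropy_with_logits(
        pred_fake, torch.zeros_like(pred_fake))
    return loss_real + loss_fake   # Eq. (4)
```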
4.2 Generator loss function
In the CD-Pix2Pix model, the generator's main task is to generate realistic heavy rainfall inversion results and address the class imbalance in the dataset. The generator loss function consists of two main components: adversarial loss and binary focal loss.
4.2.1 ADVERSARIAL LOSS
Adversarial loss is the core component of the generator's loss function; it drives the generator to produce rainfall fields realistic enough to deceive the discriminator. The generator's goal is to maximize the probability that the discriminator assigns to the generated data $G(z)$, making it difficult for the discriminator to distinguish between real and generated data. The adversarial loss for the generator is expressed as:
$$\mathcal{L}_{\text{adv}} = -\mathbb{E}_{z \sim p_z(z)}\big[\log D(G(z))\big] \quad (5)$$
By minimizing this loss, the generator continually improves and produces more realistic rainfall inversion results, thereby deceiving the discriminator.
4.2.2 BINARY FOCAL LOSS
In the inversion of heavy rainfall events, class imbalance is a significant problem because such events are relatively rare in the dataset, while light rain and no rain data are more abundant. This class imbalance leads to insufficient recognition ability for heavy rainfall events during training, affecting the accuracy of the final inversion results. The Binary Focal Loss is designed to mitigate the common issue of class imbalance in deep learning. It modifies the standard cross-entropy loss to enhance the model's ability to learn from minority class samples and reduce the excessive focus on majority class samples. The core idea of this loss function is to assign an adjustment factor to each sample, which decreases as the probability of the sample being correctly classified increases. Thus, the model tends to focus more on samples that are difficult to classify correctly. The specific expression is as follows:
$$\mathcal{L}_{\text{focal}} = -\alpha_t (1 - p_t)^{\gamma} \log(p_t) \quad (6)$$
where $p_t$ is the model's predicted probability for positive samples; for negative samples, the corresponding probability is $1 - p_t$. $\alpha_t$ is a hyperparameter used to balance the weights of positive and negative samples, which can be adjusted according to the specific conditions of the dataset. $\gamma$ is the focusing parameter that controls the strength of the adjustment factor; a higher $\gamma$ makes the model focus more on samples that are difficult to classify correctly.
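A direct translation of Eq. (6) into PyTorch might look as follows; the default $\alpha_t$ and $\gamma$ values are placeholders, since the actual values are chosen by the Bayesian optimization in Sec. 5.3.

```python
# Sketch of the binary focal loss in Eq. (6). Default hyperparameter values
# are illustrative; the paper tunes them with Bayesian optimization.
import torch

def binary_focal_loss(pred_prob, target, alpha_t=0.8, gamma=2.0, eps=1e-7):
    """Eq. (6): L = -alpha_t * (1 - p_t)^gamma * log(p_t).

    pred_prob: predicted probability of the positive (heavy-rain) class.
    target:    float tensor, 1.0 for heavy-rain pixels and 0.0 otherwise.
    """
    p_t = pred_prob * target + (1.0 - pred_prob) * (1.0 - target)
    alpha = alpha_t * target + (1.0 - alpha_t) * (1.0 - target)
    loss = -alpha * (1.0 - p_t) ** gamma * torch.log(p_t.clamp(min=eps))
    return loss.mean()
```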
4.2.3 COMBINED LOSS FUNCTION
The total loss function of the generator is the sum of the adversarial loss and the binary focal loss, as follows:
$$\mathcal{L}_G = \mathcal{L}_{\text{adv}} + \mathcal{L}_{\text{focal}} \quad (7)$$
By directly combining the adversarial loss and the binary focal loss, the generator is able to generate realistic rainfall inversion results while addressing class imbalance issues, thereby improving the recognition of heavy rainfall events.
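Combining the two terms, a sketch of the generator objective of Eq. (7) follows; it reuses `binary_focal_loss` from the previous sketch, and `rain_prob` (the per-pixel probability of exceeding the heavy-rain threshold) is a hypothetical input, since the paper does not specify how the continuous rainfall field is mapped to positive-class probabilities.

```python
# Sketch of the combined generator loss in Eq. (7): adversarial term (Eq. 5)
# plus binary focal term (Eq. 6), summed directly as described in the text.
import torch
import torch.nn.functional as F

def generator_loss(D, fake, rain_prob, target_mask, alpha_t=0.8, gamma=2.0):
    pred_fake = D(fake)
    # Eq. (5): reward the generator when D classifies its output as real
    adv = F.binary_cross_entropy_with_logits(
        pred_fake, torch.ones_like(pred_fake))
    # Eq. (6): focal term on the (hypothetical) heavy-rain exceedance mask
    focal = binary_focal_loss(rain_prob, target_mask, alpha_t, gamma)
    return adv + focal  # Eq. (7)
```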
5 EXPERIMENTAL DESIGN AND RESULTS
5.1 Experimental environment and model training
This study aims to significantly enhance the inversion accuracy of heavy rainfall events by constructing a deep learning model. To fully utilize computational resources and improve training efficiency, the batch size of the model was set to 16, and the Adam optimizer was chosen with an initial learning rate of 0.0001. Considering the risk of overfitting and to ensure the model is saved in its optimal state, an early stopping mechanism was adopted, where training is terminated and the best-performing model weights are saved if the model's performance on the validation set does not improve for 20 consecutive epochs.
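A minimal sketch of this training configuration is shown below; the model, data loaders, loss function, and the `evaluate` callback returning the validation score are placeholders, and the maximum epoch count is an assumption.

```python
# Sketch of the stated training setup: batch size 16 (in the loaders), Adam
# with lr=1e-4, and early stopping with patience 20 on the validation score.
import copy
import torch

def train(model, train_loader, val_loader, loss_fn, evaluate,
          max_epochs=500, patience=20):
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    best_score, best_state, stale = float("-inf"), None, 0
    for epoch in range(max_epochs):
        model.train()
        for x, y in train_loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
        score = evaluate(model, val_loader)   # e.g., validation TS score
        if score > best_score:                # save the best-performing weights
            best_score, stale = score, 0
            best_state = copy.deepcopy(model.state_dict())
        else:
            stale += 1
            if stale >= patience:             # stop after 20 stale epochs
                break
    model.load_state_dict(best_state)
    return model
```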
When evaluating the model's performance in inverting heavy rainfall events, this study focuses on key metrics such as the TS score, POD, and MAR. Understanding the specific definitions of rainfall amount and intensity is a prerequisite for a thorough comprehension of the model evaluation results. Table 1 details the different grades of rainfall amount and their corresponding rainfall intensities. These standards quantify and classify the severity of rainfall events, furnishing explicit benchmarks for the evaluation of model performance.
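For reference, the three categorical scores can be computed from a contingency table at a given threshold, as in the sketch below (25 mm for heavy rain, 50 mm for rainstorm); the small epsilon guarding empty categories is an implementation detail added here.

```python
# Sketch of the verification scores used in Sec. 5, from a contingency table.
import numpy as np

def categorical_scores(pred, obs, threshold, eps=1e-8):
    hit = np.sum((pred >= threshold) & (obs >= threshold))
    miss = np.sum((pred < threshold) & (obs >= threshold))
    false_alarm = np.sum((pred >= threshold) & (obs < threshold))
    ts = hit / (hit + miss + false_alarm + eps)   # Threat Score
    pod = hit / (hit + miss + eps)                # Probability of Detection
    mar = miss / (hit + miss + eps)               # Missing Alarm Rate = 1 - POD
    return ts, pod, mar
```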
5.2 Performance evaluation of model architecture
To assess the performance of the proposed CD-Pix2Pix model in the inversion of heavy rainfall events, this study uses TS and POD as primary evaluation metrics. To thoroughly understand the improvements of the proposed model, it is compared with the original Pix2PixGAN model, the enhanced Pix2Pix of Qu et al. (2019), SmaAt-UNet (Trebing et al. 2021), and R2-Unet (Moustafa et al. 2021). Table 2 presents the performance of each model under the different evaluation metrics, demonstrating the advantages and improvements of the proposed model over the others.
By comparing the data in Table 2, it can be observed that the improved Pix2PixGAN model proposed in this study shows significant advantages in the inversion of heavy rain (25 mm and above) and rainstorm (50 mm and above) events. Specifically, the proposed CD-Pix2Pix model achieved TS-25 and POD-25 scores of 0.495 and 0.857, respectively, for the inversion of heavy rain events, and TS-50 and POD-50 scores of 0.426 and 0.774, respectively, for the inversion of rainstorm events. Regarding the MAR metric, the CD-Pix2Pix model showed MAR-25 and MAR-50 scores of 0.143 and 0.226, respectively, for heavy rain and rainstorm events, which are significantly lower than those of other models. This indicates that the CD-Pix2Pix model omits a smaller proportion of actual rainfall events, resulting in more accurate and reliable inversion results. Collectively, the improved Pix2PixGAN model proposed in this study performs substantially better than the comparison models in the inversion of heavy rain and rainstorm events.
5.3 Hyperparameter tuning based on Bayesian optimization
Directly setting parameters may not fully account for the model's performance under different data distributions and can easily lead to unstable model performance. To fully utilize the model's capability in handling complex meteorological data, this study adopts a dynamic parameter adjustment strategy based on Bayesian optimization. Bayesian optimization efficiently searches the parameter space within limited computational resources to find the optimal parameters for the binary focal loss, specifically the weight $\alpha$ (search range: 0.6 to 1) and the focusing parameter $\gamma$. The Bayesian optimization process is limited to 30 iterations to ensure thorough exploration of the search space without excessively consuming computational resources. By employing a Gaussian process model to represent the loss function's performance, Bayesian optimization effectively captures variations in model performance under different parameter settings, guiding the search towards the optimal parameter combination.
First, the objective function is defined. The goal of Bayesian optimization is to find the focal loss hyperparameters $\alpha$ and $\gamma$ that maximize the TS score:
$$(\alpha^*, \gamma^*) = \arg\max_{\alpha,\, \gamma} \mathrm{TS}(\alpha, \gamma) \quad (8)$$
Next, a Gaussian Process (GP) is used to represent the distribution of the objective function. Let $f(x)$ be the TS score under the parameters $x = (\alpha, \gamma)$; the GP approximates the model's performance with a mean function $\mu(x)$ and covariance function $k(x, x')$:
$$f(x) \sim \mathcal{GP}\big(\mu(x),\, k(x, x')\big) \quad (9)$$
Then, expected improvement (EI) guides Bayesian optimization in selecting the next hyperparameter point. It is calculated using the current best score $f(x^*)$ as follows:
$$\mathrm{EI}(x) = \mathbb{E}\big[\max\big(f(x) - f(x^*),\, 0\big)\big] \quad (10)$$
By maximizing EI(x), Bayesian optimization selects the next point for evaluation. Then, after observing the TS score of the new hyperparameter combination, the Gaussian Process model is updated:
$$f \mid \mathcal{D}_{t+1} \sim \mathcal{GP}\big(\mu_{t+1}(x),\, k_{t+1}(x, x')\big), \qquad \mathcal{D}_{t+1} = \mathcal{D}_t \cup \big\{\big(x_{t+1}, f(x_{t+1})\big)\big\} \quad (11)$$
This process is repeated until the stopping condition is met. Finally, the hyperparameters $\alpha^*$ and $\gamma^*$ that maximize the TS score are identified.
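In practice this loop maps directly onto off-the-shelf optimizers. The sketch below uses scikit-optimize's GP-based `gp_minimize` with the stated 30-call budget and EI acquisition; the $\gamma$ search range and the dummy `train_and_validate` objective are assumptions standing in for a full training run.

```python
# Sketch of the Bayesian search over (alpha, gamma) with scikit-optimize.
# The alpha range (0.6-1.0) is from the text; the gamma range is assumed.
from skopt import gp_minimize
from skopt.space import Real

space = [Real(0.6, 1.0, name="alpha"),   # focal-loss weight
         Real(0.5, 5.0, name="gamma")]   # focusing parameter (assumed range)

def train_and_validate(alpha, gamma):
    # Placeholder: train CD-Pix2Pix with these settings and return the
    # validation TS score. A smooth dummy surface is used here so the
    # sketch runs standalone.
    return 1.0 - (alpha - 0.8) ** 2 - 0.05 * (gamma - 2.0) ** 2

def objective(params):
    alpha, gamma = params
    return -train_and_validate(alpha, gamma)  # gp_minimize minimizes, so negate

result = gp_minimize(objective, space, n_calls=30,   # 30-iteration budget
                     acq_func="EI", random_state=0)  # EI acquisition, Eq. (10)
best_alpha, best_gamma = result.x
```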
5.4 Case study analysis
To further illustrate the advantages of the proposed CD-Pix2Pix model over the comparison models in the inversion of heavy rainfall events, this study selected representative inversion results for detailed visualization comparison analysis. These cases were selected for their typicality and extremity, which illustrate the performance differences of the models in handling rainfall events of varying intensities. These cases not only cover different grades of rainfall, such as heavy rain and rainstorms, but also represent rainfall conditions across different geographical regions and meteorological conditions. This helps in comprehensively evaluating the model's generalization ability and adaptability. The comparison of inversion images between the improved model and other models is shown in Fig. 5.
The comparison in Fig. 5, which shows the heavy rainfall event of June 26, 2023, in Southeast China, demonstrates that the CD-Pix2Pix model outperforms the other models in several key aspects, particularly in accurately capturing the boundaries of heavy rainfall events. During this time, Southeast China was in its summer rainy season, and the stationary Meiyu front caused an interaction between warm, moist air and cold air, leading to widespread rainfall. Frequent convective weather resulted in intense local rainfall and short-duration heavy downpours. The CD-Pix2Pix model produces more precise and detailed representations of rainfall patterns, especially in edge areas where other models tend to overestimate the extent of rainfall. Unlike the broader and less distinct rainfall regions generated by other models, CD-Pix2Pix provides more focused results, minimizing false detections and yielding more reliable and refined inversion outcomes.
As shown in Fig. 6, which represents the rainfall event in April 2024, the CD-Pix2Pix model aligns well with the labeled rainfall data. During this period, Southeast China was in a transitional season characterized by frequent alternations between warm and cold air masses, typical of spring. Such conditions often lead to unstable weather patterns and precipitation events, providing an ideal scenario for evaluating the model's performance. For rainfall ≥25 mm, the model achieves a POD of 0.89 and a MAR of 0.11. For rainfall ≥50 mm, the POD is 0.81 and the MAR is 0.19, indicating reasonable performance in identifying heavier rainfall events. Fig. 7, also from April 2024, further supports these findings, showing consistent performance across different rainfall intensities, confirming the model's ability to capture significant rainfall patterns.
6 CONCLUSION
This paper presents a method for inverting heavy rainfall events using the CD-Pix2Pix model. By integrating the CSM-Net module throughout the generator and refining the loss function, this study improves the model's accuracy and robustness in processing complex meteorological data. The main conclusions are as follows. First, the CD-Pix2Pix model with its optimized loss function significantly enhances the ability to capture essential meteorological features. The CSM-Net module employs multi-head, channel, and spatial attention mechanisms to fine-tune the model's response to critical features, thereby strengthening its capacity to identify key elements in heavy rainfall events. Furthermore, Dropblock technology mitigates overfitting by randomly "dropping" blocks of neurons during training, which improves the model's generalization to unseen data and its robustness in handling complex meteorological conditions. Finally, this study highlights the considerable potential of deep learning models, particularly the CD-Pix2Pix model, for the inversion of heavy rainfall events in meteorological applications.
Despite these improvements, the CD-Pix2Pix model's development is still constrained by the limited availability of labeled heavy rainfall data, limiting further enhancements in accuracy and generalization. Future work should focus on exploring more efficient data augmentation techniques to mitigate data scarcity, as well as incorporating semi-supervised learning to leverage both labeled and unlabeled data. Improving the model's performance on imbalanced datasets and its ability to handle extreme weather conditions is essential for broader meteorological applications.
REFERENCES
Doviak, R. J., and D. S. Zrnić, 1993: Doppler Radar and Weather Observations. Academic Press, 562 pp.
Fowler, H. J., S. Blenkinsop, and C. Tebaldi, 2007: Linking climate change modelling to impacts studies: Recent advances in downscaling techniques for hydrological modelling. Int. J. Climatol., 27, 1547-1578, https://doi.org/10.1002/joc.1556.
Ghiasi, G., T.-Y. Lin, and Q. V. Le, 2018: DropBlock: A regularization method for convolutional networks. Proc. Adv. Neural Inf. Process. Syst., 31, 10750-10760.
Habib, E., D. Haile, Y. S. Gebremichael, T. Dinku, Y. T. Assefa, and D. Tadesse, 2012: Evaluation of the high-resolution CMORPH satellite rainfall product using dense rain gauge observations and radar-based estimates. J. Hydrometeor., 13, 1784-1798, https://doi.org/10.1175/JHM-D-12-017.1.
Hou, A. Y., and Coauthors, 2014: The global precipitation measurement mission. Bull. Amer. Meteor. Soc., 95, 701-722, https://doi.org/10.1175/BAMS-D-13-00164.1.
Huffman, G. J., R. F. Adler, D. T. Bolvin, G. Gu, E. J. Nelkin, K. P. Bowman, Y. Hong, E. F. Stocker, and D. B. Wolff, 2007: The TRMM multisatellite precipitation analysis (TMPA): Quasi-global, multiyear, combined-sensor precipitation estimates at fine scales. J. Hydrometeor., 8, 38-55, https://doi.org/10.1175/JHM560.1.
Kendon, E. J., N. M. Roberts, H. J. Fowler, M. J. Roberts, S. C. Chan, and C. A. Senior, 2014: Heavier summer downpours with climate change revealed by weather forecast resolution model. Nat. Climate Change, 4, 570-576, https://doi.org/10.1038/nclimate2258.
Lazri, M., K. Labadi, J.-M. Brucker, and A. Soltane, 2020: Improving satellite rainfall estimation from MSG data in Northern Algeria by using a multi-classifier model based on machine learning. J. Hydrol., 584, 124705, https://doi.org/10.1016/j.jhydrol.2020.124705.
Moraux, A., S. Dewitte, B. Cornelis, and A. Munteanu, 2021: A deep learning multimodal method for precipitation estimation. Remote Sens., 13, 3278, https://doi.org/10.3390/rs13163278.
Moustafa, M. S., S. A. Mohamed, S. A. Sayed, and A. H. Nasr, 2021: Hyperspectral change detection based on modification of UNet neural networks. J. Appl. Remote Sens., 15, 028505, https://doi.org/10.1117/1.JRS.15.028505.
Myhre, G., and Coauthors, 2019: Frequency of extreme precipitation increases extensively with event rareness under global warming. Sci. Rep., 9, 16063, https://doi.org/10.1038/s41598-019-52277-4.
Peng, W., S. Bao, K. Yang, J. Wei, X. Zhu, Z. Qiao, Y. Wang, and Q. Li, 2022: Radar quantitative precipitation estimation algorithm based on precipitation classification and dynamical Z-R relationship. Water, 14, 3436, https://doi.org/10.3390/w14213436.
Qu, Y., Y. Chen, J. Huang, and Y. Xie, 2019: Enhanced Pix2pix dehazing network. Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 8160-8168.
Rasp, S., P. D. Dueben, S. Scher, J. A. Weyn, S. Mouatadid, and N. Thuerey, 2020: WeatherBench: A benchmark data set for data-driven weather forecasting. J. Adv. Model. Earth Syst., 12, e2020MS002203, https://doi.org/10.1029/2020MS002203.
Reichstein, M., G. Camps-Valls, B. Stevens, M. Jung, J. Denzler, N. Carvalhais, and Prabhat, 2019: Deep learning and process understanding for data-driven Earth system science. Nature, 566, 195-204, https://doi.org/10.1038/s41586-019-0912-1.
Schultz, M. G., C. Betancourt, B. Gong, F. Kleinert, M. Langguth, L. H. Leufen, A. Mozaffari, and S. Stadtler, 2021: Can deep learning beat numerical weather prediction? Philos. Trans. R. Soc. A, 379, 20200097, https://doi.org/10.1098/rsta.2020.0097.
Shi, X., Z. Chen, H. Wang, D.-Y. Yeung, W.-K. Wong, and W.-C. Woo, 2015: Convolutional LSTM network: A machine learning approach for precipitation nowcasting. Proc. Adv. Neural Inf. Process. Syst., 28, 802-810.
Shi, X., Z. Gao, L. Lausen, H. Wang, D.-Y. Yeung, W.-K. Wong, and W.-C. Woo, 2017: Deep learning for precipitation nowcasting: A benchmark and a new model. Proc. Adv. Neural Inf. Process. Syst., 30, 5617-5627.
Tabari, H., 2020: Climate change impact on flood and extreme precipitation increases with water availability. Sci. Rep., 10, 13768, https://doi.org/10.1038/s41598-020-70816-2.
Trebing, K., T. Stańczyk, and S. Mehrkanoon, 2021: SmaAt-UNet: Precipitation nowcasting using a small attention-UNet architecture. Pattern Recognit. Lett., 145, 178-186, https://doi.org/10.1016/j.patrec.2021.01.036.
Trenberth, K. E., 2011: Changes in precipitation with climate change. Climate Res., 47, 123-138, https://doi.org/10.3354/cr00953.
Wei, C.-C., and C.-C. Hsu, 2021: Real-time rainfall forecasts based on radar reflectivity during typhoons: Case study in southeastern Taiwan. Sensors, 21, 1421, https://doi.org/10.3390/s21041421.
Westra, S., H. J. Fowler, J. P. Evans, L. V. Alexander, F. Johnson, E. J. Kendon, G. Lenderink, and N. M. Roberts, 2014: Future changes to the intensity and frequency of short-duration extreme rainfall. Rev. Geophys., 52, 522-555, https://doi.org/10.1002/2014RG000464.
Zoph, B., V. Vasudevan, J. Shlens, and Q. V. Le, 2018: Learning transferable architectures for scalable image recognition. Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 8697-8710, https://doi.org/10.1109/CVPR.2018.00907.